{ "paper_id": "D11-1011", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:32:16.964202Z" }, "title": "Class Label Enhancement via Related Instances", "authors": [ { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "", "affiliation": {}, "email": "kozareva@isi.edu" }, { "first": "Konstantin", "middle": [], "last": "Voevodski", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Shang-Hua", "middle": [], "last": "Teng", "suffix": "", "affiliation": {}, "email": "shanghua@usc.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Class-instance label propagation algorithms have been successfully used to fuse information from multiple sources in order to enrich a set of unlabeled instances with class labels. Yet, nobody has explored the relationships between the instances themselves to enhance an initial set of class-instance pairs. We propose two graph-theoretic methods (centrality and regularization), which start with a small set of labeled class-instance pairs and use the instance-instance network to extend the class labels to all instances in the network. We carry out a comparative study with state-of-the-art knowledge harvesting algorithm and show that our approach can learn additional class labels while maintaining high accuracy. We conduct a comparative study between class-instance and instance-instance graphs used to propagate the class labels and show that the latter one achieves higher accuracy.", "pdf_parse": { "paper_id": "D11-1011", "_pdf_hash": "", "abstract": [ { "text": "Class-instance label propagation algorithms have been successfully used to fuse information from multiple sources in order to enrich a set of unlabeled instances with class labels. Yet, nobody has explored the relationships between the instances themselves to enhance an initial set of class-instance pairs. We propose two graph-theoretic methods (centrality and regularization), which start with a small set of labeled class-instance pairs and use the instance-instance network to extend the class labels to all instances in the network. We carry out a comparative study with state-of-the-art knowledge harvesting algorithm and show that our approach can learn additional class labels while maintaining high accuracy. We conduct a comparative study between class-instance and instance-instance graphs used to propagate the class labels and show that the latter one achieves higher accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many natural language processing applications use and rely on semantic knowledge resources. Since manually built lexical repositories such as Word-Net (Fellbaum, 1998 ) cover a limited amount of knowledge and are tedious to maintain over time, researchers have developed algorithms for automatic knowledge extraction from structured and unstructured texts. There is a substantial body of work on extracting is-a relations (Etzioni et al., 2005; Kozareva et al., 2008) , part-of relations (Girju et al., 2003; Pantel and Pennacchiotti, 2006) and general facts (Lin and Pantel, 2001 ; Davidov and Rappoport, 2009; Jain and Pantel, 2010) . 
The usefulness of the generated resources has been shown to be valuable to information extraction (Riloff and Jones, 1999) , question answering (Katz et al., 2003) and textual entailment (Zanzotto et al., 2006) systems.", "cite_spans": [ { "start": 151, "end": 166, "text": "(Fellbaum, 1998", "ref_id": "BIBREF7" }, { "start": 422, "end": 444, "text": "(Etzioni et al., 2005;", "ref_id": "BIBREF6" }, { "start": 445, "end": 467, "text": "Kozareva et al., 2008)", "ref_id": "BIBREF15" }, { "start": 488, "end": 508, "text": "(Girju et al., 2003;", "ref_id": "BIBREF8" }, { "start": 509, "end": 540, "text": "Pantel and Pennacchiotti, 2006)", "ref_id": "BIBREF21" }, { "start": 559, "end": 580, "text": "(Lin and Pantel, 2001", "ref_id": "BIBREF17" }, { "start": 583, "end": 611, "text": "Davidov and Rappoport, 2009;", "ref_id": "BIBREF5" }, { "start": 612, "end": 634, "text": "Jain and Pantel, 2010)", "ref_id": "BIBREF11" }, { "start": 735, "end": 759, "text": "(Riloff and Jones, 1999)", "ref_id": "BIBREF23" }, { "start": 781, "end": 800, "text": "(Katz et al., 2003)", "ref_id": "BIBREF12" }, { "start": 824, "end": 847, "text": "(Zanzotto et al., 2006)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Among the most common knowledge acquisition approaches are those based on lexical patterns (Hearst, 1992; Etzioni et al., 2005; Kozareva et al., 2008) and clustering (Lin and Pantel, 2002; Davidov and Rappoport, 2008) . While clustering can find instances and classes that are not explicitly expressed in text, they often may not generate the granularity needed by the users. In contrast, pattern-based approaches generate highly accurate lists, but they are constraint to the information matched by the pattern and often suffer from recall. (Pa\u015fca, 2004; Snow et al., 2006; Kozareva and Hovy, 2010) have shown that complete lists of semantic classes and instances are valuable for the enrichment of existing resources like WordNet and for taxonomy induction. Therefore, researchers have focused on the development of methods that can automatically augment the initially extracted class-instance pairs. (Pennacchiotti and Pantel, 2009) fused information from pattern-based and distributional systems using an ensemble method and a rich set of features derived from query logs, web-crawl and Wikipedia. (Talukdar et al., 2008) improved class-instance extractions exploring the relationships between the classes and the instances to propagate the initial class-labels to the remaining unlabeled instances. Later on (Talukdar and Pereira, 2010) showed that class-instance extraction with label propagation can be further improved by adding semantic information in the form of instance-attribute edges derived from independently developed knowledge base. Similarly to (Talukdar et al., 2008) and (Talukdar and Pereira, 2010) , we are interested in enriching class-instance extractions with label propagation. However, unlike the previous work, we model the relationships between the instances themselves to propagate the initial set of class labels to the remaining unlabeled instances. 
To our knowledge, this is the first work to explore the connections between instances for the task of class-label propagation.", "cite_spans": [ { "start": 91, "end": 105, "text": "(Hearst, 1992;", "ref_id": "BIBREF9" }, { "start": 106, "end": 127, "text": "Etzioni et al., 2005;", "ref_id": "BIBREF6" }, { "start": 128, "end": 150, "text": "Kozareva et al., 2008)", "ref_id": "BIBREF15" }, { "start": 166, "end": 188, "text": "(Lin and Pantel, 2002;", "ref_id": "BIBREF18" }, { "start": 189, "end": 217, "text": "Davidov and Rappoport, 2008)", "ref_id": "BIBREF4" }, { "start": 542, "end": 555, "text": "(Pa\u015fca, 2004;", "ref_id": "BIBREF19" }, { "start": 556, "end": 574, "text": "Snow et al., 2006;", "ref_id": "BIBREF26" }, { "start": 575, "end": 599, "text": "Kozareva and Hovy, 2010)", "ref_id": "BIBREF14" }, { "start": 903, "end": 935, "text": "(Pennacchiotti and Pantel, 2009)", "ref_id": "BIBREF22" }, { "start": 1102, "end": 1125, "text": "(Talukdar et al., 2008)", "ref_id": "BIBREF29" }, { "start": 1313, "end": 1341, "text": "(Talukdar and Pereira, 2010)", "ref_id": "BIBREF28" }, { "start": 1564, "end": 1587, "text": "(Talukdar et al., 2008)", "ref_id": "BIBREF29" }, { "start": 1592, "end": 1620, "text": "(Talukdar and Pereira, 2010)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work addresses the following question: Is it possible to effectively explore the structure of the text-mined instance-instance networks to enhance an incomplete set of class labels? Our intuition is that if an instance like bear belongs to a semantic class carnivore, and the instance bear is connected to the instance fox, then it is more likely that the unlabeled instance fox is also of class carnivore. To solve this problem, we propose two graph-based approaches that use the structure of the instanceinstance graph to propagate the class labels. Our methods are agnostic to the sources of semantic instances and classes. In this work, we carried out experiments with a state-of-the-art instance extraction system and conducted a comparative study between the original and the enhanced class-instance pairs. The results show that this labeling procedure can begin to bridge the gap between the extraction power of the pattern-based approaches and the desired recall by finding class-instance pairs that are not explicitly mentioned in text. The contributions of the paper are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We use only the relationships between the instances themselves to propagate class labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We observe how often labels are propagated along the edges of our semantic network, and propose two ways to extend an initial set of class labels to all the instance nodes in the network. The first approach uses a linear system to compute the network centrality relative to the initially labeled instances. 
The second approach uses a regularization framework with respect to a random walk on the network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We evaluate the proposed approaches and show that they discover many new class-instance pairs compared to state-of-the-art knowledge harvesting algorithm, while still maintaining high accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct a comparative study between classinstance and instance-instance graphs used to propagate class labels. The experiments show that considering relationships between instances achieves higher accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. In Section 2, we review related work. Section 3 describes the Web-based knowledge harvesting algorithm used to extract the instance network and the class-instance pairs necessary for our experimental evaluation. Section 4 describes the two graphtheoretic methods for class label propagation using an instance-instance network. Section 5 shows a comparative study between the proposed graph algorithms and different baselines. We also show a comparison between class-instance and instanceinstance graphs used in the label propagation. Finally, we conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the past decade, we have reached a good understanding on the knowledge harvesting technology from structured (Suchanek et al., 2007) and unstructured text. Researchers have harvested with varying success semantic lexicons (Riloff and Shepherd, 1997) and concept lists (Katz et al., 2003) . Many efforts have also focused on the extraction of is-a relations (Hearst, 1992; Pa\u015fca, 2004; Etzioni et al., 2005; Pa\u015fca, 2007; Kozareva et al., 2008) , part-of relations (Girju et al., 2003; Pantel and Pennacchiotti, 2006) and general facts (Etzioni et al., 2005; Davidov and Rappoport, 2009; Jain and Pantel, 2010) . Various approaches have been proposed following the patterns of (Hearst, 1992) and clustering (Lin and Pantel, 2002; Davidov and Rappoport, 2008) . 
A substantial body of work has explored issues such as reranking the harvested knowledge using mutual information (Etzioni et al., 2005) and graph algorithms (Hovy et al., 2009) , estimating the goodness of textmining seeds (Vyas et al., 2009) , organizing the extracted information (Cafarella et al., 2007a; Cafarella et al., 2007b) and inducing term taxonomies with WordNet (Snow et al., 2006) or starting from scratch (Kozareva and Hovy, 2010) .", "cite_spans": [ { "start": 112, "end": 135, "text": "(Suchanek et al., 2007)", "ref_id": "BIBREF27" }, { "start": 225, "end": 252, "text": "(Riloff and Shepherd, 1997)", "ref_id": "BIBREF24" }, { "start": 271, "end": 290, "text": "(Katz et al., 2003)", "ref_id": "BIBREF12" }, { "start": 360, "end": 374, "text": "(Hearst, 1992;", "ref_id": "BIBREF9" }, { "start": 375, "end": 387, "text": "Pa\u015fca, 2004;", "ref_id": "BIBREF19" }, { "start": 388, "end": 409, "text": "Etzioni et al., 2005;", "ref_id": "BIBREF6" }, { "start": 410, "end": 422, "text": "Pa\u015fca, 2007;", "ref_id": "BIBREF20" }, { "start": 423, "end": 445, "text": "Kozareva et al., 2008)", "ref_id": "BIBREF15" }, { "start": 466, "end": 486, "text": "(Girju et al., 2003;", "ref_id": "BIBREF8" }, { "start": 487, "end": 518, "text": "Pantel and Pennacchiotti, 2006)", "ref_id": "BIBREF21" }, { "start": 537, "end": 559, "text": "(Etzioni et al., 2005;", "ref_id": "BIBREF6" }, { "start": 560, "end": 588, "text": "Davidov and Rappoport, 2009;", "ref_id": "BIBREF5" }, { "start": 589, "end": 611, "text": "Jain and Pantel, 2010)", "ref_id": "BIBREF11" }, { "start": 678, "end": 692, "text": "(Hearst, 1992)", "ref_id": "BIBREF9" }, { "start": 708, "end": 730, "text": "(Lin and Pantel, 2002;", "ref_id": "BIBREF18" }, { "start": 731, "end": 759, "text": "Davidov and Rappoport, 2008)", "ref_id": "BIBREF4" }, { "start": 876, "end": 898, "text": "(Etzioni et al., 2005)", "ref_id": "BIBREF6" }, { "start": 920, "end": 939, "text": "(Hovy et al., 2009)", "ref_id": "BIBREF10" }, { "start": 986, "end": 1005, "text": "(Vyas et al., 2009)", "ref_id": "BIBREF30" }, { "start": 1045, "end": 1070, "text": "(Cafarella et al., 2007a;", "ref_id": "BIBREF2" }, { "start": 1071, "end": 1095, "text": "Cafarella et al., 2007b)", "ref_id": "BIBREF3" }, { "start": 1138, "end": 1157, "text": "(Snow et al., 2006)", "ref_id": "BIBREF26" }, { "start": 1183, "end": 1208, "text": "(Kozareva and Hovy, 2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Since pattern-based approaches tend to be highprecision and low-recall in nature, recently of great interest to the research community is the development of approaches that can increment the recall of the harvested class-instance pairs. (Pennacchiotti and Pantel, 2009) proposed an ensemble semantic framework that mixes distributional and patternbased systems with a large set of features from a web-crawl, query logs, and Wikipedia. (Talukdar et al., 2008) combined extractions from free text and structured sources using graph-based label propagation algorithm. 
(Talukdar and Pereira, 2010) conducted a comparative study of graph algorithms and showed that class-instance extraction can be improved using additional information that can be modeled as instance-attribute edges.", "cite_spans": [ { "start": 237, "end": 269, "text": "(Pennacchiotti and Pantel, 2009)", "ref_id": "BIBREF22" }, { "start": 435, "end": 458, "text": "(Talukdar et al., 2008)", "ref_id": "BIBREF29" }, { "start": 565, "end": 593, "text": "(Talukdar and Pereira, 2010)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Closest to our work is that of (Talukdar et al., 2008; Talukdar and Pereira, 2010) who model classinstance relations to propagate class-labels. Although these algorithms can be applied to other relations (Alfonseca et al., 2010) , to our knowledge yet nobody has modeled the connections between the instances themselves for the task of class-label propagation. We propose regularization and centrality graph-theoretic methods, which exploit the instanceinstance network and a small set of class-instance pairs to propagate the class-labels to the remaining unlabeled instances. While objectives similar to regularization have been used for class-label propagation, the application of node centrality for this task is also novel. The proposed solutions are intuitive and almost parameter-free (both methods have a single parameter, which is easy to interpret and does not require careful tuning).", "cite_spans": [ { "start": 31, "end": 54, "text": "(Talukdar et al., 2008;", "ref_id": "BIBREF29" }, { "start": 55, "end": 82, "text": "Talukdar and Pereira, 2010)", "ref_id": "BIBREF28" }, { "start": 204, "end": 228, "text": "(Alfonseca et al., 2010)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our proposed class-label enhancement approaches are agnostic to the sources of semantic instances and classes. Several methods have been developed to harvest instances from the Web (Pa\u015fca, 2004; Etzioni et al., 2005; Pa\u015fca, 2007; Kozareva et al., 2008 ) and potentially we can use any of them.", "cite_spans": [ { "start": 181, "end": 194, "text": "(Pa\u015fca, 2004;", "ref_id": "BIBREF19" }, { "start": 195, "end": 216, "text": "Etzioni et al., 2005;", "ref_id": "BIBREF6" }, { "start": 217, "end": 229, "text": "Pa\u015fca, 2007;", "ref_id": "BIBREF20" }, { "start": 230, "end": 251, "text": "Kozareva et al., 2008", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Harvesting from the Web", "sec_num": "3" }, { "text": "In our experiments, we use the doubly-anchored (DAP) method of (Kozareva et al., 2008) , because it achieves higher precision than (Etzioni et al., 2005; Pa\u015fca, 2007) , it is easy to implement and requires minimum supervision (only one seed instance and a lexico-syntactic pattern).", "cite_spans": [ { "start": 63, "end": 86, "text": "(Kozareva et al., 2008)", "ref_id": "BIBREF15" }, { "start": 131, "end": 153, "text": "(Etzioni et al., 2005;", "ref_id": "BIBREF6" }, { "start": 154, "end": 166, "text": "Pa\u015fca, 2007)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Harvesting from the Web", "sec_num": "3" }, { "text": "For a given semantic class of interest say animals, the algorithm starts with a seed example of the class, say whales. 
The seed instance is fed into a doubly-anchored pattern \" such as and *\", which extracts on the position of the * new instances of the semantic class. Then, the newly acquired instances are individually placed on the position of the seed in the DAP pattern. The bootstrapping procedure is repeated until no new instances are found. We use the harvested instances to build the instance-instance graph in which the nodes are the learned instances and directed edges like (whales,dolphins) indicate that the instance whales extracted the instance dolphins. The edges between the instances are weighted based on the number of times the DAP pattern extracted the instances together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Harvesting from the Web", "sec_num": "3" }, { "text": "Different strategies can be employed to acquire semantic classes for each instance. We follow the fully automated approach of (Hovy et al., 2009) , which takes the learned instance pairs from DAP and feeds them into the pattern \"* such as and \". The algorithm extracts on the position of the * new semantic classes related to instance 1 . According to (Hovy et al., 2009) , the usage of two instances acts as a disambiguator and leads to much more accurate semantic class extraction compared to (Ritter et al., 2009) .", "cite_spans": [ { "start": 126, "end": 145, "text": "(Hovy et al., 2009)", "ref_id": "BIBREF10" }, { "start": 379, "end": 398, "text": "(Hovy et al., 2009)", "ref_id": "BIBREF10" }, { "start": 522, "end": 543, "text": "(Ritter et al., 2009)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Harvesting from the Web", "sec_num": "3" }, { "text": "We model the output of the instance harvesting algorithm as a directed weighted graph that is given by a set of vertices V and a set of edges E. We use n to denote the number of vertices. A node u corresponds to a learned instance, and an edge (u, v) \u2208 E indicates that the instance v was learned from the instance u using the DAP pattern. The weight of the edge w(u, v) specifies the number of times the pair of instances were found by the DAP pattern. We define the adjacency matrix of the graph as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "A(u, v) = w(u, v) if (u, v) \u2208 E 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "We use d out (u) to specify the out-degree of u:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "d out (u) = (u,v)\u2208E w(u, v), and d in (v) to specify the in-degree of v: d in (v) = (u,v)\u2208E w(u, v).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "We represent the initial set of instances L that are believed to belong to class C (the set of labeled instances) by a row vector l \u2208 {0, 1} n , where l(u) = 1 if u \u2208 L. Our objective is to compute a vectorl wherel(u) is proportional to how likely it is that u belongs to C. We write all vectors as row vectors, and use c to denote a 1 by n constant vector such that c(u) = c for all u \u2208 V .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "Our first approach is based on the intuition that if u \u2208 C and (u, v) \u2208 E, then it is more likely that v \u2208 C. 
Moreover, the larger the weight of the edge w(u, v), the more likely it is that v \u2208 C. When we extend this intuition to all the in-neighbors, we say that the score of each node is proportional to the sum of the scores of its in-neighbors scaled by the edge weights:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "l(v) = \u03b1 (u,v)\u2208El (u)w(u, v)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": ". We can verify that the vectorl must then satisfyl = \u03b1lA, so it is an eigenvector of the adjacency matrix of the graph with an eigenvalue of \u03b1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "However, this formulation is insufficient because even though it captures our intuition that the nodes get their scores from their in-neighbors, we are still ignoring the initial scores of the nodes. A way to take the initial scores into consideration is to compute the following steady-state equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "l = l + \u03b1 \u2022lA.", "eq_num": "(1)" } ], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "Equation 1 specifies that the scorel(u) of each node u is the sum of its initial score l(u) and the weighted sum of the scores of its neighbors, which is scaled by \u03b1. This equation is known as \u03b1-centrality, which was first introduced by (Bonacich and Lloyd, 2001 ). The \u03b1 parameter controls how much the score of each node depends on the scores of its neighbors. When \u03b1 = 0 the score of each node is equivalent to its initial score, and does not depend on the scores of its neighbors at all. Alternately, we can think of the vectorl as the fixed-point of the process in which in each iteration some node v updates its scorel(", "cite_spans": [ { "start": 237, "end": 262, "text": "(Bonacich and Lloyd, 2001", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "v) by settingl(v) = l(v) + \u03b1 (u,v)\u2208E w(u, v)l(u).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "Solving Equation 1 we can see thatl = l(I \u2212 \u03b1A) \u22121 , where I is the identity matrix of size n. The solution is also closely related to the following expression, which is known as a Katz score (Katz, 1953) :", "cite_spans": [ { "start": 192, "end": 204, "text": "(Katz, 1953)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "s \u221e t=1 \u03b1 t A t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "We can verify that A t (u, v) gives the number of paths of length t between u and v. Katz proposed using the above expression with the starting vector s = 1 to measure centrality in a network. Therefore, the score of node v is given by the number of paths from u to v for all u \u2208 V , with longer paths given less weight based on the value of \u03b1. 
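In practice the personalized centrality scores can be obtained directly from the linear system l-hat = l(I - alpha*A)^{-1}. Below is a minimal sketch of this computation with numpy/scipy on a toy instance network; the instance names, edge counts, and seed set are purely illustrative, and alpha is simply set to half of its maximum admissible value 1/lambda_max.

```python
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import spsolve

# Illustrative DAP extractions: (source instance, extracted instance, count).
triples = [("whales", "dolphins", 12), ("dolphins", "whales", 5),
           ("dolphins", "sharks", 7), ("bears", "wolves", 9),
           ("wolves", "bears", 6), ("wolves", "foxes", 4)]

nodes = sorted({u for u, _, _ in triples} | {v for _, v, _ in triples})
idx = {name: i for i, name in enumerate(nodes)}
n = len(nodes)

# Weighted adjacency matrix A(u, v) = w(u, v).
rows = [idx[u] for u, _, _ in triples]
cols = [idx[v] for _, v, _ in triples]
A = csr_matrix(([float(w) for _, _, w in triples], (rows, cols)), shape=(n, n))

# Initial label vector l for one class; the seed set is illustrative.
labeled = {"bears", "wolves"}
l = np.zeros(n)
for name in labeled:
    l[idx[name]] = 1.0

# The series only converges for |alpha| < 1 / lambda_max.
lam_max = np.abs(np.linalg.eigvals(A.toarray())).max()
alpha = 0.5 / lam_max

# Solve l_hat = l (I - alpha*A)^{-1}, i.e. (I - alpha*A)^T l_hat^T = l^T.
M = identity(n, format="csr") - alpha * A
l_hat = spsolve(M.T.tocsr(), l)

# Unlabeled instances ranked by how likely they are to carry the class label.
print(sorted((x for x in nodes if x not in labeled), key=lambda x: -l_hat[idx[x]]))
```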
The method proposed here measures a similar quantity with a non-uniform starting vector. To show the relationship between the two measures we use the identity", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "that \u221e t=1 \u03b1 t A t = (I \u2212 \u03b1A) \u22121 \u2212 I. It is easy to see thatl = l(I \u2212 \u03b1A) \u22121 = l( \u221e t=1 \u03b1 t A t + I) = l \u221e t=1 \u03b1 t A t + l = l \u221e t=0 \u03b1 t A t .", "eq_num": "(2)" } ], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "Equation 2 shows thatl(v) is given by the number of paths from u to v for all u \u2208 L (the initial labeled set). Using a larger value of \u03b1 corresponds to giving more weight to paths of longer length. The summation \u221e t=0 \u03b1 t A t converges as long as |\u03b1| < 1/\u03bb max , where \u03bb max is the largest eigenvalue of A. Therefore, we can only consider values of \u03b1 in this range.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personalized Centrality", "sec_num": "4.1" }, { "text": "Our second approach constrainsl to be as consistent or smooth as possible with respect to the structure of the graph. The simplest way to express this is to require that for each edge (u, v) \u2208 E, the scores of the endpointsl(u) andl(v) must be as similar as possible. Moreover, the greater the weight of the edge w(u, v) the more important it is for the scores to match. Using this intuition we can define the following optimization problem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regularization Using Random Walks", "sec_num": "4.2" }, { "text": "argminl \u2208{0,1} n (u,v)\u2208E (l(u) \u2212l(v)) 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regularization Using Random Walks", "sec_num": "4.2" }, { "text": "Settingl = 0 orl = 1 clearly optimizes this function, but does not give a meaningful solution. However, we can additionally constrainl by requiring that the initial labels cannot be modified, or more generally penalizing the discrepancy betweenl(u) and l(u) for u \u2208 L. The methods of (Talukdar and Pereira, 2010) optimize objective functions of this type.", "cite_spans": [ { "start": 284, "end": 312, "text": "(Talukdar and Pereira, 2010)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Regularization Using Random Walks", "sec_num": "4.2" }, { "text": "Unlike the work of (Talukdar and Pereira, 2010), here we use an objective function that considers smoothness with respect to a random walk on the graph. Performing a random walk allows us to take more of the graph structure into account. For example, if nodes u and v are part of the same cluster then it is likely that the edge (u, v) is heavily traversed during the random walk, and should have a lot of probability in the stationary distribution of the walk. Simply considering the weight of the edge w (u, v) gives us no such information. 
Therefore if our objective function requires the scores to be consistent with respect to the stationary probability of the edges in the random walk, we can compute scores that are consistent with the clustering structure of the graph.", "cite_spans": [ { "start": 506, "end": 512, "text": "(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Regularization Using Random Walks", "sec_num": "4.2" }, { "text": "Our semantic network is not strongly connected, so we must make some modifications to the random walk to ensure that it has a stationary distribution. Section 4.2.1 describes our random walk and how we compute the transition probability matrix P and its stationary probability distribution \u03c0. The definition of our objective function and the description of how it is optimized is given in Section 4.2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regularization Using Random Walks", "sec_num": "4.2" }, { "text": "Formally, a random walk is a process where at each step we move from some node to one of its neighbors. The transition probabilities are given by edge weights, therefore the transition probability matrix W is the normalized adjacency matrix where each row sums to one:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teleporting Random Walk", "sec_num": "4.2.1" }, { "text": "W = D \u22121 A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teleporting Random Walk", "sec_num": "4.2.1" }, { "text": "Here the D matrix is the degree matrix, which is a diagonal matrix given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teleporting Random Walk", "sec_num": "4.2.1" }, { "text": "D(u, v) = d out (u) if u = v 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teleporting Random Walk", "sec_num": "4.2.1" }, { "text": "In our semantic network some nodes have no outneighbors, so in order to compute W we first add a self-loop to any such node. In addition, we modify the random walk to reset at each step with nonzero probability \u03b2 to ensure that it has a steady-state probability distribution. When the walk resets it jumps or teleports to any node in the graph with equal probability. The transition probability matrix of this process is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teleporting Random Walk", "sec_num": "4.2.1" }, { "text": "P = \u03b2K + (1 \u2212 \u03b2)W,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teleporting Random Walk", "sec_num": "4.2.1" }, { "text": "where K is an n by n matrix given by K(u, v) = 1 n for all u, v \u2208 V . The stationary distribution \u03c0 must satisfy \u03c0 = \u03c0P . Equivalently \u03c0 can be viewed as a solution to the following PageRank equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teleporting Random Walk", "sec_num": "4.2.1" }, { "text": "\u03c0 = \u03b2s + (1 \u2212 \u03b2)\u03c0W.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teleporting Random Walk", "sec_num": "4.2.1" }, { "text": "Here the starting vector s = 1 n 1 gives the probability distribution for where the walk transitions when it resets. In our computations we use a jump probability \u03b2 = 0.15, which is standard for computations of PageRank. The stationary distribution \u03c0 can be computed by either solving the PageRank equation or computing the eigenvector of P corresponding to the eigenvalue of 1. 
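A small sketch of this construction is given below, using dense numpy arrays for readability (a real instance network would call for sparse matrices); the function name and the power-iteration stopping criteria are choices of the sketch rather than details taken from the paper.

```python
import numpy as np

def teleporting_walk(A, beta=0.15, tol=1e-10, max_iter=1000):
    """Transition matrix P and stationary distribution pi of the teleporting walk.

    A is the weighted adjacency matrix (n x n) and beta is the reset
    (teleport) probability.  Dense arrays are used here only for clarity.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]

    # Nodes with no out-neighbors receive a self-loop before normalization.
    dangling = np.where(A.sum(axis=1) == 0)[0]
    A[dangling, dangling] = 1.0

    W = A / A.sum(axis=1, keepdims=True)   # W = D^{-1} A
    P = beta / n + (1.0 - beta) * W        # P = beta*K + (1-beta)*W, with K(u, v) = 1/n

    pi = np.full(n, 1.0 / n)               # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P                       # one step of pi = pi P
        done = np.abs(nxt - pi).sum() < tol
        pi = nxt
        if done:
            break
    return P, pi
```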
(Zhou et al., 2005) propose the following function to measure the smoothness ofl with respect to the stationary distribution of the random walk:", "cite_spans": [ { "start": 379, "end": 398, "text": "(Zhou et al., 2005)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Teleporting Random Walk", "sec_num": "4.2.1" }, { "text": "\u2126(l) = 1 2 (u,v)\u2208E \u03c0(u)P (u, v) l (u) \u03c0(u) \u2212l (v) \u03c0(v) 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regularization", "sec_num": "4.2.2" }, { "text": "Here \u03c0(u)P (u, v) gives the steady-state probability of traversing the edge (u, v), and \u03c0(u) and \u03c0(v) specify how much probability u and v have in the stationary distribution \u03c0. Zhou et al. point out that using this function gives better results than smoothness with respect to the edge weights, which can be formulated by replacing \u03c0(u)p(u, v) with w(u, v), and replacing \u03c0(u) and \u03c0(v) with d out (u) and d in (v), respectively. This observation is consistent with our intuition that considering a random walk takes more of the graph structure into account.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regularization", "sec_num": "4.2.2" }, { "text": "In addition to minimizing \u2126(l), we also wantl to be as close as possible to l, which gives the following optimization problem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regularization", "sec_num": "4.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "argminl \u2208R n {\u2126(\u0177) + \u00b5||l \u2212 l|| 2 }.", "eq_num": "(3)" } ], "section": "Regularization", "sec_num": "4.2.2" }, { "text": "Here the \u00b5 > 0 parameter specifies the tradeoff between the two terms: using a larger \u00b5 corresponds to placing more emphasis on agreement with the initial labels. (Zhou et al., 2005) show that this objective is optimized by computin\u011d", "cite_spans": [ { "start": 163, "end": 182, "text": "(Zhou et al., 2005)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Regularization", "sec_num": "4.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "l = (I \u2212 \u03b3\u0398) \u22121 l,", "eq_num": "(4)" } ], "section": "Regularization", "sec_num": "4.2.2" }, { "text": "where \u0398 = (\u03a0 1/2 P \u03a0 \u22121/2 + \u03a0 \u22121/2 P \u03a0 1/2 )/2, and \u03b3 = 1/(1 + \u00b5). \u03a0 is a diagonal matrix given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regularization", "sec_num": "4.2.2" }, { "text": "\u03a0(u, v) = \u03c0(u) if u = v 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regularization", "sec_num": "4.2.2" }, { "text": "Zhou et al. propose this approach for semisupervised learning of labels on the graph, given an initial vector l such that l(u) = 1 if vertex u has the label, l(u) = \u22121 if u does not have the label, and l(u) = 0 if the vertex is unlabeled. They propose taking the sign ofl(u) to classify u as positive or negative. Using our labeling procedure we do not have any negative examples, so our initial vector l is non-negative, resulting in a non-negative vectorl. This is not a problem because we can still interpret l(u) to be proportional to how likely it is that u has the label. 
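Given P and pi from the random walk above, the closed form in Equation 4 amounts to a single linear solve. The sketch below assumes the symmetrized operator of Zhou et al. (2005), in which the second term uses the transpose of P (making Theta symmetric); everything else follows the definitions above, again with dense arrays for readability.

```python
import numpy as np

def regularization_scores(P, pi, l, gamma=0.9):
    """Sketch of Equation 4: l_hat = (I - gamma * Theta)^{-1} l.

    P     : transition matrix of the teleporting random walk (n x n)
    pi    : its stationary distribution (length n)
    l     : initial 0/1 label vector for one class (length n)
    gamma : 1 / (1 + mu); smaller values keep l_hat closer to l
    """
    n = len(pi)
    Pi_half = np.diag(np.sqrt(pi))              # Pi^{1/2}
    Pi_half_inv = np.diag(1.0 / np.sqrt(pi))    # Pi^{-1/2}

    # Symmetrized operator following Zhou et al. (2005); the transpose of P
    # in the second term makes Theta symmetric.
    Theta = 0.5 * (Pi_half @ P @ Pi_half_inv + Pi_half_inv @ P.T @ Pi_half)

    # Theta is symmetric, so the row/column-vector distinction is immaterial.
    return np.linalg.solve(np.eye(n) - gamma * Theta, l)
```

Instances are then ranked by l_hat for each class label, exactly as with the centrality scores.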
Rather than trying different settings of \u00b5, we directly vary \u03b3, with a smaller \u03b3 placing more emphasis on agreement with initial labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regularization", "sec_num": "4.2.2" }, { "text": "For our experimental study, we select three widely used domains in the harvesting community (Etzioni et al., 2005; Pa\u015fca, 2007; Hovy et al., 2009; Kozareva and Hovy, 2010) : animals and vehicles. For each domain we randomly selected different semantic classes, which resulted in 20 classes altogether. To generate the instance-instance semantic network, we use the harvesting procedure described in Section 3. For example, to learn instances associated with animals, we instantiate the bootstrapping algorithm with the semantic class animals, the seed instance bears and the pattern \"animals such as bears and *\". We submitted the pattern as queries to Yahoo!Boss and collected new instances. We ranked the instances following (Kozareva et al., 2008) which resulted in 397 animal, 4471 plant and 1425 vehicle instances. Table 1 shows the number of nodes (instances) and directed edges for the constructed semantic networks. Next, we use the harvested instances to automatically learn the semantic classes associated with them. For example, bears and wolves are animals but also mammals, predators, vertebrates among others. The obtained class harvesting results are shown in Table 2 . We indicate with Inst (Hovy et al., 2009) the number of instances in the semantic network that discovered the class during the patternbased harvesting, and with InstInWordNet the number of instances in the semantic network belonging to the class according to WordNet.", "cite_spans": [ { "start": 92, "end": 114, "text": "(Etzioni et al., 2005;", "ref_id": "BIBREF6" }, { "start": 115, "end": 127, "text": "Pa\u015fca, 2007;", "ref_id": "BIBREF20" }, { "start": 128, "end": 146, "text": "Hovy et al., 2009;", "ref_id": "BIBREF10" }, { "start": 147, "end": 171, "text": "Kozareva and Hovy, 2010)", "ref_id": "BIBREF14" }, { "start": 727, "end": 750, "text": "(Kozareva et al., 2008)", "ref_id": "BIBREF15" }, { "start": 1207, "end": 1226, "text": "(Hovy et al., 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 820, "end": 827, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1175, "end": 1182, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Data Collection", "sec_num": "5.1" }, { "text": "Inst (Hovy et al., 2009 We can see that the pattern-based approach of (Hovy et al., 2009) does not recover a lot of the class-instance relations present in WordNet. Because of this gap between the actual and the harvested class-instance pairs arises the objective of our work, which is to explore the relationships between the instances to propagate the initially learned class labels to the remaining unlabeled instances. To evaluate the performance of our approach, we use as a gold standard the WordNet class-instance mappings.", "cite_spans": [ { "start": 5, "end": 23, "text": "(Hovy et al., 2009", "ref_id": "BIBREF10" }, { "start": 70, "end": 89, "text": "(Hovy et al., 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "ClassName", "sec_num": null }, { "text": "Our approach is based on the intuition that given a labeled instance u of class C, and an instance v in our network, if there is an edge (u, v) then it is more likely that v has the label C as well. 
For example, if the instance bears is of class vertebrates and there is an edge between the instances bears and wolves, then it is likely that wolves are also vertebrates. Before proceeding with the instance-instance classlabel propagation algorithms, first we study whether this intuition is correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing Our Approach", "sec_num": "5.2" }, { "text": "Individually for each class label C, we construct a set T C that contains all instances in the network belonging to C according to WordNet. Then we compute the probability that v belongs to C in WordNet given that (u, v) is an edge in the instance network and u belongs to C in WordNet:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing Our Approach", "sec_num": "5.2" }, { "text": "P r h = Pr[v \u2208 T C | (u, v) \u2208 E and u \u2208 T C ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing Our Approach", "sec_num": "5.2" }, { "text": "We compare this to the background probability P r b = Pr[v \u2208 T C | u, v \u2208 V and u \u2208 T C ], which gives the probability that v belongs to C in WordNet if it is chosen at random. In other words, if P r h = 1, this means that whenever u has the label C and (u, v) is an edge, then v is always labeled with C. If indeed this is the case, then a good classifier can simply take the initial set L and extend the labels to all nodes reachable from L in the semantic network. The larger the difference between P r h and P r b , the more information the links of the instance network carry for the task of label propagation. Table 3 shows the P r h and P r b values for each class. This study verifies our intuition that using the relationships between the instances to extend a class label to the remaining unlabeled nodes is an effective approach to enhancing an incomplete set of initial labels.", "cite_spans": [], "ref_spans": [ { "start": 616, "end": 623, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Testing Our Approach", "sec_num": "5.2" }, { "text": "The objective of our work is given a set of initially labeled nodes L, to assign to each node a score that indicates how likely it is to belong to L. The simplest way to do this using the edges of the instance network is to say that a node that has more in-neighbors that have a certain label is more likely to have this label. We define the in-neighbor score", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Study", "sec_num": "5.3" }, { "text": "i(v) of a node v as i(v) = |{u \u2208 V |(u, v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Study", "sec_num": "5.3" }, { "text": ") \u2208 E and u \u2208 L}|. We expect that the higher the inneighbor score of v, the more likely it is that v has the label L. The personalized centrality method that we proposed generalizes this intuition to indirect neighbors (see Methods). Our regularization using random walks technique further explores the link structure of the instance network by considering a random walk on it (see Methods). We compare our approaches with a method that labels nodes at random. 
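Both the edge-support probabilities Pr_h and Pr_b and the in-neighbor baseline i(v) are straightforward to compute from the edge list. A sketch follows, assuming the network is available as a set of (u, v) instance pairs and WordNet membership as a set of instances per class; all names are illustrative.

```python
def edge_support(edges, in_wordnet):
    """Pr_h = P(v in T_C | (u, v) in E and u in T_C) and the background Pr_b.

    edges      : iterable of (u, v) instance pairs from the DAP network
    in_wordnet : set T_C of network instances labeled with class C in WordNet
    """
    nodes = {u for u, v in edges} | {v for u, v in edges}
    from_labeled = [(u, v) for u, v in edges if u in in_wordnet]
    pr_h = (sum(v in in_wordnet for _, v in from_labeled) / len(from_labeled)
            if from_labeled else 0.0)
    pr_b = len(nodes & in_wordnet) / len(nodes)
    return pr_h, pr_b


def in_neighbor_scores(edges, labeled):
    """Baseline score i(v): number of initially labeled in-neighbors of v."""
    scores = {}
    for u, v in edges:
        if u in labeled:
            scores[v] = scores.get(v, 0) + 1
    return scores
```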
The expected accuracy for class C is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Study", "sec_num": "5.3" }, { "text": "|T C |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Study", "sec_num": "5.3" }, { "text": "n , where n is the number of nodes in the network, and T C is the set containing all nodes that belong to C according to WordNet. In other words, given that there are 84 nodes in the network that are classified as invertebrate according to WordNet, and there are 397 nodes in total, if we choose any number of nodes at random our expected accuracy is 21%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Study", "sec_num": "5.3" }, { "text": "We evaluate the performance of our approaches against the WordNet gold standard and show the obtained results in Tables 4 and 5 Table 4 : Accuracy @ Different Ranks. Table 4 shows the accuracy at rank R calculated as the number of correctly labeled instances with class C at rank R divided by the total number of instances with class C at rank R. Due to space limitation, we show detailed ranking only for three of the classes. We can see that using the semantic network significantly enhances our ability to learn class labels. Even the simple in-neighbor method produces results that are very significant compared to chance. Our centrality and regularization techniques further explore the structure of the semantic network to give better predictions. Table 5 shows the accuracy of the class label propagation algorithms for each class. For each class we consider the top k ranked nodes, where k is the number of instances that belong to this class according to WordNet. For example, the accuracy of centrality for carnivores is 80% showing that from the top 57 ranked animal instances, 80% belong to carnivores. In the final column we also report the performance of a label propagation algorithm that uses class-instance graph instead of an instance-instance graph. To build the graph we remove the edges between the instances and keep the class-instance mappings discovered by the harvesting algorithm of (Hovy et al., 2009) . We use the modified adsorption algorithm (MAD) of (Talukdar et al., 2008) , which is freely available from the Junto toolkit 1 . To rank the instances for each class label produced by Junto, we use the computed label scores as a ranking criteria and measure accuracy similarly to centrality and regularization. The obtained results show that for almost all cases the methods that use the structure of the instance network significantly outperform predictions that use the class-instance graph. This indicates that we can indeed learn a lot form the instance-instance relationships by exploring the structure of the instance network. Among all approaches regularization achieves the best results. We believe that regularization works well because it considers a random walk on the semantic graph, and within-cluster 1 http://code.google.com/p/junto/ edges are traversed more often in a random walk. The regularization technique computes scores that are consistent with the clustering structure of the graph by requiring that the endpoints of highly traversed edges, which are likely in the same cluster, have similar scores (see Methods). 
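The accuracy figures reported in Tables 4 and 5 can be reproduced from any of the ranked score vectors with a few lines; a sketch is given below, where ranked is a list of instances sorted by descending score and gold is the WordNet set T_C (for Table 5 the cutoff R is set to |T_C|).

```python
def accuracy_at_rank(ranked, gold, R):
    """Fraction of the top-R ranked instances that belong to the gold set."""
    top = ranked[:R]
    return sum(inst in gold for inst in top) / len(top)

# Table 5 style evaluation: take the top |T_C| instances for class C.
# accuracy = accuracy_at_rank(ranked, gold, R=len(gold))
```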
Overall, regularization enhanced the original output generated by the pattern-based knowledge harvesting approach of (Hovy et al., 2009) with 1219 new class-instance pairs (75% additional information) while maintaining 61.87% accuracy. ", "cite_spans": [ { "start": 1409, "end": 1428, "text": "(Hovy et al., 2009)", "ref_id": "BIBREF10" }, { "start": 1481, "end": 1504, "text": "(Talukdar et al., 2008)", "ref_id": "BIBREF29" }, { "start": 2686, "end": 2705, "text": "(Hovy et al., 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 113, "end": 127, "text": "Tables 4 and 5", "ref_id": "TABREF8" }, { "start": 128, "end": 135, "text": "Table 4", "ref_id": null }, { "start": 166, "end": 173, "text": "Table 4", "ref_id": null }, { "start": 754, "end": 761, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Comparative Study", "sec_num": "5.3" }, { "text": "Both of our centrality and regularization methods have a single tunable parameter. For centrality the parameter \u03b1 controls how much the label of each node depends on the labels of its neighbors in the graph. The values range from 0 to 1/\u03bb max , where \u03bb max is the largest eigenvalue of the adjacency matrix of the semantic network. When \u03b1 = 0 the label of each node is equivalent to its initial label, while higher values of \u03b1 give more weight to the labels of nodes that are further away.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Tuning", "sec_num": "5.4" }, { "text": "For regularization the parameter \u03b3 controls how much emphasis is placed on the agreement between the initial and learned labels. The values of \u03b3 are between 0 and 1. Smaller values require that the learned labels be more consistent with the original labels. When \u03b3 = 0 the learned labels will exactly match the original labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Tuning", "sec_num": "5.4" }, { "text": "For each method we try several parameter settings and show the results in Figure 1 for the propagation of the class label invertebrate. We can see that both methods are quite insensitive to the parameter settings, unless we choose very extreme values that ignore the original labels.", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 82, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Parameter Tuning", "sec_num": "5.4" }, { "text": "We also study how the quality of the results is affected by the number of initial class-instance pairs used by our propagation methods. We conduct experiments using only 25%, 50%, 75% and 100% of the initial class-instance pairs learned by (Hovy et al., 2009) . Figure 2 shows the results for the label propagation of the class invertebrate.", "cite_spans": [ { "start": 240, "end": 259, "text": "(Hovy et al., 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 262, "end": 270, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Effect of number of labeled class-instances", "sec_num": "5.5" }, { "text": "The performance of our methods significantly improves when we incorporate more labels. Still, if we are less concerned with recall and want to find small sets of nodes with very high accuracy, the number of initial labels is less important. 
For example, starting with only 13 labeled nodes we can still achieve 100% accuracy for the top 30 nodes using regularization, and 96% accuracy for the top 25 nodes using centrality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of number of labeled class-instances", "sec_num": "5.5" }, { "text": "In this paper we proposed a centrality and regularization graph-theoretic methods that explore the relationships between the instances themselves to effectively extend a small set of class-instance labels to all instances in a semantic network. The proposed approaches are intuitive and almost parameter-free. We conducted a series of experiments in which we compared the effectiveness of the centrality and reg- ularization methods to learn new labels for the unlabeled instances. We showed that the enhanced class labels improve the original output generated by the pattern-based knowledge harvesting approach of (Hovy et al., 2009) . Finally, we have studied the impact of the class-instance and instance-instance graphs for the class-label propagation task. The latter approach has shown to produce much more accurate results. In the future, we want to apply our approach to Web-based taxonomy induction, which according to (Kozareva and Hovy, 2010) is stifled due to the lacking relations between the instances and the classes, and the classes themselves. The proposed methods can be also applied to enhance fact farms (Jain and Pantel, 2010) .", "cite_spans": [ { "start": 615, "end": 634, "text": "(Hovy et al., 2009)", "ref_id": "BIBREF10" }, { "start": 928, "end": 953, "text": "(Kozareva and Hovy, 2010)", "ref_id": "BIBREF14" }, { "start": 1124, "end": 1147, "text": "(Jain and Pantel, 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" } ], "back_matter": [ { "text": "We acknowledge the support of DARPA contract number FA8750-09-C-3705 and NSF grant IIS-0429360. We would like to thank the three anonymous reviewers for their useful comments and suggestions and Partha Talukdar for the discussions on modified adsorption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Acquisition of instance attributes via labeled and related instances", "authors": [ { "first": "Enrique", "middle": [], "last": "Alfonseca", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Pasca", "suffix": "" }, { "first": "Enrique", "middle": [], "last": "Robledo-Arnuncio", "suffix": "" } ], "year": 2010, "venue": "Proceeding of the 33rd international ACM SIGIR conference on Research and development in information retrieval, SI-GIR '10", "volume": "", "issue": "", "pages": "58--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Enrique Alfonseca, Marius Pasca, and Enrique Robledo- Arnuncio. 2010. Acquisition of instance attributes via labeled and related instances. In Proceeding of the 33rd international ACM SIGIR conference on Re- search and development in information retrieval, SI- GIR '10, pages 58-65.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Social Networks", "authors": [ { "first": "Phillip", "middle": [], "last": "Bonacich", "suffix": "" }, { "first": "Paulette", "middle": [], "last": "Lloyd", "suffix": "" } ], "year": 2001, "venue": "", "volume": "23", "issue": "", "pages": "191--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phillip Bonacich and Paulette Lloyd. 2001. 
Social Net- works, 23(3):191-201.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Structured querying of web text: A technical challenge", "authors": [ { "first": "Michael", "middle": [ "J" ], "last": "Cafarella", "suffix": "" }, { "first": "R", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Suciu", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J. Cafarella, Christopher R, Dan Suciu, Oren Et- zioni, and Michele Banko. 2007a. Structured query- ing of web text: A technical challenge. In in CIDR.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Navigating extracted data with schema discovery", "authors": [ { "first": "Michael", "middle": [ "J" ], "last": "Cafarella", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Suciu", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2007, "venue": "Tenth International Workshop on the Web and Databases", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J. Cafarella, Dan Suciu, and Oren Etzioni. 2007b. Navigating extracted data with schema discov- ery. In Tenth International Workshop on the Web and Databases, WebDB 2007WebDB.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Classification of semantic relationships between nominals using pattern clusters", "authors": [ { "first": "Dmitry", "middle": [], "last": "Davidov", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "227--235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitry Davidov and Ari Rappoport. 2008. Classification of semantic relationships between nominals using pat- tern clusters. In Proceedings of ACL-08: HLT, pages 227-235, June.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Geo-mining: Discovery of road and transport networks using directional patterns", "authors": [ { "first": "Dmitry", "middle": [], "last": "Davidov", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP-09", "volume": "", "issue": "", "pages": "267--275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitry Davidov and Ari Rappoport. 2009. Geo-mining: Discovery of road and transport networks using direc- tional patterns. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Process- ing, EMNLP-09, pages 267-275.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised named-entity extraction from the web: an experimental study", "authors": [ { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Cafarella", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Downey", "suffix": "" }, { "first": "Ana-Maria", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Shaked", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Yates", "suffix": "" } ], "year": 2005, "venue": "Artificial Intelligence", "volume": "165", "issue": "1", "pages": "91--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Etzioni, Michael Cafarella, Doug Downey, Ana- Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2005. Unsuper- vised named-entity extraction from the web: an exper- imental study. Artificial Intelligence, 165(1):91-134, June.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning semantic constraints for the automatic discovery of part-whole relations", "authors": [ { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Badulescu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxana Girju, Adriana Badulescu, and Dan Moldovan. 2003. Learning semantic constraints for the automatic discovery of part-whole relations. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Hu- man Language Technology, pages 1-8.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic acquisition of hyponyms from large text corpora", "authors": [ { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 14th conference on Computational linguistics", "volume": "", "issue": "", "pages": "539--545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. 
In Proceedings of the 14th conference on Computational linguistics, pages 539-545.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Toward completeness in concept extraction and classification", "authors": [ { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "948--957", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard Hovy, Zornitsa Kozareva, and Ellen Riloff. 2009. Toward completeness in concept extraction and classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 948-957.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Factrank: Random walks on a web of facts", "authors": [ { "first": "Alpa", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "501--509", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alpa Jain and Patrick Pantel. 2010. Factrank: Random walks on a web of facts. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 501-509.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Integrating web-based and corpus-based techniques for question answering", "authors": [ { "first": "Boris", "middle": [], "last": "Katz", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Loreto", "suffix": "" }, { "first": "Wesley", "middle": [], "last": "Hildebrandt", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Bilotti", "suffix": "" }, { "first": "Sue", "middle": [], "last": "Felshin", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Fernandes", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Marton", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Mora", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the twelfth text retrieval conference (TREC)", "volume": "", "issue": "", "pages": "426--435", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boris Katz, Jimmy Lin, Daniel Loreto, Wesley Hildebrandt, Matthew Bilotti, Sue Felshin, Aaron Fernandes, Gregory Marton, and Federico Mora. 2003. Integrating web-based and corpus-based techniques for question answering. In Proceedings of the twelfth text retrieval conference (TREC), pages 426-435.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A new status index derived from sociometric analysis", "authors": [ { "first": "Leo", "middle": [], "last": "Katz", "suffix": "" } ], "year": 1953, "venue": "Psychometrika", "volume": "18", "issue": "", "pages": "39--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leo Katz. 1953. A new status index derived from sociometric analysis.
Psychometrika, 18:39-43.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A semi-supervised method to learn and construct taxonomies using the web", "authors": [ { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1110--1118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zornitsa Kozareva and Eduard Hovy. 2010. A semi-supervised method to learn and construct taxonomies using the web. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1110-1118.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Semantic class learning from the web with hyponym pattern linkage graphs", "authors": [ { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zornitsa Kozareva, Ellen Riloff, and Eduard Hovy. 2008. Semantic class learning from the web with hyponym pattern linkage graphs. In Proceedings of the 46th", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Annual Meeting of the Association for Computational Linguistics ACL-08: HLT", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1048--1056", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics ACL-08: HLT, pages 1048-1056.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Dirt - discovery of inference rules from text", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "323--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2001. Dirt - discovery of inference rules from text. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 323-328.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Concept discovery from text", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 19th international conference on Computational linguistics", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2002. Concept discovery from text. In Proc. of the 19th international conference on Computational linguistics, pages 1-7.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Acquisition of categorized named entities for web search", "authors": [ { "first": "Marius", "middle": [], "last": "Pa\u015fca", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the thirteenth ACM international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "137--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marius Pa\u015fca. 2004.
Acquisition of categorized named entities for web search. In Proceedings of the thirteenth ACM international conference on Information and knowledge management, pages 137-145.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Organizing and searching the world wide web of facts - step two: harnessing the wisdom of the crowds", "authors": [ { "first": "Marius", "middle": [], "last": "Pa\u015fca", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 16th international conference on World Wide Web", "volume": "", "issue": "", "pages": "101--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marius Pa\u015fca. 2007. Organizing and searching the world wide web of facts - step two: harnessing the wisdom of the crowds. In Proceedings of the 16th international conference on World Wide Web, pages 101-110.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Espresso: leveraging generic patterns for automatically harvesting semantic relations", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44", "volume": "", "issue": "", "pages": "113--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 113-120.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Entity extraction via ensemble semantics", "authors": [ { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "1", "issue": "", "pages": "238--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Pennacchiotti and Patrick Pantel. 2009. Entity extraction via ensemble semantics. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1, EMNLP '09, pages 238-247.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning dictionaries for information extraction by multi-level bootstrapping", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Rosie", "middle": [], "last": "Jones", "suffix": "" } ], "year": 1999, "venue": "AAAI '99/IAAI '99: Proceedings of the Sixteenth National Conference on Artificial intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping.
In AAAI '99/IAAI '99: Proceedings of the Sixteenth National Conference on Artificial intelligence.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A corpus-based approach for building semantic lexicons", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Jessica", "middle": [], "last": "Shepherd", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Empirical Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "117--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff and Jessica Shepherd. 1997. A corpus-based approach for building semantic lexicons. In Proceedings of the Empirical Methods for Natural Language Processing, pages 117-124.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "What is this, anyway: Automatic hypernym discovery", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the AAAI Spring Symposium on Learning by Reading and Learning to Read", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Stephen Soderland, and Oren Etzioni. 2009. What is this, anyway: Automatic hypernym discovery. In Proceedings of the AAAI Spring Symposium on Learning by Reading and Learning to Read.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Semantic taxonomy induction from heterogenous evidence", "authors": [ { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44", "volume": "", "issue": "", "pages": "801--808", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 801-808.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Yago: a core of semantic knowledge", "authors": [ { "first": "Fabian", "middle": [ "M" ], "last": "Suchanek", "suffix": "" }, { "first": "Gjergji", "middle": [], "last": "Kasneci", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2007, "venue": "WWW '07: Proceedings of the 16th international conference on World Wide Web", "volume": "", "issue": "", "pages": "697--706", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge.
In WWW '07: Proceedings of the 16th international conference on World Wide Web, pages 697-706.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Experiments in graph-based semi-supervised learning methods for class-instance acquisition", "authors": [ { "first": "Partha", "middle": [ "Pratim" ], "last": "Talukdar", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1473--1481", "other_ids": {}, "num": null, "urls": [], "raw_text": "Partha Pratim Talukdar and Fernando Pereira. 2010. Experiments in graph-based semi-supervised learning methods for class-instance acquisition. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1473-1481.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Weakly-supervised acquisition of labeled class instances using graph random walks", "authors": [ { "first": "Partha", "middle": [ "Pratim" ], "last": "Talukdar", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Reisinger", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Pasca", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Ravichandran", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Bhagat", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "582--590", "other_ids": {}, "num": null, "urls": [], "raw_text": "Partha Pratim Talukdar, Joseph Reisinger, Marius Pasca, Deepak Ravichandran, Rahul Bhagat, and Fernando Pereira. 2008. Weakly-supervised acquisition of labeled class instances using graph random walks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 582-590.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Helping editors choose better seed sets for entity set expansion", "authors": [ { "first": "Vishnu", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Crestan", "suffix": "" } ], "year": 2009, "venue": "Proceeding of the 18th ACM conference on Information and knowledge management, CIKM '09", "volume": "", "issue": "", "pages": "225--234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vishnu Vyas, Patrick Pantel, and Eric Crestan. 2009. Helping editors choose better seed sets for entity set expansion.
In Proceeding of the 18th ACM conference on Information and knowledge management, CIKM '09, pages 225-234.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Discovering asymmetric entailment relations between verbs using selectional preferences", "authors": [ { "first": "Fabio", "middle": [ "Massimo" ], "last": "Zanzotto", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Maria", "middle": [ "Teresa" ], "last": "Pazienza", "suffix": "" } ], "year": 2006, "venue": "ACL-44: Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "849--856", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabio Massimo Zanzotto, Marco Pennacchiotti, and Maria Teresa Pazienza. 2006. Discovering asymmetric entailment relations between verbs using selectional preferences. In ACL-44: Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 849-856.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Learning from labeled and unlabeled data on a directed graph", "authors": [ { "first": "Dengyong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jiayuan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 22nd international conference on Machine learning, ICML '05", "volume": "", "issue": "", "pages": "1036--1043", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dengyong Zhou, Jiayuan Huang, and Bernhard Sch\u00f6lkopf. 2005. Learning from labeled and unlabeled data on a directed graph. In Proceedings of the 22nd international conference on Machine learning, ICML '05, pages 1036-1043.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Parameter Tuning For Invertebrates." }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "Effect of Number of Initial Class-Instance Pairs for Invertebrates." }, "TABREF1": { "text": "Nodes & Edges in the Instance Network.", "html": null, "type_str": "table", "num": null, "content": "" }, "TABREF3": { "text": "", "html": null, "type_str": "table", "num": null, "content": "
" }, "TABREF5": { "text": "Learned & Gold Standard Class-Instances.", "html": null, "type_str": "table", "num": null, "content": "
" }, "TABREF6": { "text": ".", "html": null, "type_str": "table", "num": null, "content": "
Invertebrates
rank   centrality   regularization   in-neighbor   random
5      1.0          1.0              .80           .21
10     1.0          1.0              .70           .21
20     .95          1.0              .75           .21
50     .96          .98              .76           .21
100    .69          .73              .67           .21
Mammals
rank   centrality   regularization   in-neighbor   random
5      .80          1.0              .80           .52
10     .90          1.0              .90           .52
20     .95          .95              .85           .52
50     .86          .96              .80           .52
100    .92          .92              .76           .52
Carnivores
rank   centrality   regularization   in-neighbor   random
5      1.0          1.0              .80           .14
10     .80          .80              .60           .14
20     .80          .85              .55           .14
50     .50          .68              .48           .14
100    .41          .44              .41           .14
" }, "TABREF8": { "text": "Comparative Study.", "html": null, "type_str": "table", "num": null, "content": "" } } } }