{ "paper_id": "P06-1017", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:24:30.186219Z" }, "title": "Relation Extraction Using Label Propagation Based Semi-supervised Learning", "authors": [ { "first": "Jinxiu", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "jinxiu@i2r.a-star.edu.sg" }, { "first": "Donghong", "middle": [], "last": "Ji", "suffix": "", "affiliation": {}, "email": "dhji@i2r.a-star.edu.sg" }, { "first": "Chew", "middle": [ "Lim" ], "last": "Tan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Zhengyu", "middle": [], "last": "Niu", "suffix": "", "affiliation": {}, "email": "zniu@i2r.a-star.edu.sg" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Shortage of manually labeled data is an obstacle to supervised relation extraction methods. In this paper we investigate a graph based semi-supervised learning algorithm, a label propagation (LP) algorithm, for relation extraction. It represents labeled and unlabeled examples and their distances as the nodes and the weights of edges of a graph, and tries to obtain a labeling function to satisfy two constraints: 1) it should be fixed on the labeled nodes, 2) it should be smooth on the whole graph. Experiment results on the ACE corpus showed that this LP algorithm achieves better performance than SVM when only very few labeled examples are available, and it also performs better than bootstrapping for the relation extraction task.", "pdf_parse": { "paper_id": "P06-1017", "_pdf_hash": "", "abstract": [ { "text": "Shortage of manually labeled data is an obstacle to supervised relation extraction methods. In this paper we investigate a graph based semi-supervised learning algorithm, a label propagation (LP) algorithm, for relation extraction. It represents labeled and unlabeled examples and their distances as the nodes and the weights of edges of a graph, and tries to obtain a labeling function to satisfy two constraints: 1) it should be fixed on the labeled nodes, 2) it should be smooth on the whole graph. Experiment results on the ACE corpus showed that this LP algorithm achieves better performance than SVM when only very few labeled examples are available, and it also performs better than bootstrapping for the relation extraction task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Relation extraction is the task of detecting and classifying relationships between two entities from text. 
Many machine learning methods have been proposed to address this problem, e.g., supervised learning algorithms (Miller et al., 2000; Zelenko et al., 2002; Culotta and Soresen, 2004; Kambhatla, 2004; Zhou et al., 2005) , semi-supervised learning algorithms (Brin, 1998; Agichtein and Gravano, 2000; Zhang, 2004) , and unsupervised learning algorithms (Hasegawa et al., 2004) .", "cite_spans": [ { "start": 218, "end": 239, "text": "(Miller et al., 2000;", "ref_id": "BIBREF10" }, { "start": 240, "end": 261, "text": "Zelenko et al., 2002;", "ref_id": "BIBREF13" }, { "start": 262, "end": 288, "text": "Culotta and Soresen, 2004;", "ref_id": "BIBREF6" }, { "start": 289, "end": 305, "text": "Kambhatla, 2004;", "ref_id": "BIBREF8" }, { "start": 306, "end": 324, "text": "Zhou et al., 2005)", "ref_id": "BIBREF15" }, { "start": 363, "end": 375, "text": "(Brin, 1998;", "ref_id": "BIBREF4" }, { "start": 376, "end": 404, "text": "Agichtein and Gravano, 2000;", "ref_id": "BIBREF0" }, { "start": 405, "end": 417, "text": "Zhang, 2004)", "ref_id": "BIBREF14" }, { "start": 457, "end": 480, "text": "(Hasegawa et al., 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Supervised methods for relation extraction perform well on the ACE data, but they require a large number of manually labeled relation instances. Unsupervised methods need neither the definition of relation types nor manually labeled data, but they cannot detect relations between entity pairs, and their results cannot be directly used in many NLP tasks, since no relation type label is attached to the instances in the clustering result. Considering both the availability of large untagged corpora and the direct usability of extracted relations, semi-supervised learning methods have received great attention. DIPRE (Dual Iterative Pattern Relation Expansion) (Brin, 1998) is a bootstrapping-based system that uses a pattern-matching system as a classifier to exploit the duality between sets of patterns and relations. Snowball (Agichtein and Gravano, 2000) is another system that uses bootstrapping techniques for extracting relations from unstructured text. Snowball shares much in common with DIPRE, including the employment of the bootstrapping framework as well as the use of pattern matching to extract new candidate relations. A third system, proposed by Zhang (2004) , approaches the relation classification problem with bootstrapping on top of SVM. This system focuses on the RDC subproblem of ACE and extracts various lexical and syntactic features for the classification task. However, the method of Zhang (2004) does not actually \"detect\" relations but only performs relation classification between two entities given that they are known to be related.", "cite_spans": [ { "start": 827, "end": 856, "text": "(Agichtein and Gravano, 2000)", "ref_id": "BIBREF0" }, { "start": 1239, "end": 1251, "text": "Zhang (2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Bootstrapping works by iteratively classifying unlabeled examples and adding confidently classified examples into the labeled data, using a model learned from the augmented labeled data of the previous iteration. 
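To make this loop concrete, here is a minimal, self-training-style sketch in Python (our own illustration, not the DIPRE, Snowball, or Zhang (2004) implementation; the probabilistic classifier interface and the confidence threshold are assumed):

```python
# Hedged sketch of bootstrapping: base_model is any scikit-learn-style
# classifier with probability estimates; confidence_threshold and
# max_iter are assumed hyperparameters, not values from the cited systems.
def bootstrap(labeled, unlabeled, base_model, confidence_threshold=0.9, max_iter=10):
    model = base_model.fit([x for x, _ in labeled], [y for _, y in labeled])
    for _ in range(max_iter):
        confident, remaining = [], []
        for x in unlabeled:
            probs = model.predict_proba([x])[0]
            if probs.max() >= confidence_threshold:
                # confidently classified: move into the labeled pool
                confident.append((x, model.classes_[probs.argmax()]))
            else:
                remaining.append(x)
        if not confident:   # nothing confident enough: stop early
            break
        labeled, unlabeled = labeled + confident, remaining
        # relearn the model from the augmented labeled data
        model = base_model.fit([x for x, _ in labeled], [y for _, y in labeled])
    return model
```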
However, the affinity information among unlabeled examples is not fully exploited in this bootstrapping process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently a promising family of semi-supervised learning algorithms has been introduced, which can effectively combine unlabeled data with labeled data in the learning process by exploiting the manifold structure (cluster structure) in data (Belkin and Niyogi, 2002; Blum and Chawla, 2001; Blum et al., 2004; Zhu and Ghahramani, 2002; Zhu et al., 2003) . These graph-based semi-supervised methods usually define a graph where the nodes represent labeled and unlabeled examples in a dataset, and edges (which may be weighted) reflect the similarity of examples. One then seeks a labeling function that satisfies two constraints at the same time: 1) it should be close to the given labels on the labeled nodes, and 2) it should be smooth on the whole graph. This can be expressed in a regularization framework where the first term is a loss function and the second term is a regularizer. These methods differ from traditional semi-supervised learning methods in that they use the graph structure to smooth the labeling function.", "cite_spans": [ { "start": 225, "end": 250, "text": "(Belkin and Niyogi, 2002;", "ref_id": "BIBREF1" }, { "start": 251, "end": 273, "text": "Blum and Chawla, 2001;", "ref_id": "BIBREF2" }, { "start": 274, "end": 292, "text": "Blum et al., 2004;", "ref_id": "BIBREF3" }, { "start": 293, "end": 318, "text": "Zhu and Ghahramani, 2002;", "ref_id": "BIBREF16" }, { "start": 319, "end": 336, "text": "Zhu et al., 2003)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, no work has been done on using graph-based semi-supervised learning algorithms for relation extraction. Here we investigate a label propagation (LP) algorithm (Zhu and Ghahramani, 2002) for the relation extraction task. This algorithm works by representing labeled and unlabeled examples as vertices in a connected graph, then propagating the label information from any vertex to nearby vertices through weighted edges iteratively, and finally inferring the labels of unlabeled examples after the propagation process converges. In this paper we focus on the ACE RDC task 1 .", "cite_spans": [ { "start": 189, "end": 215, "text": "(Zhu and Ghahramani, 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. Section 2 presents related work. Section 3 formulates the relation extraction problem in the context of semi-supervised learning and describes our proposed approach. Section 4 then provides experimental results of our proposed method and compares it with a popular supervised learning algorithm (SVM) and a bootstrapping algorithm. Finally we conclude our work in Section 5. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The problem of relation extraction is to assign an appropriate relation type to an occurrence of an entity mention pair in a given context. 
It can be represented as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R \\rightarrow (C_{pre}, e_1, C_{mid}, e_2, C_{post})", "eq_num": "(1)" } ], "section": "Problem Definition", "sec_num": "2.1" }, { "text": "where e_1 and e_2 denote the entity mentions, and C_pre, C_mid, and C_post are the contexts before, between and after the entity mention pair. In this paper, we set the mid-context window as the words between the two entity mentions and the pre- and post-contexts as up to two words before and after the corresponding entity mention. Let X = {x_i}, i = 1, ..., n, be a set of contexts of occurrences of all the entity mention pairs, where x_i represents the context of the i-th occurrence and n is the total number of occurrences. The first l examples (or contexts) are labeled as y_g (y_g \u2208 {r_j}, j = 1, ..., R, where r_j denotes a relation type and R is the total number of relation types). The remaining u (u = n \u2212 l) examples are unlabeled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "2.1" }, { "text": "Intuitively, if two occurrences of entity mention pairs have similar contexts, they tend to hold the same relation type. Based on this assumption, we define a graph where the vertices represent the contexts of labeled and unlabeled occurrences of entity mention pairs, and the edge between any two vertices x_i and x_j is weighted so that the closer the vertices are in some distance measure, the larger the weight associated with this edge. Hence, the weights are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W_{ij} = \\exp(- \\frac{s_{ij}^2}{\\alpha^2})", "eq_num": "(2)" } ], "section": "Problem Definition", "sec_num": "2.1" }, { "text": "where s_ij is the similarity between x_i and x_j calculated by some similarity measure, e.g., cosine similarity, and \u03b1 is used to scale the weights. In this paper, we set \u03b1 as the average similarity between labeled examples from different classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "2.1" }, { "text": "In the LP algorithm, the label information of any vertex in a graph is propagated to nearby vertices through weighted edges until a global stable stage is achieved. Larger edge weights allow labels to travel through more easily. Thus the closer the examples are, the more likely they are to have similar labels. We define a soft label as a vector that is a probabilistic distribution over all the classes. In the label propagation process, the soft label of each initially labeled example is clamped in each iteration to replenish label sources from these labeled data. Thus the labeled data act like sources that push out labels through unlabeled data. With this push from labeled examples, the class boundaries will be pushed through edges with large weights and settle in gaps along edges with small weights. Ideally, the values of W_ij across different classes should be as small as possible and the values of W_ij within the same class should be as large as possible. This will make label propagation stay within the same class. 
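As a concrete rendering of Equation (2), the weight matrix can be built as in the following minimal sketch (our own illustration; the pairwise similarity matrix and alpha are assumed to be computed as described above):

```python
import numpy as np

# sim is an n x n matrix of pairwise similarities s_ij (e.g., cosine
# similarity between context vectors); alpha scales the weights as in
# Equation (2): W_ij = exp(-s_ij^2 / alpha^2).
def build_weight_matrix(sim, alpha):
    return np.exp(-(sim ** 2) / (alpha ** 2))
```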
This label propagation process will make the labeling function smooth on the graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "Define an n \u00d7 n probabilistic transition matrix T:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "T_{ij} = P(j \u2192 i) = w_{ij} / \u2211_{k=1}^{n} w_{kj} (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "where T_ij is the probability of jumping from vertex x_j to vertex x_i. We define an n \u00d7 R label matrix Y, where Y_ij represents the probability that vertex x_i has the label r_j. Then the label propagation algorithm consists of the following main steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "Step 1 : Initialization", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "\u2022 Set the iteration index t = 0; \u2022 Let Y^0 be the initial soft labels attached to each vertex, where Y^0_ij = 1 if y_i is labeled r_j and 0 otherwise; \u2022 Let Y^0_L be the top l rows of Y^0 and Y^0_U be the remaining u rows. Y^0_L is consistent with the labeling in the labeled data, and the initialization of Y^0_U can be arbitrary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "Step 2 : Propagate the labels of any vertex to nearby vertices by Y^{t+1} = T\u0304 Y^t, where T\u0304 is the row-normalized matrix of T, i.e. T\u0304_ij = T_ij / \u2211_k T_ik, which maintains the class probability interpretation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "Step 3 : Clamp the labeled data, that is, replace the top l rows of Y^{t+1} with Y^0_L.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "Step 4 : Repeat from Step 2 until Y converges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "Step 5 : Assign each x_h (l + 1 \u2264 h \u2264 n) the label y_h = argmax_j Y_hj.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "The above algorithm ensures that the labeled data Y_L never change, since they are clamped in Step 3. Actually we are interested only in Y_U. This algorithm has been shown to converge to a unique solution \u0176_U = lim_{t\u2192\u221e} Y^t_U = (I \u2212 T\u0304_uu)^{\u22121} T\u0304_ul Y^0_L (Zhu and Ghahramani, 2002) . Here, T\u0304_uu and T\u0304_ul are acquired by splitting matrix T\u0304 after the l-th row and the l-th column into 4 sub-matrices, and I is the u \u00d7 u identity matrix. 
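The full procedure, together with the closed-form solution for the unlabeled block, can be sketched in NumPy as follows (our own minimal rendering of the algorithm above, not the authors' code; W is the weight matrix of Equation (2) with the l labeled examples placed first, and Y0 is the initial label matrix):

```python
import numpy as np

def label_propagation(W, Y0, l, tol=1e-6, max_iter=1000):
    T = W / W.sum(axis=0, keepdims=True)   # Equation (3): column-normalize W
    T = T / T.sum(axis=1, keepdims=True)   # row-normalize to obtain T bar
    Y = Y0.astype(float).copy()
    for _ in range(max_iter):
        Y_next = T @ Y                     # Step 2: propagate
        Y_next[:l] = Y0[:l]                # Step 3: clamp the labeled rows
        converged = np.abs(Y_next - Y).max() < tol
        Y = Y_next
        if converged:                      # Step 4: stop at convergence
            break
    return Y.argmax(axis=1)                # Step 5: assign labels

def closed_form_unlabeled(W, Y0, l):
    # Unique fixed point Y_U = (I - T_uu)^(-1) T_ul Y_L for the unlabeled block.
    T = W / W.sum(axis=0, keepdims=True)
    T = T / T.sum(axis=1, keepdims=True)
    T_uu, T_ul = T[l:, l:], T[l:, :l]
    return np.linalg.solve(np.eye(T_uu.shape[0]) - T_uu, T_ul @ Y0[:l])
```

The closed form is exact but requires solving a u \u00d7 u linear system, so the iterative loop is generally preferable when the number of unlabeled examples is large.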
We can see that the initialization of Y^0_U in this solution is not important, since Y^0_U does not affect the estimation of \u0176_U.", "cite_spans": [ { "start": 203, "end": 229, "text": "(Zhu and Ghahramani, 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "A Label Propagation Algorithm", "sec_num": "2.2" }, { "text": "Following (Zhang, 2004), we used lexical and syntactic features in the contexts of entity pairs, which are extracted and computed from the parse trees derived from the Charniak parser (Charniak, 1999) and the Chunklink script 2 written by Sabine Buchholz from Tilburg University.", "cite_spans": [ { "start": 180, "end": 196, "text": "(Charniak, 1999)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Set", "sec_num": "3.1" }, { "text": "Words: Surface tokens of the two entities and words in the three contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Set", "sec_num": "3.1" }, { "text": "Entity Type: the entity type of both entity mentions, which can be PERSON, ORGANIZATION, FACILITY, LOCATION or GPE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Set", "sec_num": "3.1" }, { "text": "Part-Of-Speech tags corresponding to all tokens in the two entities and words in the three contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS features:", "sec_num": null }, { "text": "Chunking features: This category of features is extracted from the chunklink representation, which includes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS features:", "sec_num": null }, { "text": "\u2022 Chunk tag information of the two entities and words in the three contexts. The \"O\" tag means that the word is not in any chunk. The \"I-XP\" tag means that the word is inside an XP chunk. The \"B-XP\" tag means that the word is at the beginning of an XP chunk. \u2022 Grammatical function of the two entities and words in the three contexts. The last word in each chunk is its head, and the function of the head is the function of the whole chunk. \"NP-SBJ\" denotes an NP chunk serving as the subject of the sentence. The other words in a chunk that are not the head have \"NOFUNC\" as their function. \u2022 IOB-chains of the heads of the two entities. The so-called IOB-chain notes the syntactic categories of all the constituents on the path from the root node of the tree to this leaf node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS features:", "sec_num": null }, { "text": "The position information is also specified in the description of each feature above. For example, word features with position information include: 1) WE1 (WE2): all words in e_1 (e_2); 2) WHE1 (WHE2): the head word of e_1 (e_2); 3) WMNULL: no words in C_mid; 4) WMFL: the only word in C_mid; 5) WMF, WML, WM2, WM3, ...: the first, last, second, third, ... word in C_mid when there are at least two words in C_mid; 6) WEL1, WEL2, ...: the first, second, ... word before e_1; 7) WER1, WER2, ...: the first, second, ... word after e_2. We combine the above lexical and syntactic features with their position information in the contexts to form context vectors. 
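For illustration, such position-annotated features can be assembled into sparse context vectors as in the following sketch (the instances and feature names such as ET1/ET2 are invented for illustration, and the use of scikit-learn's DictVectorizer is our choice, not necessarily the authors'):

```python
from sklearn.feature_extraction import DictVectorizer

# Invented example instances: binary indicators keyed by
# position-annotated feature names of the kind listed above.
instances = [
    {'WHE1=president': 1, 'WMF=of': 1, 'WHE2=company': 1,
     'ET1=PERSON': 1, 'ET2=ORGANIZATION': 1},
    {'WHE1=office': 1, 'WMF=in': 1, 'WHE2=city': 1,
     'ET1=FACILITY': 1, 'ET2=GPE': 1},
]
vectorizer = DictVectorizer()
X = vectorizer.fit_transform(instances)   # sparse context vectors
```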
Before combining them, we filter out low-frequency features that appear only once in the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS features:", "sec_num": null }, { "text": "The similarity s_ij between two occurrences of entity pairs is important to the performance of the LP algorithm. In this paper, we investigated two similarity measures, the cosine similarity measure and Jensen-Shannon (JS) divergence (Lin, 1991) . Cosine similarity is a commonly used semantic distance, which measures the angle between two feature vectors. JS divergence has previously been used as a distance measure for document clustering, where it outperforms cosine-similarity-based document clustering (Slonim et al., 2002) . JS divergence measures the distance between two probability distributions if a feature vector is considered as a probability distribution over features. JS divergence is defined as follows: ", "cite_spans": [ { "start": 230, "end": 241, "text": "(Lin, 1991)", "ref_id": "BIBREF9" }, { "start": 492, "end": 513, "text": "(Slonim et al., 2002)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Measures", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "JS(q, r) = \\frac{1}{2} [D_{KL}(q \\| \\bar{p}) + D_{KL}(r \\| \\bar{p})]", "eq_num": "(4)" } ], "section": "Similarity Measures", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D_{KL}(q \\| \\bar{p}) = \\sum_y q(y) \\log \\frac{q(y)}{\\bar{p}(y)}", "eq_num": "(5)" } ], "section": "Similarity Measures", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D_{KL}(r \\| \\bar{p}) = \\sum_y r(y) \\log \\frac{r(y)}{\\bar{p}(y)}", "eq_num": "(6)" } ], "section": "Similarity Measures", "sec_num": "3.2" }, { "text": "where p\u0304 = (q + r)/2, and JS(q, r) represents the JS divergence between the probability distributions q(y) and r(y) (y is a random variable), which is defined in terms of the KL-divergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Measures", "sec_num": "3.2" }, { "text": "We evaluated this label propagation based relation extraction method on the relation subtype detection and characterization task of the official ACE 2003 corpus. It contains 519 files from sources including broadcast, newswire, and newspaper. We dealt only with intra-sentence explicit relations and assumed that all entities had been detected beforehand in the EDT sub-task of ACE. Table 1 lists the types and subtypes of relations for the ACE Relation Detection and Characterization (RDC) task, along with their frequency of occurrence in the ACE training set and test set. We constructed labeled data by randomly sampling some examples from the ACE training data and additionally sampling the same number of examples from the pool of unrelated entity pairs for the \"NONE\" class. We used the remaining examples in the ACE training set and the whole ACE test set as unlabeled data. The test set was used for final evaluation.", "cite_spans": [], "ref_spans": [ { "start": 381, "end": 388, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiment Setup", "sec_num": "3.3.1" }, { "text": "The Support Vector Machine (SVM) is a state-of-the-art technique for the relation extraction task. In this experiment, we use the LIBSVM tool 3 with a linear kernel function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LP vs. 
SVM", "sec_num": "3.3.2" }, { "text": "For comparison between SVM and LP, we ran SVM and LP with different sizes of labeled data and evaluate their performance on unlabeled data using precision, recall and F-measure. Firstly, we ran SVM or LP algorithm to detect possible relations from unlabeled data. If an entity mention pair is classified not to the \"NONE\" class but to the other 24 subtype classes, then it has a relation. Then construct labeled datasets with different sampling set size l,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LP vs. SVM", "sec_num": "3.3.2" }, { "text": "including 1% \u00d7 N train , 10% \u00d7 N train , 25% \u00d7 N train , 50%\u00d7N train , 75%\u00d7N train , 100%\u00d7N train (N train", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LP vs. SVM", "sec_num": "3.3.2" }, { "text": "is the number of examples in the ACE train-3 LIBSV M : a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/\u223ccjlin/libsvm. ing set). If any relation subtype was absent from the sampled labeled set, we redid the sampling. For each size, we performed 20 trials and calculated average scores on test set over these 20 random trials. Table 2 reports the performance of SVM and LP with different sizes of labled data for relation detection task. We used the same sampled labeled data in LP as the training data for SVM model. Table 2 , we see that both LP Cosine and LP JS achieve higher Recall than SVM. Specifically, with small labeled dataset (percentage of labeled data \u2264 25%), the performance improvement by LP is significant. When the percentage of labeled data increases from 50% to 100%, LP Cosine is still comparable to SVM in F-measure while LP JS achieves slightly better F-measure than SVM. On the other hand, LP JS consistently outperforms LP Cosine . Table 3 reports the performance of relation classification by using SVM and LP with different sizes of labled data. And the performance describes the average values of Precision, Recall and F-measure over major relation subtypes.", "cite_spans": [], "ref_spans": [ { "start": 365, "end": 372, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 556, "end": 563, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 995, "end": 1002, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "LP vs. SVM", "sec_num": "3.3.2" }, { "text": "From Table 3 , we see that LP Cosine and LP JS outperform SVM by F-measure in almost all settings of labeled data, which is due to the increase of Recall. With smaller labeled dataset (percentage of labeled data \u2264 50%), the gap between LP and SVM is larger. When the percentage of labeled data in- creases from 75% to 100%, the performance of LP algorithm is still comparable to SVM. On the other hand, the LP algorithm based on JS divergence consistently outperforms the LP algorithm based on Cosine similarity. Figure 1 visualizes the accuracy of three algorithms. 
As shown in Figure 1 , the gap between the SVM curve and the LP_JS curve is large when the percentage of labeled data is relatively low.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 513, "end": 521, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 579, "end": 587, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "From", "sec_num": null }, { "text": "In Figure 2 , we selected 25 instances from the training set and 15 instances from the test set of the ACE corpus, which cover five relation types. Using the Isomap tool 4 , the 40 instances with 229 feature dimensions are visualized in a two-dimensional space, as shown in the figure. We randomly sampled only one labeled example for each relation type from the 25 training examples as labeled data. Comparing Figure 2 (b) and Figure 2 (c), we find that many examples are misclassified from one class to another. This may be because the SVM method ignores the intrinsic structure in the data. For Figure 2(d) , the labels of unlabeled examples are determined not only by nearby labeled examples, but also by nearby unlabeled examples, so the LP strategy achieves better performance than the local-consistency-based SVM strategy when the size of labeled data is quite small.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": "FIGREF3" }, { "start": 388, "end": 396, "text": "Figure 2", "ref_id": "FIGREF3" }, { "start": 405, "end": 413, "text": "Figure 2", "ref_id": "FIGREF3" }, { "start": 580, "end": 591, "text": "Figure 2(d)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "An Example", "sec_num": "3.3.3" }, { "text": "In (Zhang, 2004) , relation classification is performed on the ACE corpus with bootstrapping on top of SVM. To compare with this bootstrapped SVM algorithm, we used the same feature stream setting and randomly selected 100 instances from the training data as the initial labeled data. Table 4 lists the performance of the bootstrapped SVM method from (Zhang, 2004) and the LP method with 100 seed labeled examples on the relation type classification task. We can see that the LP algorithm outperforms the bootstrapped SVM algorithm on four relation type classification tasks, and performs comparably on the relation \"SOC\" classification task.", "cite_spans": [ { "start": 3, "end": 16, "text": "(Zhang, 2004)", "ref_id": "BIBREF14" }, { "start": 364, "end": 377, "text": "(Zhang, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 298, "end": 305, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "LP vs. Bootstrapping", "sec_num": "3.3.4" }, { "text": "In this paper, we have investigated a graph-based semi-supervised learning approach for the relation extraction problem. Experimental results showed that the LP algorithm performs better than SVM and bootstrapping. We draw some findings from these results:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "The LP-based relation extraction method can use the graph structure to smooth the labels of unlabeled examples. Therefore, the labels of unlabeled examples are determined not only by the nearby labeled examples, but also by nearby unlabeled examples. For supervised methods, e.g., SVM, very few labeled examples are not enough to reveal the structure of each class. 
Therefore they cannot perform well, since the classification hyperplane is learned from only a few labeled examples and the coherent structure in the unlabeled data is not exploited when inferring the class boundary. Hence, our LP-based semi-supervised method achieves better performance on both relation detection and classification when only a few labeled examples are available. Currently, most work on the RDC task of ACE has focused on supervised learning methods (Culotta and Soresen, 2004; Kambhatla, 2004; Zhou et al., 2005) . Table 5 lists a comparison of these methods on relation detection and classification. Zhou et al. (2005) reported the best result, 63.1%/49.5%/55.5% in Precision/Recall/F-measure, on relation subtype classification using a feature-based method, which outperforms the tree-kernel-based method of Culotta and Soresen (2004) . Compared with the method of Zhou et al., the performance of the LP algorithm is slightly lower, which may be because we used a much simpler feature set. Our work in this paper focuses on the investigation of a graph-based semi-supervised learning algorithm for relation extraction. In the future, we would like to use more effective feature sets (Zhou et al., 2005) or kernel-based similarity measures with LP for relation extraction.", "cite_spans": [ { "start": 855, "end": 871, "text": "Kambhatla (2004;", "ref_id": "BIBREF8" }, { "start": 872, "end": 890, "text": "Zhou et al. (2005)", "ref_id": "BIBREF15" }, { "start": 979, "end": 997, "text": "Zhou et al. (2005)", "ref_id": "BIBREF15" }, { "start": 1187, "end": 1213, "text": "Culotta and Soresen (2004)", "ref_id": "BIBREF6" }, { "start": 1554, "end": 1572, "text": "Zhou et al. (2005)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 893, "end": 900, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "This paper approaches the problem of semi-supervised relation extraction using a label propagation algorithm. It represents labeled and unlabeled examples and their distances as the nodes and edge weights of a graph, and tries to obtain a labeling function that satisfies two constraints: 1) it should be fixed on the labeled nodes, and 2) it should be smooth on the whole graph. In the classification process, the labels of unlabeled examples are determined not only by nearby labeled examples, but also by nearby unlabeled examples. 
Our experimental results demonstrated that this graph-based algorithm can achieve better performance than SVM when only very few labeled examples are available, and that it also outperforms the bootstrapping method on the relation extraction task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "In the future, we would like to investigate more effective feature sets or use feature selection to improve the performance of this graph-based semi-supervised relation extraction method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "Software available at http://ilk.uvt.nl/\u223csabine/chunklink/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The tool is available at http://isomap.stanford.edu/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Snowball: Extracting Relations from large Plain-Text Collections", "authors": [ { "first": "E", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "L", "middle": [], "last": "Gravano", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 5th ACM International Conference on Digital Libraries (ACMDL'00)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agichtein E. and Gravano L. 2000. Snowball: Extracting Relations from Large Plain-Text Collections. In Proceedings of the 5th ACM International Conference on Digital Libraries (ACMDL'00).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Using Manifold Structure for Partially Labeled Classification", "authors": [ { "first": "M", "middle": [], "last": "Belkin", "suffix": "" }, { "first": "P", "middle": [], "last": "Niyogi", "suffix": "" } ], "year": 2002, "venue": "Advances in Neural Information Processing Systems 15", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Belkin M. and Niyogi P. 2002. Using Manifold Structure for Partially Labeled Classification. Advances in Neural Information Processing Systems 15.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning from Labeled and Unlabeled Data Using Graph Mincuts", "authors": [ { "first": "A", "middle": [], "last": "Blum", "suffix": "" }, { "first": "S", "middle": [], "last": "Chawla", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 18th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blum A. and Chawla S. 2001. Learning from Labeled and Unlabeled Data Using Graph Mincuts. In Proceedings of the 18th International Conference on Machine Learning.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Semi-Supervised Learning Using Randomized Mincuts", "authors": [ { "first": "A", "middle": [], "last": "Blum", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "R", "middle": [], "last": "Rwebangira", "suffix": "" }, { "first": "R", "middle": [], "last": "Reddy", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 21st International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blum A., Lafferty J., Rwebangira R. and Reddy R. 2004. Semi-Supervised Learning Using Randomized Mincuts. 
In Proceedings of the 21st International Conference on Machine Learning.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Extracting patterns and relations from world wide web", "authors": [ { "first": "Brin", "middle": [], "last": "Sergey", "suffix": "" } ], "year": 1998, "venue": "Proceedings of WebDB Workshop at 6th International Conference on Extending Database Technology (WebDB'98)", "volume": "", "issue": "", "pages": "172--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brin Sergey. 1998. Extracting patterns and relations from the world wide web. In Proceedings of the WebDB Workshop at the 6th International Conference on Extending Database Technology (WebDB'98), pages 172-183.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Maximum-entropy-inspired parser", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak E. 1999. A Maximum-entropy-inspired parser. Technical Report CS-99-12. Computer Science Department, Brown University.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Dependency tree kernels for relation extraction", "authors": [ { "first": "A", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "J", "middle": [], "last": "Soresen", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Culotta A. and Soresen J. 2004. Dependency tree kernels for relation extraction. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, 21-26 July 2004, Barcelona, Spain.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Discovering Relations among Named Entities from Large Corpora", "authors": [ { "first": "T", "middle": [], "last": "Hasegawa", "suffix": "" }, { "first": "S", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2004, "venue": "Proceeding of Conference ACL2004", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hasegawa T., Sekine S. and Grishman R. 2004. Discovering Relations among Named Entities from Large Corpora. In Proceedings of ACL 2004, Barcelona, Spain.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Combining lexical, syntactic and semantic features with Maximum Entropy Models for extracting relations", "authors": [ { "first": "N", "middle": [], "last": "Kambhatla", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "21--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kambhatla N. 2004. Combining lexical, syntactic and semantic features with Maximum Entropy Models for extracting relations. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, 21-26 July 2004, Barcelona, Spain.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Divergence Measures Based on the Shannon Entropy", "authors": [ { "first": "J", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1991, "venue": "IEEE Transactions on Information Theory", "volume": "37", "issue": "1", "pages": "145--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin J. 1991. 
Divergence Measures Based on the Shannon Entropy. IEEE Transactions on Information Theory, Vol. 37, No. 1, 145-150.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A novel use of statistical parsing to extract information from text", "authors": [ { "first": "S", "middle": [], "last": "Miller", "suffix": "" }, { "first": "H", "middle": [], "last": "Fox", "suffix": "" }, { "first": "L", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2000, "venue": "Proceedings of 6th Applied Natural Language Processing Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller S., Fox H., Ramshaw L. and Weischedel R. 2000. A novel use of statistical parsing to extract information from text. In Proceedings of the 6th Applied Natural Language Processing Conference, 29 April-4 May 2000, Seattle, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised Document Classification Using Sequential Information Maximization", "authors": [ { "first": "N", "middle": [], "last": "Slonim", "suffix": "" }, { "first": "N", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "N", "middle": [], "last": "Tishby", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slonim, N., Friedman, N., and Tishby, N. 2002. Unsupervised Document Classification Using Sequential Information Maximization. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Unsupervised Word Sense Disambiguation Rivaling Supervised Methods", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarowsky D. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pp. 189-196.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Kernel Methods for Relation Extraction", "authors": [ { "first": "D", "middle": [], "last": "Zelenko", "suffix": "" }, { "first": "C", "middle": [], "last": "Aone", "suffix": "" }, { "first": "A", "middle": [], "last": "Richardella", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zelenko D., Aone C. and Richardella A. 2002. Kernel Methods for Relation Extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Philadelphia.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Weakly-supervised relation classification for Information Extraction", "authors": [ { "first": "Zhang", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 13th ACM Conference on Information and Knowledge Management (CIKM'2004)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang Zhu. 
2004. Weakly-supervised relation classification for Information Extraction. In Proceedings of the 13th ACM Conference on Information and Knowledge Management (CIKM'2004), 8-13 Nov 2004, Washington D.C., USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Exploring Various Knowledge in Relation Extraction", "authors": [ { "first": "Zhou", "middle": [], "last": "Guodong", "suffix": "" }, { "first": "Su", "middle": [], "last": "Jian", "suffix": "" }, { "first": "Zhang", "middle": [], "last": "Jie", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang Min", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou GuoDong, Su Jian, Zhang Jie and Zhang Min. 2005. Exploring Various Knowledge in Relation Extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning from Labeled and Unlabeled Data with Label Propagation", "authors": [ { "first": "Zhu", "middle": [], "last": "Xiaojin", "suffix": "" }, { "first": "Ghahramani", "middle": [], "last": "Zoubin", "suffix": "" } ], "year": 2002, "venue": "CMU CALD tech report CMU", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhu Xiaojin and Ghahramani Zoubin. 2002. Learning from Labeled and Unlabeled Data with Label Propagation. CMU CALD Technical Report CMU-CALD-02-107.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions", "authors": [ { "first": "Zhu", "middle": [], "last": "Xiaojin", "suffix": "" }, { "first": "Ghahramani", "middle": [], "last": "Zoubin", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 20th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhu Xiaojin, Ghahramani Zoubin, and Lafferty J. 2003. Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. In Proceedings of the 20th International Conference on Machine Learning.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "1 http://www.ldc.upenn.edu/Projects/ACE/. The three tasks of the ACE program are Entity Detection and Tracking (EDT), Relation Detection and Characterization (RDC), and Event Detection and Characterization (EDC).", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Comparison of the performance of SVM and LP with different sizes of labeled data", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "Figure 2(a) and 2(b) show the initial state and the ground truth result, respectively. Figure 2(c) reports the classification result on the test set by SVM (accuracy = 4/15 = 26.7%), and Figure 2(d) gives the classification result on both the training set and the test set by LP (accuracy = 11/15 = 73.3%).", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "An example: comparison of the SVM and LP algorithms on a data set from the ACE corpus. 
Unlabeled examples from the training set and the test set are shown with two distinct markers, and the remaining marker symbols represent the labeled examples of the respective relation types sampled from the training set.", "num": null }, "TABREF0": { "html": null, "content": "
Type | SubType | Training | Devtest
ROLE | General-Staff | 550 | 149
ROLE | Management | 677 | 122
ROLE | Citizen-Of | 127 | 24
ROLE | Founder | 11 | 5
ROLE | Owner | 146 | 15
ROLE | Affiliate-Partner | 111 | 15
ROLE | Member | 460 | 145
ROLE | Client | 67 | 13
ROLE | Other | 15 | 7
PART | Part-Of | 490 | 103
PART | Subsidiary | 85 | 19
PART | Other | 2 | 1
AT | Located | 975 | 192
AT | Based-In | 187 | 64
AT | Residence | 154 | 54
SOC | Other-Professional | 195 | 25
SOC | Other-Personal | 60 | 10
SOC | Parent | 68 | 24
SOC | Spouse | 21 | 4
SOC | Associate | 49 | 7
SOC | Other-Relative | 23 | 10
SOC | Sibling | 7 | 4
SOC | GrandParent | 6 | 1
NEAR | Relative-Location | 88 | 32
", "type_str": "table", "text": "Frequency of Relation SubTypes in the ACE training and devtest corpus.", "num": null }, "TABREF1": { "html": null, "content": "
Percentage | SVM (P / R / F) | LP_Cosine (P / R / F) | LP_JS (P / R / F)
1% | 35.9 / 32.6 / 34.4 | 58.3 / 56.1 / 57.1 | 58.5 / 58.7 / 58.5
10% | 51.3 / 41.5 / 45.9 | 64.5 / 57.5 / 60.7 | 64.6 / 62.0 / 63.2
25% | 67.1 / 52.9 / 59.1 | 68.7 / 59.0 / 63.4 | 68.9 / 63.7 / 66.1
50% | 74.0 / 57.8 / 64.9 | 69.9 / 61.8 / 65.6 | 70.1 / 64.1 / 66.9
75% | 77.6 / 59.4 / 67.2 | 71.8 / 63.4 / 67.3 | 72.4 / 64.8 / 68.3
100% | 79.8 / 62.9 / 70.3 | 73.9 / 66.9 / 70.2 | 74.2 / 68.2 / 71.1
", "type_str": "table", "text": "The Performance of SVM and LP algorithm with different sizes of labeled data for relation detection on relation subtypes.The LP algorithm is run with two similarity measures: cosine similarity and JS divergence.", "num": null }, "TABREF2": { "html": null, "content": "
Percentage | SVM (P / R / F) | LP_Cosine (P / R / F) | LP_JS (P / R / F)
1% | 31.6 / 26.1 / 28.6 | 39.6 / 37.5 / 38.5 | 40.1 / 38.0 / 39.0
10% | 39.1 / 32.7 / 35.6 | 45.9 / 39.6 / 42.5 | 46.2 / 41.6 / 43.7
25% | 49.8 / 35.0 / 41.1 | 51.0 / 44.5 / 47.3 | 52.3 / 46.0 / 48.9
50% | 52.5 / 41.3 / 46.2 | 54.1 / 48.6 / 51.2 | 54.9 / 50.8 / 52.7
75% | 58.7 / 46.7 / 52.0 | 56.0 / 52.0 / 53.9 | 56.1 / 52.6 / 54.3
100% | 60.8 / 48.9 / 54.2 | 56.2 / 52.3 / 54.1 | 56.3 / 52.9 / 54.6
", "type_str": "table", "text": "The performance of SVM and LP algorithm with different sizes of labeled data for relation detection and classification on relation subtypes. The LP algorithm is run with two similarity measures: cosine similarity and JS divergence.", "num": null }, "TABREF3": { "html": null, "content": "
Relation type | Bootstrapping (P / R / F) | LP_JS (P / R / F)
ROLE | 78.5 / 69.7 / 73.8 | 81.0 / 74.7 / 77.7
PART | 65.6 / 34.1 / 44.9 | 70.1 / 41.6 / 52.2
AT | 61.0 / 84.8 / 70.9 | 74.2 / 79.1 / 76.6
SOC | 47.0 / 57.4 / 51.7 | 45.0 / 59.1 / 51.0
NEAR | - / - / - | 13.7 / 12.5 / 13.0
", "type_str": "table", "text": "Comparison of the performance of the bootstrapped SVM method from(Zhang, 2004) and LP method with 100 seed labeled examples for relation type classification task.", "num": null }, "TABREF4": { "html": null, "content": "
Method | Approach | Relation Detection (P / R / F) | Detection and Classification on Types (P / R / F) | Detection and Classification on Subtypes (P / R / F)
Culotta and Soresen (2004) | Tree kernel based | 81.2 / 51.8 / 63.2 | 67.1 / 35.0 / 45.8 | - / - / -
Kambhatla (2004) | Feature based, Maximum Entropy | - / - / - | - / - / - | 63.5 / 45.2 / 52.8
Zhou et al. (2005) | Feature based, SVM | 84.8 / 66.7 / 74.7 | 77.2 / 60.7 / 68.0 | 63.1 / 49.5 / 55.5
", "type_str": "table", "text": "Comparison of the performance of previous methods on ACE RDC task.", "num": null } } } }