Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "N03-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:07:36.206830Z"
},
"title": "Weakly Supervised Natural Language Learning Without Redundant Views",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University Ithaca",
"location": {
"postCode": "14853-7501",
"region": "NY"
}
},
"email": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University Ithaca",
"location": {
"postCode": "14853-7501",
"region": "NY"
}
},
"email": "cardie\u00a1@cs.cornell.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We investigate single-view algorithms as an alternative to multi-view algorithms for weakly supervised learning for natural language processing tasks without a natural feature split. In particular, we apply co-training, self-training, and EM to one such task and find that both selftraining and FS-EM, a new variation of EM that incorporates feature selection, outperform cotraining and are comparatively less sensitive to parameter changes.",
"pdf_parse": {
"paper_id": "N03-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "We investigate single-view algorithms as an alternative to multi-view algorithms for weakly supervised learning for natural language processing tasks without a natural feature split. In particular, we apply co-training, self-training, and EM to one such task and find that both selftraining and FS-EM, a new variation of EM that incorporates feature selection, outperform cotraining and are comparatively less sensitive to parameter changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Multi-view weakly supervised learning paradigms such as co-training (Blum and Mitchell, 1998) and co-EM (Nigam and Ghani, 2000) learn a classification task from a small set of labeled data and a large pool of unlabeled data using separate, but redundant, views of the data (i.e. using disjoint feature subsets to represent the data). Multi-view learning has been successfully applied to a number of tasks in natural language processing (NLP), including text classification (Blum and Mitchell, 1998; Nigam and Ghani, 2000) , named entity classification (Collins and Singer, 1999) , base noun phrase bracketing (Pierce and Cardie, 2001) , and statistical parsing (Sarkar, 2001; Steedman et al., 2003) .",
"cite_spans": [
{
"start": 68,
"end": 93,
"text": "(Blum and Mitchell, 1998)",
"ref_id": "BIBREF4"
},
{
"start": 104,
"end": 127,
"text": "(Nigam and Ghani, 2000)",
"ref_id": "BIBREF18"
},
{
"start": 473,
"end": 498,
"text": "(Blum and Mitchell, 1998;",
"ref_id": "BIBREF4"
},
{
"start": 499,
"end": 521,
"text": "Nigam and Ghani, 2000)",
"ref_id": "BIBREF18"
},
{
"start": 552,
"end": 578,
"text": "(Collins and Singer, 1999)",
"ref_id": "BIBREF6"
},
{
"start": 609,
"end": 634,
"text": "(Pierce and Cardie, 2001)",
"ref_id": "BIBREF20"
},
{
"start": 661,
"end": 675,
"text": "(Sarkar, 2001;",
"ref_id": "BIBREF22"
},
{
"start": 676,
"end": 698,
"text": "Steedman et al., 2003)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The theoretical performance guarantees of multi-view weakly supervised algorithms come with two fairly strong assumptions on the views. First, each view must be sufficient to learn the given concept. Second, the views must be conditionally independent of each other given the class label. When both conditions are met, Blum and Mitchell prove that an initial weak learner can be boosted using unlabeled data.",
"cite_spans": [
{
"start": 319,
"end": 327,
"text": "Blum and",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, finding a set of views that satisfies both of these conditions is by no means an easy problem. In addition, recent empirical results by Muslea et al. (2002) and Nigam and Ghani (2000) have shown that multi-view algorithms are quite sensitive to the two underlying assumptions on the views. Effective view factorization in multi-view learning paradigms, therefore, remains an important issue for their successful application. In practice, views are supplied by users or domain experts, who determine a natural feature split that is expected to be redundant (i.e. each view is expected to be sufficient to learn the target concept) and conditionally independent given the class label. 1 We investigate here the application of weakly supervised learning algorithms to problems for which no obvious natural feature split exists and hypothesize that, in these cases, single-view weakly supervised algorithms will perform better than their multi-view counterparts. Motivated, in part, by the results in Mueller et al. (2002) , we use the task of noun phrase coreference resolution for illustration throughout the paper. 2 In our experiments, we compare the performance of the Blum and Mitchell co-training algorithm with that of two commonly used single-view algorithms, namely, self-training and Expectation-Maximization (EM). In comparison to co-training, self-training achieves substantially superior performance and is less sensitive to its input parameters. EM, on the other hand, fails to boost performance, and we attribute this phenomenon to the presence of redundant features in the underlying generative model. Consequently, we propose a wrapper-based feature selection method (John et al., 1994) for EM that results in performance improvements comparable to that observed with self-training. Overall, our results suggest that single-view 1 Abney (2002) argues that the conditional independence assumption is remarkably strong and is rarely satisfied in real data sets, showing that a weaker independence assumption suffices.",
"cite_spans": [
{
"start": 151,
"end": 171,
"text": "Muslea et al. (2002)",
"ref_id": "BIBREF16"
},
{
"start": 176,
"end": 198,
"text": "Nigam and Ghani (2000)",
"ref_id": "BIBREF18"
},
{
"start": 698,
"end": 699,
"text": "1",
"ref_id": null
},
{
"start": 1012,
"end": 1033,
"text": "Mueller et al. (2002)",
"ref_id": "BIBREF15"
},
{
"start": 1129,
"end": 1130,
"text": "2",
"ref_id": null
},
{
"start": 1696,
"end": 1715,
"text": "(John et al., 1994)",
"ref_id": "BIBREF11"
},
{
"start": 1860,
"end": 1872,
"text": "Abney (2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Mueller et al. (2002) explore a heuristic method for view factorization for the related problem of anaphora resolution, but find that co-training shows no performance improvements for any type of German anaphor except pronouns over a baseline classifier trained on a small set of labeled data.",
"cite_spans": [
{
"start": 2,
"end": 23,
"text": "Mueller et al. (2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "weakly supervised learning algorithms are a viable alternative to multi-view algorithms for data sets where a natural feature split into separate, redundant views is not available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is organized as follows. Section 2 presents an overview of the three weakly supervised learning algorithms mentioned previously. In section 3, we introduce noun phrase coreference resolution and describe the machine learning framework for the problem. In section 4, we evaluate the weakly supervised learning algorithms on the task of coreference resolution. Section 5 introduces a method for improving the performance of weakly supervised EM via feature selection. We conclude with future work in section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we give a high-level description of our implementation of the three weakly supervised algorithms that we use in our comparison, namely, co-training, selftraining, and EM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weakly Supervised Algorithms",
"sec_num": "2"
},
{
"text": "Co-training (Blum and Mitchell, 1998 ) is a multi-view weakly supervised algorithm that trains two classifiers that can help augment each other's labeled data using two separate but redundant views of the data. Each classifier is trained using one view of the data and predicts the labels for all instances in the data pool, which consists of a randomly chosen subset of the unlabeled data. Each then selects its most confident predictions from the pool and adds the corresponding instances with their predicted labels to the labeled data while maintaining the class distribution in the labeled data.",
"cite_spans": [
{
"start": 12,
"end": 36,
"text": "(Blum and Mitchell, 1998",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Co-Training",
"sec_num": "2.1"
},
{
"text": "The number of instances to be added to the labeled data by each classifier at each iteration is limited by a pre-specified growth size to ensure that only the instances that have a high probability of being assigned the correct label are incorporated. The data pool is refilled with instances drawn from the unlabeled data and the process is repeated for several iterations. During testing, each classifier makes an independent decision for a test instance and the decision associated with the higher confidence is taken to be the final prediction for the instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-Training",
"sec_num": "2.1"
},
{
"text": "Self-training is a single-view weakly supervised algorithm that has appeared in various forms in the literature. The version of the algorithm that we consider here is a variation of the one presented in Banko and Brill (2001) .",
"cite_spans": [
{
"start": 203,
"end": 225,
"text": "Banko and Brill (2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Training",
"sec_num": "2.2"
},
{
"text": "Initially, we use bagging (Breiman, 1996) to train a committee of classifiers using the labeled data. Specifically, each classifier is trained on a bootstrap sample created by randomly sampling instances with replacement from the labeled data until the size of the bootstrap sample is equal to that of the labeled data. Then each member of the committee (or bag) predicts the labels of all unlabeled data. The algorithm selects an unlabeled instance for adding to the labeled data if and only if all bags agree upon its label. This ensures that only the unlabeled instances that have a high probability of being assigned the correct label will be incorporated into the labeled set. The above steps are repeated until all unlabeled data is labeled or a fixed point is reached. Following Breiman (1996) , we perform simple majority voting using the committee to predict the label of a test instance.",
"cite_spans": [
{
"start": 26,
"end": 41,
"text": "(Breiman, 1996)",
"ref_id": "BIBREF5"
},
{
"start": 786,
"end": 800,
"text": "Breiman (1996)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Training",
"sec_num": "2.2"
},
{
"text": "The use of EM as a single-view weakly supervised classification algorithm is introduced in . Like the classic unsupervised EM algorithm (Dempster et al., 1977) , weakly supervised EM assumes a parametric model of data generation. The labels of the unlabeled data are treated as missing data. The goal is to find a model such that the posterior probability of its parameters is locally maximized given both the labeled data and the unlabeled data.",
"cite_spans": [
{
"start": 136,
"end": 159,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EM",
"sec_num": "2.3"
},
{
"text": "Initially, the algorithm estimates the model parameters by training a probabilistic classifier on the labeled instances. Then, in the E-step, all unlabeled data is probabilistically labeled by the classifier. In the M-step, the parameters of the generative model are re-estimated using both the initially labeled data and the probabilistically labeled data to obtain a maximum a posteriori (MAP) hypothesis. The E-step and the M-step are repeated for several iterations. The resulting model is then used to make predictions for the test instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EM",
"sec_num": "2.3"
},
{
"text": "Noun phrase coreference resolution refers to the problem of determining which noun phrases (NPs) refer to each real-world entity mentioned in a document. In this section, we give an overview of the coreference resolution system to which the weakly supervised algorithms described in the previous section are applied. The framework underlying the system is a standard combination of classification and clustering employed by supervised learning approaches (e.g. Ng and Cardie (2002) ; Soon et al. (2001) ). Specifically, coreference resolution is recast as a classification task, in which a pair of NPs is classified as co-referring or not based on constraints that are learned from an annotated corpus. Training instances are generated by pairing each NP with each of its preceding NPs in the document. The classification associated with a training instance is one of COREFER-ENT or NOT COREFERENT depending on whether the NPs starts with a demonstrative such as \"this,\" \"that,\" \"these,\" or \"those;\" else N. precedes NP\u00a5\u00a3 . Non-relational features test some property P of one of the NPs under consideration and take on a value of YES or NO depending on whether P holds. Relational features test whether some property P holds for the NP pair under consideration and indicate whether the NPs are COMPATIBLE or INCOMPATIBLE w.r.t. P; a value of NOT APPLICABLE is used when property P does not apply. co-refer in the text. A separate clustering mechanism then coordinates the possibly contradictory pairwise classifications and constructs a partition on the set of NPs.",
"cite_spans": [
{
"start": 461,
"end": 481,
"text": "Ng and Cardie (2002)",
"ref_id": "BIBREF17"
},
{
"start": 484,
"end": 502,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Machine Learning Framework for Coreference Resolution",
"sec_num": "3"
},
{
"text": "We perform the experiments in this paper using our coreference resolution system (see Ng and Cardie (2002) ).",
"cite_spans": [
{
"start": 86,
"end": 106,
"text": "Ng and Cardie (2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Machine Learning Framework for Coreference Resolution",
"sec_num": "3"
},
{
"text": "For the sake of completeness, we include the descriptions of the 25 features employed by the system in Table 1 . Linguistically, the features can be divided into five groups: lexical, grammatical, semantic, positional, and others. However, we use naive Bayes rather than decision tree induction as the underlying learning algorithm to train a coreference classifier, simply because (1) it provides a generative model assumed by EM and hence facilitates comparison between different approaches and (2) it is more robust to the skewed class distributions inherent in coreference data sets than decision tree learners. When the coreference system is used within the weakly supervised setting, a weakly supervised algorithm bootstraps the corefer-ence classifier from the given labeled and unlabeled data rather than from a much larger set of labeled instances.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 110,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The Machine Learning Framework for Coreference Resolution",
"sec_num": "3"
},
{
"text": "We conclude this section by noting that view factorization is a non-trivial task for coreference resolution. For many lexical tagging problems such as part-of-speech tagging, views can be drawn naturally from the left-hand and right-hand context. For other tasks such as named entity classification, views can be derived from features inside and outside the phrase under consideration (Collins and Singer, 1999) . Unfortunately, neither of these options is possible for coreference resolution. We will explore several heuristic methods for view factorization in the next section.",
"cite_spans": [
{
"start": 385,
"end": 411,
"text": "(Collins and Singer, 1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Machine Learning Framework for Coreference Resolution",
"sec_num": "3"
},
{
"text": "To ensure a fair comparison of the weakly supervised algorithms, the experiments are designed to determine the best parameter setting of each algorithm (in terms of its effectiveness to improve performance) for the data sets we investigate. Specifically, we keep the parameters common to all three weakly supervised algorithms (i.e. the labeled and unlabeled data) constant and vary the algorithm-specific parameters, as described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Evaluation. We use the MUC-6 (1995) and MUC-7 (1998) coreference data sets for evaluation. The training set is composed of 30 \"dry run\" texts, 1 of which is selected to be the annotated text and the remaining 29 texts are used as unannotated data. For MUC-6, 3486 training instances are generated from 84 NPs in the annotated text. For MUC-7, 3741 training instances are generated from 87 NPs. The unlabeled data is composed of 488173 instances and 478384 instances for the MUC-6 and MUC-7 data sets, respectively. Testing is performed by applying the bootstrapped coreference classifier and the clustering algorithm described in section 3 on the 20-30 \"formal evaluation\" texts for each of the MUC-6 and MUC-7 data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Co-training parameters. The co-training parameters are set as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Views. We tested three pairs of views. Table 2 reproduces the 25 features of the coreference system and shows the views we employ. Specifically, the three view pairs are generated by the following methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Mueller et al.'s heuristic method. Starting from two empty views, the iterative algorithm selects for each view the feature whose addition maximizes the performance of the respective view on the labeled data at each iteration. 3 This method produces the view pair V1 and V2 in Table 2 for the MUC-6 data set. A different view pair is produced for MUC-7.",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Random splitting of features into views. Starting from two empty views, an iterative algorithm that randomly chooses a feature for each view at each step is used to split the feature set. The resulting view pair V3 and V4 is used for both the MUC-6 and MUC-7 data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Splitting of features according to the feature type. Specifically, one view comprises the lexicosyntactic features and the other the remaining ones. This approach produces the view pair V5 and V6, which is used for both data sets. Pool size. We tested pool sizes of 500, 1000, 5000. Growth size. We tested values of 10, 50, 100, 200, 250.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "3 Space limitation precludes a detailed description of this method. See Mueller et al. (2002) for details. Number of co-training iterations. We monitored performance on the test data at every 10 iterations of cotraining and ran the algorithm until performance stabilized.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "Mueller et al. (2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Feature V1 V2 V3 V4 V5 V6 PRO STR X X X PN STR X X X SOON STR NONPRO X X X PRONOUN 1 X X X PRONOUN 2 X X X DEMONSTRATIVE 2 X X X BOTH PROPER NOUNS X X X NUMBER X X X GENDER X X X ANIMACY X X X APPOSITIVE X X X PREDNOM X X X BINDING X X X CONTRAINDICES X X X SPAN X X X MAXIMALNP X X X SYNTAX X X X INDEFINITE X X X PRONOUN X X X EMBEDDED 1 X X X TITLE X X X WNCLASS X X X ALIAS X X X SENTNUM X X X PRO RESOLVE X X X",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Self-training parameters. Given the labeled and unlabeled data, self-training requires only the specification of the number of bags. We tested all odd number of bags between 1 and 25. EM parameters. Given the labeled and unlabeled data, EM has only one parameter -the number of iterations. We ran EM to convergence and kept track of its test set performance at every iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Results are shown in Table 3 , where performance is reported in terms of recall, precision, and F-measure using the model-theoretic MUC scoring program (Vilain et al., 1995) . The baseline coreference system, which is trained only on the labeled document using naive Bayes, achieves an F-measure of 55.5 and 43.8 on the MUC-6 and MUC-7 data sets, respectively.",
"cite_spans": [
{
"start": 152,
"end": 173,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "The results shown in row 2 of Table 3 correspond to the best F-measure scores achieved by co-training for the two data sets based on co-training runs that comprise all of the parameter combinations described in the previous subsection. The parameter settings with which the best Table 3 : Comparative results of co-training, self-training, EM, and FS-EM (to be described in section 5). Recall, Precision, and F-measure are provided. For co-training, self-training, and EM, the best results (F-measure) achieved by the algorithms and the corresponding parameter settings (with views v, growth size g, pool size p, number of iterations i, and number of bags b) are shown. results are obtained are also shown in the table. To get a better picture of the behavior of co-training, we present the learning curve for the co-training run that gives rise to the best F-measure for the MUC-6 data set in Figure 1 . The horizontal (dotted) line shows the performance of the baseline system, which achieves an F-measure of 55.5, as described above. As co-training progresses, F-measure peaks at iteration 220 and then gradually drops below that of the baseline after iteration 570.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 3",
"ref_id": null
},
{
"start": 279,
"end": 286,
"text": "Table 3",
"ref_id": null
},
{
"start": 894,
"end": 902,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "Although co-training produces substantial improvements over the baseline at its best parameter settings, a closer examination of our results reveals that they corroborate previous findings: the algorithm is sensitive not only to the number of iterations, but to other input parameters such as the pool size and the growth size as well (Nigam and Ghani, 2000; Pierce and Cardie, 2001) . The lack of a principled method for determining these parameters in a weakly supervised setting where labeled data is scarce remains a serious disadvantage for co-training.",
"cite_spans": [
{
"start": 335,
"end": 358,
"text": "(Nigam and Ghani, 2000;",
"ref_id": "BIBREF18"
},
{
"start": 359,
"end": 383,
"text": "Pierce and Cardie, 2001)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "Self-training results are shown in row 3 of Table 3 : self-training performs substantially better than both the baseline and co-training for both data sets. In contrast to co-training, however, self-training is relatively insensi- tive to its input parameter. Figure 2 shows the fairly consistent performance of self-training with seven or more bags for the MUC-6 data set. We observe similar trends for the MUC-7 data set. These results are consistent with empirical studies of bagging across a variety of classification tasks where seven to 25 bags are deemed sufficient (Breiman, 1996) .",
"cite_spans": [
{
"start": 573,
"end": 588,
"text": "(Breiman, 1996)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 44,
"end": 51,
"text": "Table 3",
"ref_id": null
},
{
"start": 260,
"end": 268,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "To gain a deeper insight into the behavior of selftraining, we plot the learning curve for self-training using 7 bags in Figure 3 , again for the MUC-6 data set. At iteration 0 (i.e. before any unlabeled data is incorporated), the F-measure score achieved by self-training is higher than that of the baseline system (58.5 vs. 55.5). The observed difference is due to voting within the self-training algorithm. Voting has proved to be an effective technique for improving the accuracy of a classifier when training data is scarce by reducing the variance of a particular training corpus (Breiman, 1996) . After the first iteration, there is a rapid increase in F-measure, which is accompanied by large gains in precision and smaller drops in recall. These results are consistent with our intuition regarding self-training: at each iteration the algorithm incorporates only instances whose label it is most confident about into the labeled data, thereby ensuring that precision will increase. 4 As we can see from Table 3 , the recall level achieved by co-training is much lower than that of self-training. This is an indication that each co-training view is insufficient to learn the concept: the feature split limits any interaction of features in different views that might produce better recall. Overall, these results provide evidence that self-training is a better alternative to co-training for weakly supervised learning for problems such as coreference resolution where no natural feature split exists.",
"cite_spans": [
{
"start": 586,
"end": 601,
"text": "(Breiman, 1996)",
"ref_id": "BIBREF5"
},
{
"start": 991,
"end": 992,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 3",
"ref_id": "FIGREF5"
},
{
"start": 1012,
"end": 1019,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "On the other hand, EM only gives rise to modest performance gains over the baseline system, as we can see from row 4 of Table 3 . The performance of EM depends in part on the correctness of the underlying generative model , which in our case is naive Bayes. In this model, an instance with feature values , , ! # \" and class $ is created by first choosing the class with prior probability",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 127,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "% ' & \u00a4 $ ) (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "and then generating each available feature ! 0 with probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "% 1 & 2 0 4 3 $ ) (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "independently, under the assumption that the feature values are conditionally independent given the class. As a result, model correctness is adversely affected by redundant features, which clearly invalidate the conditional independence assumption. In fact, naive Bayes is known to be bad at handling redundant features (Langley and Sage, 1994) . We hypothesize that the presence of redundant fea-4 When tackling the task of confusion set disambiguation, Banko and Brill (2001) observe only modest gains from selftraining by bootstrapping from a seed corpus of one million words. We speculate that a labeled data set of this size can possibly enable them to train a reasonably good classifier with which self-training can only offer marginal benefits, but the relationship between the behavior of self-training and the size of the seed (labeled) corpus remains to be shown.",
"cite_spans": [
{
"start": 320,
"end": 344,
"text": "(Langley and Sage, 1994)",
"ref_id": "BIBREF12"
},
{
"start": 455,
"end": 477,
"text": "Banko and Brill (2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "tures causes the generative model and hence EM to perform poorly. Although self-training depends on the same model, it only makes use of the binary decisions returned by the model and is therefore more robust to the naive Bayes assumptions, as reflected in its fairly impressive empirical performance. 5 In contrast, the fact that EM relies on the probability estimates of the model makes it more sensitive to the correctness of the model.",
"cite_spans": [
{
"start": 302,
"end": 303,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "If our hypothesis regarding the presence of redundant features were correct, then feature selection could result in an improved generative model, which could in turn improve the performance of weakly supervised EM. This section discusses a wrapper-based feature selection method for EM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meta-Bootstrapping with Feature Selection",
"sec_num": "5"
},
{
"text": "We now describe the FS-EM algorithm for boosting the performance of weakly supervised algorithms via feature selection. Although named after EM, the algorithm as described is potentially applicable to all single-view weakly supervised algorithms. FS-EM takes as input a supervised learner, a single-view weakly supervised learner, a labeled data set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Two-Tiered Bootstrapping Algorithm",
"sec_num": "5.1"
},
{
"text": ", and an unlabeled data set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": ". In addition, it assumes knowledge of the positive class prior (i.e. the true percentage of positive instances in the data) like co-training and requires a deviation threshold that we will explain shortly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": "FS-EM, which has a two-level bootstrapping structure, is reminiscent of the meta-bootstrapping algorithm introduced in Riloff and Jones (1999) . The outer-level bootstrapping task is feature selection, whereas the inner-level task is to learn a bootstrapped classifier from labeled and unlabeled data as described in section 4. At a high level, FS-EM uses a forward feature selection algorithm to impose a total ordering on the features based on the order in which the features are selected. Specifically, FS-EM performs the three steps below for each feature that has not been selected. First, it uses the weakly supervised learner to train a classifier @ from the labeled and unlabeled data (5 ) using only the feature as well as the features selected thus far. Second, the algorithm uses @ to classify all of the instances in . Finally, FS-EM trains a new model on just 6 , which is now labeled by @ . At the end of the three steps, exactly one model is trained for each feature that has not been selected. The forward selection algorithm then selects the feature with which the corresponding model achieves the best performance , does not deviate from the true positive class prior, Q , by more than a pre-specified threshold value,",
"cite_spans": [
{
"start": 119,
"end": 142,
"text": "Riloff and Jones (1999)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
},
{
"text": ". A large deviation from the true prior is an indication that the resulting classification of the data does not correspond closely to the actual classification. This algorithmic bias is particularly useful for weakly supervised learners (such as EM) that optimize an objective function other than classification accuracy and can potentially produce a classification that is substantially different from the actual one. Specifically, FS-EM attempts to ensure that the classification produced by the weakly supervised learner weakly agrees with the actual classification, where the weak disagreement rate between two classifications is defined as the difference between their positive class priors. Note that weak agreement is a necessary but not sufficient condition for two classifications to be identical. 7 Nevertheless, if the addition of any of the features to",
"cite_spans": [
{
"start": 807,
"end": 808,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "R",
"sec_num": null
},
{
"text": "D E I F P H",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R",
"sec_num": null
},
{
"text": "does not produce a classification that weakly agrees with the true one, FS-EM picks the feature whose inclusion results in a positive class prior that has the least deviation instead. This step can be viewed as introducing \"pseudo-random\" noise into the feature selection process. The hope is that the deviation of the high-scoring, \"highdeviation\" features can be lowered by first incorporating those with \"low deviation\", thus continuing to strive for weak agreement while potentially achieving better performance on",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R",
"sec_num": null
},
{
"text": ". The final set of features,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "D ! S 8 \u00a4 E I T 4 U",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": ", is composed of the first V features chosen by the feature selection algorithm, where V is the largest number of features that can achieve the best performance on 5 subject to the condition that the corresponding classification produced by the weakly supervised algorithm weakly disagrees with the true one by at most R . The output of FS-EM is a classifier that the weakly supervised learner learns from . The pseudo-code describing FS-EM is shown in Figure 4 . 6 The reason for using only W (instead of W and X ) in the validation step is primarily to preclude the possibility of getting a poor estimation of model performance as a result of the presence of potentially inaccurately labeled data from Figure 4 : The FS-EM algorithm.",
"cite_spans": [
{
"start": 464,
"end": 465,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 453,
"end": 461,
"text": "Figure 4",
"ref_id": null
},
{
"start": 704,
"end": 712,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "We instantiate FS-EM with naive Bayes as the supervised learner and EM as the weakly supervised learner, providing it with the same amount of labeled and unlabeled data as in previous experiments and setting R to 0.01. EM is run for 7 iterations whenever it is invoked. 8 Results using FS-EM are shown in row 5 of Table 3 . In comparison to EM, F-measure increases from 57.6 to 65.4 for MUC-6, and from 46.4 to 60.5 for MUC-7, allowing FS-EM to even surpass the performance of self-training. These results are consistent with our hypothesis that the performance of EM can be boosted by improving the underlying generative model using feature selection.",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 321,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "Finally, although FS-EM is only applicable to twoclass problems, it can be generalized fairly easily to handle multi-class problems, where the true label distribution is assumed to be available and the weak agreement rate can be measured based on the similarity of two distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "We have investigated single-view algorithms (selftraining and EM) as an alternative to multi-view algorithms (co-training) for weakly supervised learning for problems that do not appear to have a natural feature split. Experimental results on two coreference data sets indicate that self-training outperforms co-training under various parameter settings and is comparatively less sensitive to parameter changes. While weakly supervised EM is not able to outperform co-training, we introduce a variation of EM, FS-EM, for boosting the performance of EM via feature selection. Like self-training, FS-EM easily outperforms co-training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Co-training algorithms such as CoBoost (Collins and Singer, 1999) and Greedy Agreement (Abney, 2002) that explicitly trade classifier agreement on unlabeled data against error on labeled data may be more robust to the underlying assumptions of co-training and can conceivably perform better than the Blum and Mitchell algorithm for problems without a natural feature split. 9 Other less studied single-view weakly supervised algorithms in the NLP community such as co-training with different learning algorithms (Goldman and Zhou, 2000) and graph mincuts (Blum and Chawla, 2001 ) can be similarly applied to these problems to further test our original hypothesis. We plan to explore these possibilities in future research.",
"cite_spans": [
{
"start": 39,
"end": 65,
"text": "(Collins and Singer, 1999)",
"ref_id": "BIBREF6"
},
{
"start": 87,
"end": 100,
"text": "(Abney, 2002)",
"ref_id": "BIBREF0"
},
{
"start": 374,
"end": 375,
"text": "9",
"ref_id": "BIBREF3"
},
{
"start": 512,
"end": 536,
"text": "(Goldman and Zhou, 2000)",
"ref_id": "BIBREF10"
},
{
"start": 555,
"end": 577,
"text": "(Blum and Chawla, 2001",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "EvaluationIn this section, we empirically test our hypothesis that single-view weakly supervised algorithms can potentially outperform their multi-view counterparts for problems without a natural feature split.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is possible for naive Bayes classifiers to return optimal classifications even if the conditional independence assumption is violated. SeeDomingos and Pazzani (1997) for an analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Seven is used because we follow the choice of previous work(Muslea et al., 2002;Nigam and Ghani, 2000). Additional experiments in which EM is run for 5 and 9 iterations give similar results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Lillian Lee, Thorsten Joachims, and the Cornell NLP group including Regina Barzilay, Eric Breck, Bo Pang, and Steven Baker for many helpful comments. We also thank three anonymous reviewers for their feedback and Ted Pedersen for encouraging us to apply ensemble methods to coreference resolution. This work was supported in part by NSF Grant IIS-0208028.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bootstrapping",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "360--367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Abney. 2002. Bootstrapping. In Proceedings of the ACL, pages 360-367.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Scaling to very very large corpora for natural language disambiguation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the ACL/EACL",
"volume": "",
"issue": "",
"pages": "26--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Banko and E. Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of the ACL/EACL, pages 26-33.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning from labeled and unlabeled data using graph mincuts",
"authors": [
{
"first": "A",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chawla",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Blum and S. Chawla. 2001. Learning from labeled and un- labeled data using graph mincuts. In Proceedings of ICML, pages 19-26.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "show that, when the conditional independence assumption of the views is satisfied, view classifiers whose agreement on unlabeled data is explicitly maximized will have low generalization error",
"authors": [
{
"first": "Dasgupta",
"middle": [],
"last": "Indeed",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Indeed, Dasgupta et al. (2001) show that, when the condi- tional independence assumption of the views is satisfied, view classifiers whose agreement on unlabeled data is explicitly max- imized will have low generalization error.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Combining labeled and unlabeled data with co-training",
"authors": [
{
"first": "A",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLT",
"volume": "",
"issue": "",
"pages": "92--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Blum and T. Mitchell. 1998. Combining labeled and unla- beled data with co-training. In Proceedings of COLT, pages 92-100.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bagging predictors",
"authors": [
{
"first": "L",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 1996,
"venue": "Machine Learning",
"volume": "24",
"issue": "",
"pages": "123--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Breiman. 1996. Bagging predictors. Machine Learning, 24:123-140.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised models for named entity classification",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of EMNLP/VLC",
"volume": "",
"issue": "",
"pages": "100--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins and Y. Singer. 1999. Unsupervised models for named entity classification. In Proceedings of EMNLP/VLC, pages 100-110.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "PAC generalization bounds for co-training",
"authors": [
{
"first": "S",
"middle": [],
"last": "Dasgupta",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Littman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mcallester",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Dasgupta, M. Littman, and D. McAllester. 2001. PAC gen- eralization bounds for co-training. In Advances in NIPS.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society, Series B",
"volume": "39",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likeli- hood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "On the optimality of the simple Bayesian classifier under zero-one loss",
"authors": [
{
"first": "P",
"middle": [],
"last": "Domingos",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Pazzani",
"suffix": ""
}
],
"year": 1997,
"venue": "Machine Learning",
"volume": "29",
"issue": "",
"pages": "103--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Domingos and M. J. Pazzani. 1997. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29:103-130.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Enhancing supervised learning with unlabeled data",
"authors": [
{
"first": "S",
"middle": [],
"last": "Goldman",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "327--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Goldman and Y. Zhou. 2000. Enhancing supervised learning with unlabeled data. In Proceedings of ICML, pages 327- 334.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Irrelevant features and the subset selection problem",
"authors": [
{
"first": "G",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kohavi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Pfleger",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. John, R. Kohavi, and K. Pfleger. 1994. Irrelevant features and the subset selection problem. In Proceedings of ICML.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Induction of selective Bayesian classifiers",
"authors": [
{
"first": "P",
"middle": [],
"last": "Langley",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sage",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of UAI",
"volume": "",
"issue": "",
"pages": "399--406",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Langley and S. Sage. 1994. Induction of selective Bayesian classifiers. In Proceedings of UAI, pages 399-406.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Proceedings of the Sixth Message Understanding Conference (MUC-6)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MUC-6. 1995. Proceedings of the Sixth Message Understand- ing Conference (MUC-6).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Proceedings of the Seventh Message Understanding Conference (MUC-7)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MUC-7. 1998. Proceedings of the Seventh Message Under- standing Conference (MUC-7).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Applying cotraining to reference resolution",
"authors": [
{
"first": "C",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rapp",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "352--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Mueller, S. Rapp, and M. Strube. 2002. Applying co- training to reference resolution. In Proceedings of the ACL, pages 352-359.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Active + Semi-Supervised Learning = Robust Multi-View Learning",
"authors": [
{
"first": "I",
"middle": [],
"last": "Muslea",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Minton",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Knoblock",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Muslea, S. Minton, and C. Knoblock. 2002. Active + Semi- Supervised Learning = Robust Multi-View Learning. In Pro- ceedings of ICML.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Combining sample selection and error-driven pruning for machine learning of coreference rules",
"authors": [
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Ng and C. Cardie. 2002. Combining sample selection and error-driven pruning for machine learning of coreference rules. In Proceedings of EMNLP, pages 55-62.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Analyzing the effectiveness and applicability of co-training",
"authors": [
{
"first": "K",
"middle": [],
"last": "Nigam",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ghani",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Nigam and R. Ghani. 2000. Analyzing the effectiveness and applicability of co-training. In Proceedings of CIKM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Text classification from labeled and unlabeled documents using EM",
"authors": [
{
"first": "K",
"middle": [],
"last": "Nigam",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Thrun",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2000,
"venue": "Machine Learning",
"volume": "39",
"issue": "",
"pages": "103--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103-134.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Limitations of co-training for natural language learning from large datasets",
"authors": [
{
"first": "D",
"middle": [],
"last": "Pierce",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Pierce and C. Cardie. 2001. Limitations of co-training for natural language learning from large datasets. In Proceed- ings of EMNLP, pages 1-9.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning dictionaries for information extraction by multi-level bootstrapping",
"authors": [
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "474--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Riloff and R. Jones. 1999. Learning dictionaries for infor- mation extraction by multi-level bootstrapping. In Proceed- ings of AAAI, pages 474-479.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Applying co-training methods to statistical parsing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the NAACL",
"volume": "",
"issue": "",
"pages": "175--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Sarkar. 2001. Applying co-training methods to statistical parsing. In Proceedings of the NAACL, pages 175-182.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "W",
"middle": [
"M"
],
"last": "Soon",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "D",
"middle": [
"C Y"
],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. M. Soon, H. T. Ng, and D. C. Y. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bootstrapping statistical parsers from small datasets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Hwa",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ruhlen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Crim",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Steedman, M. Osborne, A. Sarkar, S. Clark, R. Hwa, J. Hockenmaier, P. Ruhlen, S. Baker, and J. Crim. 2003. Bootstrapping statistical parsers from small datasets. In Pro- ceedings of the EACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A model-theoretic coreference scoring scheme",
"authors": [
{
"first": "M",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Sixth Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the Sixth Message Understanding Conference, pages 45-52.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Learning curve for co-training (pool size = 5000, growth size = 50) for the MUC-6 data set."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Effect of the number of bags on the performance of self-training for the MUC-6 data set."
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Learning curve for self-training using 7 bags for the MUC-6 data set."
},
"FIGREF10": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "of features selected thus far).6 The process is repeated until all features have been selected. actual model performance. To handle this problem, FS-EM has a preference for adding features whose inclusion results in a classification in which the positive class prior (i.e. the probability that an instance is labeled as positive), Q 8"
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Feature Type</td><td>Feature</td><td>Description</td><td/><td/></tr><tr><td>Lexical Grammatical</td><td>PRO STR PRONOUN 1 PRONOUN 2 DEMONSTRATIVE 2</td><td>C if both \u00a4 \u00a3 Y if NP\u00a2 is a pronoun; else N. \u00a7 \u00a3 Y if NP\u00a5 is a pronoun; else N. \u00a6 \u00a3 Y if NP\u00a5 \u00a6 \u00a3</td><td>matches that of NP\u00a5</td><td>; else I. \u00a6 \u00a3</td></tr></table>",
"text": "NPs are pronominal and are the same string; else I. PN STR C if both NPs are proper names and are the same string; else I. SOON STR NONPRO C if both NPs are non-pronominal and the string of NP\u00a2"
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>\u00a4 \u00a3</td><td>and NP\u00a5\u00a3 , in document</td></tr></table>",
"text": "Feature set for the coreference system. The feature set contains relational and non-relational features that are used to generate an instance representing two NPs, NP\u00a2"
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": ""
}
}
}
}