{ "paper_id": "P10-1044", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:21:48.532393Z" }, "title": "A Latent Dirichlet Allocation method for Selectional Preferences", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": { "postBox": "Box 352350", "postCode": "98195", "settlement": "Seattle", "region": "WA", "country": "USA" } }, "email": "aritter@cs.washington.edu" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": { "postBox": "Box 352350", "postCode": "98195", "settlement": "Seattle", "region": "WA", "country": "USA" } }, "email": "etzioni@cs.washington.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The computation of selectional preferences, the admissible argument values for a relation, is a well-known NLP task with broad applicability. We present LDA-SP, which utilizes LinkLDA (Erosheva et al., 2004) to model selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional classbased approaches, it produces humaninterpretable classes describing each relation's preferences, but it is competitive with non-class-based methods in predictive power. We compare LDA-SP to several state-ofthe-art methods achieving an 85% increase in recall at 0.9 precision over mutual information (Erk, 2007). We also evaluate LDA-SP's effectiveness at filtering improper applications of inference rules, where we show substantial improvement over Pantel et al.'s system (Pantel et al., 2007).", "pdf_parse": { "paper_id": "P10-1044", "_pdf_hash": "", "abstract": [ { "text": "The computation of selectional preferences, the admissible argument values for a relation, is a well-known NLP task with broad applicability. We present LDA-SP, which utilizes LinkLDA (Erosheva et al., 2004) to model selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional classbased approaches, it produces humaninterpretable classes describing each relation's preferences, but it is competitive with non-class-based methods in predictive power. We compare LDA-SP to several state-ofthe-art methods achieving an 85% increase in recall at 0.9 precision over mutual information (Erk, 2007). We also evaluate LDA-SP's effectiveness at filtering improper applications of inference rules, where we show substantial improvement over Pantel et al.'s system (Pantel et al., 2007).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Selectional Preferences encode the set of admissible argument values for a relation. For example, locations are likely to appear in the second argument of the relation X is headquartered in Y and companies or organizations in the first. A large, high-quality database of preferences has the potential to improve the performance of a wide range of NLP tasks including semantic role labeling (Gildea and Jurafsky, 2002) , pronoun resolution (Bergsma et al., 2008) , textual inference (Pantel et al., 2007) , word-sense disambiguation (Resnik, 1997) , and many more. 
Therefore, much attention has been focused on automatically computing them based on a corpus of relation instances. Resnik (1996) presented the earliest work in this area, describing an information-theoretic approach that inferred selectional preferences based on the WordNet hypernym hierarchy. Recent work (Erk, 2007; Bergsma et al., 2008) has moved away from generalization to known classes, instead utilizing distributional similarity between nouns to generalize beyond observed relation-argument pairs. This avoids problems like WordNet's poor coverage of proper nouns and has been shown to improve performance. These methods, however, no longer produce the generalized class for an argument.", "cite_spans": [ { "start": 390, "end": 417, "text": "(Gildea and Jurafsky, 2002)", "ref_id": "BIBREF13" }, { "start": 439, "end": 461, "text": "(Bergsma et al., 2008)", "ref_id": "BIBREF1" }, { "start": 482, "end": 503, "text": "(Pantel et al., 2007)", "ref_id": "BIBREF25" }, { "start": 532, "end": 546, "text": "(Resnik, 1997)", "ref_id": "BIBREF29" }, { "start": 680, "end": 693, "text": "Resnik (1996)", "ref_id": "BIBREF28" }, { "start": 872, "end": 883, "text": "(Erk, 2007;", "ref_id": "BIBREF10" }, { "start": 884, "end": 905, "text": "Bergsma et al., 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we describe a novel approach to computing selectional preferences by making use of unsupervised topic models. Our approach is able to combine the benefits of both kinds of methods: it retains the generalization and human-interpretability of class-based approaches and is also competitive with the direct methods on predictive tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unsupervised topic models, such as latent Dirichlet allocation (LDA) (Blei et al., 2003) and its variants, are characterized by a set of hidden topics, which represent the underlying semantic structure of a document collection. For our problem these topics offer an intuitive interpretation: they represent the (latent) set of classes that store the preferences for the different relations. Thus, topic models are a natural fit for modeling our relation data.", "cite_spans": [ { "start": 69, "end": 88, "text": "(Blei et al., 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In particular, our system, called LDA-SP, uses LinkLDA (Erosheva et al., 2004), an extension of LDA that simultaneously models two sets of distributions for each topic. These two sets represent the two arguments of the relations. Thus, LDA-SP is able to capture information about the pairs of topics that commonly co-occur. This information is very helpful in guiding inference.", "cite_spans": [ { "start": 55, "end": 78, "text": "(Erosheva et al., 2004)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We run LDA-SP to compute preferences on a massive dataset of binary relations r(a_1, a_2) extracted from the Web by TEXTRUNNER (Banko and Etzioni, 2008).
Our experiments demonstrate that LDA-SP significantly outperforms state-of-the-art approaches, obtaining an 85% increase in recall at precision 0.9 on the standard pseudo-disambiguation task.", "cite_spans": [ { "start": 130, "end": 155, "text": "(Banko and Etzioni, 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Additionally, because LDA-SP is based on a formal probabilistic model, it has the advantage that it can naturally be applied in many scenarios. For example, we can obtain a better understanding of similar relations (Table 1), filter out incorrect inferences based on querying our model (Section 4.3), and produce a repository of class-based preferences with little manual effort, as demonstrated in Section 4.4. In all these cases we obtain high-quality results, for example, massively outperforming Pantel et al.'s approach on the textual inference task. 1", "cite_spans": [], "ref_spans": [ { "start": 215, "end": 224, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work on selectional preferences can be broken into four categories: class-based approaches (Resnik, 1996; Li and Abe, 1998; Clark and Weir, 2002; Pantel et al., 2007), similarity-based approaches (Dagan et al., 1999; Erk, 2007), discriminative approaches (Bergsma et al., 2008), and generative probabilistic models (Rooth et al., 1999).", "cite_spans": [ { "start": 100, "end": 114, "text": "(Resnik, 1996;", "ref_id": "BIBREF28" }, { "start": 115, "end": 132, "text": "Li and Abe, 1998;", "ref_id": "BIBREF17" }, { "start": 133, "end": 154, "text": "Clark and Weir, 2002;", "ref_id": "BIBREF6" }, { "start": 155, "end": 175, "text": "Pantel et al., 2007)", "ref_id": "BIBREF25" }, { "start": 206, "end": 226, "text": "(Dagan et al., 1999;", "ref_id": "BIBREF7" }, { "start": 227, "end": 237, "text": "Erk, 2007)", "ref_id": "BIBREF10" }, { "start": 255, "end": 277, "text": "(Bergsma et al., 2008)", "ref_id": "BIBREF1" }, { "start": 316, "end": 336, "text": "(Rooth et al., 1999)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Class-based approaches, first proposed by Resnik (1996), are the most studied of the four. They make use of a pre-defined set of classes, either manually produced (e.g., WordNet) or automatically generated (Pantel, 2003). For each relation, some measure of the overlap between the classes and the observed arguments is used to identify those classes that best describe the arguments. These techniques produce human-interpretable output, but often suffer in quality due to an incoherent taxonomy, an inability to map arguments to a class (poor lexical coverage), and word sense ambiguity.", "cite_spans": [ { "start": 42, "end": 55, "text": "Resnik (1996)", "ref_id": "BIBREF28" }, { "start": 207, "end": 221, "text": "(Pantel, 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Because of these limitations, researchers have investigated non-class-based approaches, which attempt to directly classify a given noun phrase as plausible or implausible for a relation.
Of these, the similarity-based approaches make use of a distributional similarity measure between arguments and evaluate a heuristic scoring function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "S_{rel}(arg) = \\sum_{arg' \\in Seen(rel)} sim(arg, arg') \\cdot wt_{rel}(arg')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "1 Our repository of selectional preferences is available at http://www.cs.washington.edu/research/ldasp. Erk (2007) showed the advantages of this approach over Resnik's information-theoretic class-based method on a pseudo-disambiguation evaluation. These methods obtain better lexical coverage, but are unable to obtain any abstract representation of selectional preferences.", "cite_spans": [ { "start": 106, "end": 116, "text": "Erk (2007)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Our solution fits into the general category of generative probabilistic models, which model each relation/argument combination as being generated by a latent class variable. These classes are automatically learned from the data. This retains the class-based flavor of the problem, without the knowledge limitations of the explicit class-based approaches. Probably the closest to our work is the model proposed by Rooth et al. (1999), in which each class corresponds to a multinomial over relations and arguments, and EM is used to learn the parameters of the model. In contrast, we use a LinkLDA framework in which each relation is associated with a corresponding multinomial distribution over classes, and each argument is drawn from a class-specific distribution over words; LinkLDA captures the co-occurrence of classes in the two arguments. Additionally, we perform full Bayesian inference using collapsed Gibbs sampling, in which the parameters are integrated out (Griffiths and Steyvers, 2004).", "cite_spans": [ { "start": 410, "end": 429, "text": "Rooth et al. (1999)", "ref_id": "BIBREF30" }, { "start": 957, "end": 987, "text": "(Griffiths and Steyvers, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Recently, Bergsma et al. (2008) proposed the first discriminative approach to selectional preferences. Their insight that pseudo-negative examples could be used as training data allows the application of an SVM classifier, which makes use of many features in addition to the relation-argument co-occurrence frequencies used by other methods. They automatically generated positive and negative examples by selecting arguments having high and low mutual information with the relation. Since it is a discriminative approach, it is amenable to feature engineering, but it needs to be retrained and tuned for each task. On the other hand, generative models produce complete probability distributions of the data, and hence can be integrated with other systems and tasks in a more principled manner (see Sections 4.2.2 and 4.3.1). Additionally, unlike LDA-SP, Bergsma et al.'s system doesn't produce human-interpretable topics. Finally, we note that LDA-SP and Bergsma et al.'s system are potentially complementary: the output of LDA-SP could be used to generate higher-quality training data for their classifier, potentially improving their results.", "cite_spans": [ { "start": 10, "end": 32, "text": "Bergsma et al. (2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" },
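To make the similarity-based scoring function above concrete, the following is a minimal Python sketch (our illustration, not code from any of the cited systems). The toy Jaccard similarity, the frequency-based weighting, and the example data are all assumptions; Erk (2007) evaluates several choices for sim and wt.

```python
# Minimal sketch of the similarity-based selectional preference score
# S_rel(arg) = sum over arg' in Seen(rel) of sim(arg, arg') * wt_rel(arg')
# following Erk (2007). The similarity and weight functions are pluggable.

def score(rel, arg, seen_args, sim, weight):
    """seen_args: dict mapping relation -> list of observed argument strings.
    sim: function (arg, arg2) -> similarity in [0, 1].
    weight: function (rel, arg2) -> weight, e.g. P(arg2 | rel)."""
    return sum(sim(arg, arg2) * weight(rel, arg2) for arg2 in seen_args[rel])

# Purely illustrative components (hypothetical toy data):
def jaccard(a, b):
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

seen = {"is headquartered in": ["New York", "Seattle", "New Delhi"]}
freq = {("is headquartered in", a): 1 / 3 for a in seen["is headquartered in"]}
print(score("is headquartered in", "New York City", seen,
            jaccard, lambda r, a: freq[(r, a)]))
```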
(2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Topic models such as LDA (Blei et al., 2003) and its variants have recently begun to see use in many NLP applications such as summarization (Daum\u00e9 III and Marcu, 2006) , document alignment and segmentation (Chen et al., 2009) , and inferring class-attribute hierarchies (Reisinger and Pasca, 2009 ). Our particular model, LinkLDA, has been applied to a few NLP tasks such as simultaneously modeling the words appearing in blog posts and users who will likely respond to them (Yano et al., 2009) , modeling topic-aligned articles in different languages , and word sense induction (Brody and Lapata, 2009) .", "cite_spans": [ { "start": 25, "end": 44, "text": "(Blei et al., 2003)", "ref_id": "BIBREF2" }, { "start": 140, "end": 167, "text": "(Daum\u00e9 III and Marcu, 2006)", "ref_id": "BIBREF8" }, { "start": 206, "end": 225, "text": "(Chen et al., 2009)", "ref_id": "BIBREF5" }, { "start": 270, "end": 296, "text": "(Reisinger and Pasca, 2009", "ref_id": "BIBREF27" }, { "start": 475, "end": 494, "text": "(Yano et al., 2009)", "ref_id": "BIBREF33" }, { "start": 579, "end": 603, "text": "(Brody and Lapata, 2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Finally, we highlight two systems, developed independently of our own, which apply LDA-style models to similar tasks.\u00d3 S\u00e9aghdha (2010) proposes a series of LDA-style models for the task of computing selectional preferences. This work learns selectional preferences between the following grammatical relations: verb-object, nounnoun, and adjective-noun. It also focuses on jointly modeling the generation of both predicate and argument, and evaluation is performed on a set of human-plausibility judgments obtaining impressive results against Keller and Lapata's (2003) Web hit-count based system. Van Durme and Gildea (2009) proposed applying LDA to general knowledge templates extracted using the KNEXT system (Schubert and Tong, 2003) . In contrast, our work uses LinkLDA and focuses on modeling multiple arguments of a relation (e.g., the subject and direct object of a verb).", "cite_spans": [ { "start": 542, "end": 568, "text": "Keller and Lapata's (2003)", "ref_id": "BIBREF15" }, { "start": 611, "end": 624, "text": "Gildea (2009)", "ref_id": "BIBREF32" }, { "start": 711, "end": 736, "text": "(Schubert and Tong, 2003)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "We present a series of topic models for the task of computing selectional preferences. These models vary in the amount of independence they assume between a 1 and a 2 . At one extreme is Indepen-dentLDA, a model which assumes that both a 1 and a 2 are generated completely independently. On the other hand, JointLDA, the model at the other extreme ( Figure 1 ) assumes both arguments of a specific extraction are generated based on a single hidden variable z. 
LinkLDA (Figure 2) lies between these two extremes, and as demonstrated in Section 4, it is the best model for our relation data.", "cite_spans": [], "ref_spans": [ { "start": 350, "end": 358, "text": "Figure 1", "ref_id": null }, { "start": 468, "end": 477, "text": "(Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Topic Models for Selectional Prefs.", "sec_num": "3" }, { "text": "We are given a set R of binary relations and a corpus D = {r(a_1, a_2)} of extracted instances for these relations. 2 Our task is to compute, for each argument a_i of each relation r, a set of usual argument values (noun phrases) that it takes. For example, for the relation is headquartered in, the first argument set will include companies like Microsoft, Intel, and General Motors, and the second argument will favor locations like New York, California, and Seattle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Models for Selectional Prefs.", "sec_num": "3" }, { "text": "We first describe the straightforward application of LDA to modeling our corpus of extracted relations. In this case two separate LDA models are used to model a_1 and a_2 independently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IndependentLDA", "sec_num": "3.1" }, { "text": "In the generative model for our data, each relation r has a corresponding multinomial over topics, \u03b8_r, drawn from a Dirichlet. For each extraction, a hidden topic z is first picked according to \u03b8_r, and then the observed argument a is chosen according to the multinomial \u03b2_z.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IndependentLDA", "sec_num": "3.1" }, { "text": "Readers familiar with topic modeling terminology can understand our approach as follows: we treat each relation as a document whose contents consist of a bag of words corresponding to all the noun phrases observed as arguments of the relation in our corpus. Formally, LDA generates each argument in the corpus of relations as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IndependentLDA", "sec_num": "3.1" }, { "text": "for each topic t = 1 ... T do: generate \u03b2_t according to a symmetric Dirichlet distribution Dir(\u03b7). for each relation r = 1 ... |R| do: generate \u03b8_r according to a Dirichlet distribution Dir(\u03b1); then for each tuple i = 1 ... N_r do: generate z_{r,i} from Multinomial(\u03b8_r), and generate the argument a_{r,i} from the multinomial \u03b2_{z_{r,i}}. (A toy simulation of this generative story is sketched at the end of this subsection.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IndependentLDA", "sec_num": "3.1" }, { "text": "One weakness of IndependentLDA is that it doesn't jointly model a_1 and a_2 together. Clearly this is undesirable, as information about which topics one of the arguments favors can help inform the topics chosen for the other. For example, class pairs such as (team, game) and (politician, political issue) form much more plausible selectional preferences than, say, (team, political issue) or (politician, game).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IndependentLDA", "sec_num": "3.1" },
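The generative story above can be simulated in a few lines of Python. This is an illustrative sketch only: the vocabulary, relations, tuple counts, and random seed are toy assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha, eta = 4, 0.1, 0.1               # number of topics and Dirichlet hyperparameters
vocab = ["Microsoft", "Intel", "Seattle", "New York", "Obama", "the bill"]
relations = ["is headquartered in", "vetoed"]
n_tuples = {r: 5 for r in relations}       # N_r: extractions per relation (toy)

# Generate beta_t for each topic: a distribution over the argument vocabulary.
beta = rng.dirichlet([eta] * len(vocab), size=T)

corpus = {}
for r in relations:
    theta_r = rng.dirichlet([alpha] * T)   # per-relation distribution over topics
    args = []
    for _ in range(n_tuples[r]):
        z = rng.choice(T, p=theta_r)       # hidden topic for this extraction
        a = rng.choice(vocab, p=beta[z])   # argument drawn from the topic's multinomial
        args.append(a)
    corpus[r] = args
print(corpus)
```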
{ "text": "As a more tightly coupled alternative, we first propose JointLDA, whose graphical model is depicted in Figure 1. The key difference in JointLDA (versus LDA) is that instead of one, it maintains two sets of topics (latent distributions over words), denoted by \u03b2 and \u03b3, one for the classes of each argument. A topic id k represents a pair of topics, \u03b2_k and \u03b3_k, that co-occur in the arguments of extracted relations. Common examples include (Person, Location), (Politician, Political issue), etc. The hidden variable z = k indicates that the noun phrase for the first argument was drawn from the multinomial \u03b2_k, and that the second argument was drawn from \u03b3_k. The per-relation distribution \u03b8_r is a multinomial over the topic ids and represents the selectional preferences, both for arg1s and arg2s, of a relation r.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "JointLDA", "sec_num": "3.2" }, { "text": "Although JointLDA has many desirable properties, it has some drawbacks as well. Most notably, in JointLDA topics correspond to pairs of multinomials (\u03b2_k, \u03b3_k); this leads to a situation in which multiple redundant distributions are needed to represent the same underlying semantic class. For example, consider the case where we need to represent the following selectional preferences for our corpus of relations: (person, location), (person, organization), and (person, crime). Because JointLDA requires a separate pair of multinomials for each topic, it is forced to use 3 separate multinomials to represent the class person, rather than learning a single distribution representing person and choosing 3 different topics for a_2. This results in poor generalization because the data for a single class is divided into multiple topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "JointLDA", "sec_num": "3.2" }, { "text": "In order to address this problem while maintaining the sharing of influence between a_1 and a_2, we next present LinkLDA, which represents a compromise between IndependentLDA and JointLDA. LinkLDA is more flexible than JointLDA, allowing different topics to be chosen for a_1 and a_2, while still modeling the generation of topics from the same distribution for a given relation. Figure 2 illustrates the LinkLDA model in plate notation, which is analogous to the model in (Erosheva et al., 2004). In particular, note that each a_i is drawn from a different hidden topic z_i; however, the z_i's are drawn from the same distribution \u03b8_r for a given relation r. To facilitate learning related topic pairs between arguments we employ a sparse prior over the per-relation topic distributions. Because a few topics are likely to be assigned most of the probability mass for a given relation, it is more likely (although not necessary) that the same topic number k will be drawn for both arguments. When comparing LinkLDA with JointLDA, the better model may not be immediately clear. On the one hand, JointLDA jointly models the generation of both arguments in an extracted tuple. This allows one argument to help disambiguate the other in the case of ambiguous relation strings. LinkLDA, however, is more flexible; rather than requiring both arguments to be generated from one of |Z| possible pairs of multinomials (\u03b2_z, \u03b3_z), LinkLDA allows the arguments of a given extraction to be generated from |Z|^2 possible pairs. Thus, instead of imposing a hard constraint that z_1 = z_2 (as in JointLDA), LinkLDA simply assigns a higher probability to states in which z_1 = z_2, because both hidden variables are drawn from the same (sparse) distribution \u03b8_r. LinkLDA can thus re-use argument classes, choosing different combinations of topics for the arguments if it fits the data better. In Section 4 we show experimentally that LinkLDA outperforms JointLDA (and IndependentLDA) by wide margins. We use LDA-SP to refer to LinkLDA in all the experiments below.", "cite_spans": [ { "start": 481, "end": 504, "text": "(Erosheva et al., 2004)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 384, "end": 392, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "JointLDA", "sec_num": "3.2" },
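For contrast, here is a sketch of LinkLDA's per-tuple generative step (again our illustration with toy parameters, not the authors' code). The key point is that z_1 and z_2 are drawn independently, but from the same sparse per-relation distribution theta_r:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 4
vocab = ["Microsoft", "Seattle", "Obama", "the bill"]
beta = rng.dirichlet([0.1] * len(vocab), size=T)    # arg1-side topics
gamma = rng.dirichlet([0.1] * len(vocab), size=T)   # arg2-side topics
theta_r = rng.dirichlet([0.1] * T)                  # sparse per-relation topic dist.

def generate_tuple():
    # z1 and z2 are drawn independently but from the SAME sparse theta_r,
    # so z1 == z2 is likely without being required (unlike JointLDA).
    z1 = rng.choice(T, p=theta_r)
    z2 = rng.choice(T, p=theta_r)
    a1 = rng.choice(vocab, p=beta[z1])
    a2 = rng.choice(vocab, p=gamma[z2])
    return a1, a2

print([generate_tuple() for _ in range(3)])
```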
{ "text": "For all the models we use collapsed Gibbs sampling for inference, in which each of the hidden variables (e.g., z_{r,i,1} and z_{r,i,2} in LinkLDA) is sampled sequentially, conditioned on a full assignment to all the others, integrating out the parameters (Griffiths and Steyvers, 2004). This produces robust parameter estimates, as it allows computation of expectations over the posterior distribution as opposed to estimating maximum likelihood parameters. In addition, the integration allows the use of sparse priors, which are typically more appropriate for natural language data. In all experiments we use hyperparameters \u03b1 = \u03b7_1 = \u03b7_2 = 0.1. We generated initial code for our samplers using the Hierarchical Bayes Compiler (Daume III, 2007).", "cite_spans": [ { "start": 245, "end": 275, "text": "(Griffiths and Steyvers, 2004)", "ref_id": "BIBREF14" }, { "start": 719, "end": 736, "text": "(Daume III, 2007)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.4" },
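A condensed sketch of the collapsed Gibbs update for a single hidden variable follows. The count-array layout named in the comments is our own illustrative assumption, not the paper's implementation or the Hierarchical Bayes Compiler's output; each factor corresponds to one term that remains after integrating out theta and beta/gamma.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_update(z, a, r, side, counts, alpha=0.1, eta=0.1):
    """Resample one hidden topic assignment (a sketch, not the authors' code).
    counts["rel_topic"][r, t]: assignments to topic t for relation r (both
        argument positions share theta_r, so both contribute here).
    counts["topic_word"][side][t, a]: times argument id a was assigned topic t
        on this side (side 0 -> beta, side 1 -> gamma).
    counts["topic"][side][t]: total assignments to topic t on this side."""
    V = counts["topic_word"][side].shape[1]
    # Remove this token's current assignment from the counts.
    counts["rel_topic"][r, z] -= 1
    counts["topic_word"][side][z, a] -= 1
    counts["topic"][side][z] -= 1
    # P(z = t | everything else) with theta and beta/gamma integrated out:
    # proportional to (n_{r,t} + alpha) * (n_{t,a} + eta) / (n_t + eta * V)
    p = (counts["rel_topic"][r] + alpha) * \
        (counts["topic_word"][side][:, a] + eta) / (counts["topic"][side] + eta * V)
    z_new = rng.choice(len(p), p=p / p.sum())
    # Record the new assignment.
    counts["rel_topic"][r, z_new] += 1
    counts["topic_word"][side][z_new, a] += 1
    counts["topic"][side][z_new] += 1
    return z_new
```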
{ "text": "There are several advantages to using topic models for our task. First, they naturally model the class-based nature of selectional preferences, but don't take a pre-defined set of classes as input. Instead, they compute the classes automatically. This leads to better lexical coverage since the issue of matching a new argument to a known class is side-stepped. Second, the models naturally handle ambiguous arguments, as they are able to assign different topics to the same phrase in different contexts. Inference in these models is also scalable: linear in both the size of the corpus and the number of topics. In addition, there are several scalability enhancements such as SparseLDA (Yao et al., 2009), and an approximation of the Gibbs sampling procedure can be efficiently parallelized (Newman et al., 2009). Finally we note that, once a topic distribution has been learned over a set of training relations, one can efficiently apply inference to unseen relations (Yao et al., 2009).", "cite_spans": [ { "start": 694, "end": 712, "text": "(Yao et al., 2009)", "ref_id": "BIBREF34" }, { "start": 800, "end": 820, "text": "(Newman et al., 2009", "ref_id": "BIBREF22" }, { "start": 979, "end": 997, "text": "(Yao et al., 2009)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Advantages of Topic Models", "sec_num": "3.5" }, { "text": "We perform three main experiments to assess the quality of the preferences obtained using topic models. The first is a task-independent evaluation using a pseudo-disambiguation experiment (Section 4.2), which is a standard way to evaluate the quality of selectional preferences (Rooth et al., 1999; Erk, 2007; Bergsma et al., 2008). We use this experiment to compare the various topic models with each other, and the best model with known state-of-the-art approaches to selectional preferences. Secondly, we show significant improvements in performance on an end task of textual inference in Section 4.3. Finally, we report on the quality of a large database of WordNet-based preferences obtained after manually associating our topics with WordNet classes (Section 4.4).", "cite_spans": [ { "start": 278, "end": 298, "text": "(Rooth et al., 1999;", "ref_id": "BIBREF30" }, { "start": 299, "end": 309, "text": "Erk, 2007;", "ref_id": "BIBREF10" }, { "start": 310, "end": 331, "text": "Bergsma et al., 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For all experiments we make use of a corpus of r(a_1, a_2) tuples, which was automatically extracted by TEXTRUNNER (Banko and Etzioni, 2008) from 500 million Web pages.", "cite_spans": [ { "start": 118, "end": 143, "text": "(Banko and Etzioni, 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Generalization Corpus", "sec_num": "4.1" }, { "text": "To create a generalization corpus from this large dataset, we first selected 3,000 relations from the middle of the tail (we used the 2,000th-5,000th most frequent ones) 3 and collected all of their instances. To reduce sparsity, we discarded all tuples containing an NP that occurred fewer than 50 times in the data. This resulted in a vocabulary of about 32,000 noun phrases, and a set of about 2.4 million tuples in our generalization corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization Corpus", "sec_num": "4.1" }, { "text": "We inferred topic-argument and relation-topic multinomials (\u03b2, \u03b3, and \u03b8) on the generalization corpus by taking 5 samples at a lag of 50, after a burn-in of 750 iterations. Using multiple samples introduces the risk of topic drift due to lack of identifiability; however, we found this not to be a problem in practice. During development we found that the topics tend to remain stable across multiple samples after sufficient burn-in, and multiple samples improved performance. Table 1 lists sample topics and highly ranked words for each (for both arguments), as well as relations favoring those topics.", "cite_spans": [], "ref_spans": [ { "start": 476, "end": 483, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Generalization Corpus", "sec_num": "4.1" },
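A sketch of this corpus construction step (the thresholds follow the description above; the tuple format and the function itself are our illustrative assumptions):

```python
from collections import Counter

def build_generalization_corpus(tuples, lo=2000, hi=5000, min_np_count=50):
    """tuples: iterable of (relation, arg1, arg2) TEXTRUNNER-style extractions.
    Keeps the 2,000th-5,000th most frequent relations ("middle of the tail")
    and drops tuples whose NPs occur fewer than min_np_count times."""
    tuples = list(tuples)
    rel_freq = Counter(r for r, _, _ in tuples)
    kept_rels = {r for r, _ in rel_freq.most_common()[lo:hi]}
    np_freq = Counter()                     # NP counts within the kept relations
    for r, a1, a2 in tuples:
        if r in kept_rels:
            np_freq[a1] += 1
            np_freq[a2] += 1
    return [(r, a1, a2) for r, a1, a2 in tuples
            if r in kept_rels
            and np_freq[a1] >= min_np_count
            and np_freq[a2] >= min_np_count]
```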
(1999)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Task Independent Evaluation", "sec_num": "4.2" }, { "text": "For this experiment we gathered a primary corpus by first randomly selecting 100 high-frequency relations not in the generalization corpus. For each relation we collected all tuples containing arguments in the vocabulary. We held out 500 randomly selected tuples as the test set. For each tu-Topic t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Set", "sec_num": "4.2.1" }, { "text": "Relations which assign highest probability to t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arg1", "sec_num": null }, { "text": "The residue -The mixture -The reaction mixture -The solution -the mixture -the reaction mixture -the residue -The reactionthe solution -The filtrate -the reaction -The product -The crude product -The pellet -The organic layer -Thereto -This solution -The resulting solution -Next -The organic phase -The resulting mixture -C. )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arg2 18", "sec_num": null }, { "text": "was treated with, is treated with, was poured into, was extracted with, was purified by, was diluted with, was filtered through, is disolved in, is washed with EtOAc -CH2Cl2 -H2O -CH.sub.2Cl.sub.2 -H.sub.2O -water -MeOH -NaHCO3 -Et2O -NHCl -CHCl.sub.3 -NHCl -dropwise -CH2Cl.sub.2 -Celite -Et.sub.2O -Cl.sub.2 -NaOH -AcOEt -CH2C12 -the mixture -saturated NaHCO3 -SiO2 -H2O -N hydrochloric acid -NHCl -preparative HPLC -to0 C 151", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arg2 18", "sec_num": null }, { "text": "the Court -The Court -the Supreme Court -The Supreme Court -this Court -Court -The US Supreme Court -the court -This Court -the US Supreme Court -The court -Supreme Court -Judge -the Court of Appeals -A federal judge will hear, ruled in, decides, upholds, struck down, overturned, sided with, affirms the case -the appeal -arguments -a caseevidence -this case -the decision -the law -testimony -the State -an interview -an appeal -cases -the Court -that decision -Congress -a decision -the complaint -oral arguments -a law -the statute 211", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arg2 18", "sec_num": null }, { "text": "President Bush -Bush -The President -Clinton -the President -President Clinton -President George W. Bush -Mr. Bush -The Governor -the Governor -Romney -McCain -The White House -President -Schwarzenegger -Obama hailed, vetoed, promoted, will deliver, favors, denounced, defended the bill -a bill -the decision -the war -the idea -the plan -the move -the legislationlegislation -the measure -the proposal -the deal -this bill -a measure -the programthe law -the resolution -efforts -the agreement -gay marriage -the report -abortion 224", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arg2 18", "sec_num": null }, { "text": "Google -Software -the CPU -Clicking -Excel -the user -Firefox -System -The CPU -Internet Explorer -the ability -Program -users -Option -SQL Server -Code -the OS -the BIOS will display, to store, to load, processes, cannot find, invokes, to search for, to delete data -files -the data -the file -the URLinformation -the files -images -a URL -the information -the IP address -the user -text -the code -a file -the page -IP addresses -PDF files -messages -pages -an IP address Table 1 : Example argument lists from the inferred topics. 
For each topic number t we list the most probable values according to the multinomial distributions for each argument (\u03b2_t and \u03b3_t). The middle column reports a few relations whose inferred topic distributions \u03b8_r assign highest probability to t.", "cite_spans": [], "ref_spans": [ { "start": 474, "end": 481, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "For each tuple r(a_1, a_2) in the held-out set, we removed all tuples in the training set containing either of the rel-arg pairs, i.e., any tuple matching r(a_1, *) or r(*, a_2). Next we used collapsed Gibbs sampling to infer a distribution over topics, \u03b8_r, for each of the relations in the primary corpus (based solely on tuples in the training set), using the topics from the generalization corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Set", "sec_num": "4.2.1" }, { "text": "For each of the 500 observed tuples in the test set we generated a pseudo-negative tuple by randomly sampling two noun phrases from the distribution of NPs in both corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Set", "sec_num": "4.2.1" }, { "text": "Our prediction system needs to determine whether a specific relation-argument pair is admissible according to the selectional preferences or is a random distractor (D). Following previous work, we perform this experiment independently for the two relation-argument pairs (r, a_1) and (r, a_2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction", "sec_num": "4.2.2" }, { "text": "We first compute the probability of observing a_1 as the first argument of relation r given that it is not a distractor, P(a_1|r, \u00acD), which we approximate by its probability given an estimate of the parameters inferred by our model, marginalizing over hidden topics t. The analysis for the second argument is similar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction", "sec_num": "4.2.2" }, { "text": "P(a_1|r, \\neg D) \\approx P_{LDA}(a_1|r) = \\sum_{t=1}^{T} P(a_1|t) P(t|r) = \\sum_{t=1}^{T} \\beta_t(a_1) \\theta_r(t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction", "sec_num": "4.2.2" }, { "text": "A simple application of Bayes Rule gives the probability that a particular argument is not a distractor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction", "sec_num": "4.2.2" }, { "text": "Here the distractor-related probabilities are independent of r, i.e., P(D|r) = P(D), P(a_1|D, r) = P(a_1|D), etc. We estimate P(a_1|D) according to the arguments' frequencies in the generalization corpus. P(\\neg D|r, a_1) = \\frac{P(\\neg D|r) P(a_1|r, \\neg D)}{P(a_1|r)} \\approx \\frac{P(\\neg D) P_{LDA}(a_1|r)}{P(D) P(a_1|D) + P(\\neg D) P_{LDA}(a_1|r)}. Figure 3 plots the precision-recall curve for the pseudo-disambiguation experiment comparing the three different topic models. LDA-SP, which uses LinkLDA, substantially outperforms both IndependentLDA and JointLDA.", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 307, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Prediction", "sec_num": "4.2.2" },
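The two equations above translate directly into code. In this minimal sketch, theta, beta, and the background argument distribution are assumed to be numpy arrays indexed as noted in the docstring, and the prior P(D) = 0.5 (matching the 50/50 mix of observed and distractor tuples) is our assumption:

```python
import numpy as np

def p_not_distractor(a, r, theta, beta, p_arg_bg, p_d=0.5):
    """P(not D | r, a) for one argument slot.
    theta[r]: inferred topic distribution of relation r (length T).
    beta[t, a]: probability of argument id a under topic t.
    p_arg_bg[a]: corpus-wide argument frequency, our estimate of P(a|D)."""
    p_lda = float(beta[:, a] @ theta[r])     # sum_t beta_t(a) * theta_r(t)
    return (1 - p_d) * p_lda / (p_d * p_arg_bg[a] + (1 - p_d) * p_lda)
```

Ranking held-out pairs by this score and sweeping a threshold yields precision-recall curves like those reported below.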
{ "text": "Next, in Figure 4, we compare LDA-SP with mutual information and Jaccard similarities, using both the generalization and primary corpus for computation of similarities. (Figure 4: Comparison to similarity-based selectional preference systems. LDA-SP obtains 85% higher recall at precision 0.9.) We find LDA-SP significantly outperforms these methods. Its edge is most noticeable at high precision; it obtains 85% more recall at 0.9 precision compared to mutual information. Overall, LDA-SP obtains a 15% increase in the area under the precision-recall curve over mutual information. All three systems' AUCs are shown in Table 2; LDA-SP's improvements over both Jaccard and mutual information are highly significant, with a significance level less than 0.01 using a paired t-test.", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 147, "text": "Figure 4", "ref_id": null }, { "start": 612, "end": 619, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.2.3" }, { "text": "In addition to its superior performance in the selectional preference evaluation, LDA-SP also produces a set of coherent topics, which can be useful in their own right. For instance, one could use them for tasks such as set-expansion (Carlson et al., 2010) or automatic thesaurus induction (Etzioni et al., 2005; Kozareva et al., 2008). (Table 2: Area under the precision-recall curve. AUC: LDA-SP 0.833, MI-Sim 0.727, Jaccard-Sim 0.711. LDA-SP's AUC is significantly higher than both similarity-based methods according to a paired t-test with a significance level below 0.01.)", "cite_spans": [ { "start": 227, "end": 249, "text": "(Carlson et al., 2010)", "ref_id": "BIBREF4" }, { "start": 522, "end": 541, "text": "(Etzioni et al., 2005;", "ref_id": "BIBREF12" }, { "start": 542, "end": 564, "text": "Kozareva et al., 2008)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 335, "end": 342, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.2.3" }, { "text": "We now evaluate LDA-SP's ability to improve performance on an end task. We choose the task of improving textual entailment by learning selectional preferences for inference rules and filtering inferences that do not respect them. This application of selectional preferences was introduced by Pantel et al. (2007). For now we stick to inference rules of the form r_1(a_1, a_2) \u21d2 r_2(a_1, a_2), though our ideas are more generally applicable to more complex rules. As an example, the rule (X defeats Y) \u21d2 (X plays Y) holds when X and Y are both sports teams, but fails to produce a reasonable inference if X and Y are Britain and Nazi Germany, respectively.", "cite_spans": [ { "start": 293, "end": 314, "text": "Pantel et al. (2007)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "End Task Evaluation", "sec_num": "4.3" }, { "text": "In order for an inference to be plausible, both relations must have similar selectional preferences, and further, the arguments must obey the selectional preferences of both the antecedent r_1 and the consequent r_2. 4 Pantel et al. (2007) made use of these intuitions by producing a set of class-based selectional preferences for each relation, then filtering out any inferences where the arguments were incompatible with the intersection of these preferences. In contrast, we take a probabilistic approach, evaluating the quality of a specific inference by measuring the probability that the arguments in both the antecedent and the consequent were drawn from the same hidden topic in our model. Note that this probability captures both the requirement that the antecedent and consequent have similar selectional preferences, and that the arguments from a particular instance of the rule's application match their overlap. 
We use z_{r_i,j} to denote the topic that generates the j-th argument of relation r_i. The probability that the two arguments a_1, a_2 were drawn from the same hidden topics factorizes as follows, due to the conditional independencies in our model: 5 P(z_{r_1,1} = z_{r_2,1}, z_{r_1,2} = z_{r_2,2} | a_1, a_2) = P(z_{r_1,1} = z_{r_2,1} | a_1) P(z_{r_1,2} = z_{r_2,2} | a_2)", "cite_spans": [ { "start": 218, "end": 240, "text": "4 Pantel et al. (2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Filtering Inferences", "sec_num": "4.3.1" }, { "text": "To compute each of these factors we simply marginalize over the hidden topics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Inferences", "sec_num": "4.3.1" }, { "text": "P(z_{r_1,j} = z_{r_2,j} | a_j) = \\sum_{t=1}^{T} P(z_{r_1,j} = t | a_j) P(z_{r_2,j} = t | a_j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Inferences", "sec_num": "4.3.1" }, { "text": "where P(z = t | a) can be computed using Bayes rule. For example, P(z_{r_1,1} = t | a_1) = \\frac{P(a_1 | z_{r_1,1} = t) P(z_{r_1,1} = t)}{P(a_1)} = \\frac{\\beta_t(a_1) \\theta_{r_1}(t)}{P(a_1)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Inferences", "sec_num": "4.3.1" }, { "text": "In order to evaluate LDA-SP's ability to filter inferences based on selectional preferences, we need a set of inference rules between the relations in our corpus. We therefore mapped the DIRT inference rules (Lin and Pantel, 2001), which consist of pairs of dependency paths, to TEXTRUNNER relations as follows. We first gathered all instances in the generalization corpus, and for each r(a_1, a_2) created a corresponding simple sentence by concatenating the arguments with the relation string between them. Each such simple sentence was parsed using Minipar (Lin, 1998). From the parses we extracted all dependency paths between nouns that contain only words present in the TEXTRUNNER relation string. These dependency paths were then matched against each pair in the DIRT database, and all pairs of associated relations were collected, producing about 26,000 inference rules. Following Pantel et al. (2007) we randomly sampled 100 inference rules. We then automatically filtered out any rules which contained a negation, or for which the antecedent and consequent contained a pair of antonyms found in WordNet (this left us with 85 rules). For each rule we collected 10 random instances of the antecedent, and generated the consequent. We randomly sampled 300 of these inferences to hand-label.", "cite_spans": [ { "start": 207, "end": 228, "text": "(Lin and Pantel, 2001", "ref_id": "BIBREF18" }, { "start": 564, "end": 575, "text": "(Lin, 1998)", "ref_id": "BIBREF19" }, { "start": 893, "end": 913, "text": "Pantel et al. (2007)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Conditions", "sec_num": "4.3.2" }, { "text": "In Figure 5 we compare the precision and recall of LDA-SP against the top two performing systems described by Pantel et al. (ISP.IIM-\u2228 and ISP.JIM, both using the CBC clusters (Pantel, 2003)). We find that LDA-SP achieves both higher precision and recall than ISP.IIM-\u2228. It is also able to achieve the high-precision point of ISP.JIM and can trade precision to get much larger recall. Table 3: Top 10 and bottom 10 inference rules, as ranked by LDA-SP after automatically filtering out negations and antonyms (using WordNet).", "cite_spans": [ { "start": 176, "end": 190, "text": "(Pantel, 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 388, "end": 395, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3.3" },
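As a concrete sketch of the rule-filtering computation from Section 4.3.1, together with the KL-divergence ranking used in Section 4.3.3 below (our illustration, not the authors' code; we normalize the Bayes-rule posterior over topics rather than dividing by a separately estimated P(a)):

```python
import numpy as np

def topic_posterior(a, r, theta, B):
    # P(z = t | a) via Bayes rule: proportional to B_t(a) * theta_r(t).
    p = B[:, a] * theta[r]
    return p / p.sum()

def rule_plausibility(a1, a2, r1, r2, theta, beta, gamma):
    """P(z_{r1,1} = z_{r2,1}, z_{r1,2} = z_{r2,2} | a1, a2): both argument
    slots of the rule instance must be drawn from matching hidden topics."""
    slot1 = topic_posterior(a1, r1, theta, beta) @ topic_posterior(a1, r2, theta, beta)
    slot2 = topic_posterior(a2, r1, theta, gamma) @ topic_posterior(a2, r2, theta, gamma)
    return float(slot1 * slot2)

def kl_divergence(p, q, eps=1e-12):
    # KL(theta_{r1} || theta_{r2}); smaller means more similar preferences.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))
```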
{ "text": "In addition, we demonstrate LDA-SP's ability to rank inference rules by measuring the Kullback-Leibler divergence 6 between the topic distributions of the antecedent and consequent, \u03b8_{r_1} and \u03b8_{r_2} respectively. Table 3 shows the top 10 and bottom 10 rules out of the 26,000, ranked by KL divergence after automatically filtering antonyms (using WordNet) and negations. For slight variations of a rule (e.g., symmetric pairs) we mention only one example, to show more variety.", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 217, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3.3" }, { "text": "Finally we explore LDA-SP's ability to produce a repository of human-interpretable class-based selectional preferences. As an example, for the relation was born in, we would like to infer that the plausible arguments include (person, location) and (person, date).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Repository of Class-Based Preferences", "sec_num": "4.4" }, { "text": "Since we already have a set of topics, our task reduces to mapping the inferred topics to an equivalent class in a taxonomy (e.g., WordNet). We experimented with automatic methods such as Resnik's, but found them to have all the same problems as directly applying those approaches to the SP task. 7 Guided by the fact that we have a relatively small number of topics (600 total, 300 for each argument), we simply chose to label them manually. By labeling this small number of topics we can infer class-based preferences for an arbitrary number of relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Repository of Class-Based Preferences", "sec_num": "4.4" }, { "text": "In particular, we applied a semi-automatic scheme to map topics to WordNet. We first applied Resnik's approach to automatically shortlist a few candidate WordNet classes for each topic. We then manually picked from the shortlist the class that best represented the 20 top arguments of a topic (similar to Table 1). We marked all incoherent topics with a special symbol \u2205. This process took one of the authors about 4 hours to complete.", "cite_spans": [], "ref_spans": [ { "start": 311, "end": 318, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "A Repository of Class-Based Preferences", "sec_num": "4.4" }, { "text": "To evaluate how well our topic-class associations carry over to unseen relations, we used the same random sample of 100 relations from the pseudo-disambiguation experiment. 8 For each argument of each relation we picked the top two topics according to frequency in the 5 Gibbs samples. We then discarded any topics which were labeled with \u2205; this resulted in a set of 236 predictions. A few examples are displayed in Table 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Repository of Class-Based Preferences", "sec_num": "4.4" }, { "text": "We evaluated these classes and found the accuracy to be around 0.88. 
We contrast this with Pantel's repository, 9 the only other released database of selectional preferences to our knowledge. We evaluated the same 100 relations from his website, tagged the top 2 classes for each argument, and found the accuracy to be roughly 0.55.", "cite_spans": [ { "start": 339, "end": 340, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Repository of Class-Based Preferences", "sec_num": "4.4" }, { "text": "7 Perhaps recent work on automatic coherence ranking (Newman et al., 2010) and labeling (Mei et al., 2007) could produce better results.", "cite_spans": [ { "start": 392, "end": 413, "text": "(Newman et al., 2010)", "ref_id": "BIBREF23" }, { "start": 427, "end": 445, "text": "(Mei et al., 2007)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "A Repository of Class-Based Preferences", "sec_num": null }, { "text": "8 Recall that these 100 were not part of the original 3,000 in the generalization corpus, and are, therefore, representative of new \"unseen\" relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Repository of Class-Based Preferences", "sec_num": null }, { "text": "9 http://demo.patrickpantel.com/Content/LexSem/paraphrase.htm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Repository of Class-Based Preferences", "sec_num": null }, { "text": "Table 4: Class-based selectional preferences (reconstructed; columns are arg1 class | relation | arg2 class): politician#1 | was running for | leader#1; people#1 | will love | show#3; organization#1 | has responded to | accusation#2; administrative unit#1 | has appointed | administrator#3.", "cite_spans": [], "ref_spans": [ { "start": 255, "end": 262, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "A Repository of Class-Based Preferences", "sec_num": "4.4" }, { "text": "We emphasize that tagging a pair of class-based preferences is a highly subjective task, so these results should be treated as preliminary. Still, these early results are promising. We wish to undertake a larger-scale study soon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Repository of Class-Based Preferences", "sec_num": "4.4" }, { "text": "We have presented an application of topic modeling to the problem of automatically computing selectional preferences. Our method, LDA-SP, learns a distribution over topics for each relation while simultaneously grouping related words into these topics. This approach is capable of producing human-interpretable classes, yet avoids the drawbacks of traditional class-based approaches (poor lexical coverage and ambiguity). LDA-SP achieves state-of-the-art performance on predictive tasks such as pseudo-disambiguation and filtering incorrect inferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "Because LDA-SP generates a complete probabilistic model for our relation data, its results are easily applicable to many other tasks such as identifying similar relations, ranking inference rules, etc. 
In the future, we wish to apply our model to automatically discover new inference rules and paraphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "Finally, our repository of selectional preferences for 10,000 relations is available at http://www.cs.washington.edu/research/ldasp.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "We focus on binary relations, though the techniques presented in the paper are easily extensible to n-ary relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Many of the most frequent relations have very weak selectional preferences, and thus provide little signal for inferring meaningful topics. For example, the relations has and is can take just about any arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Similarity-based and discriminative methods are not applicable to this task as they offer no straightforward way to compare the similarity between the selectional preferences of two relations. 5 Note that all probabilities are conditioned on an estimate of the parameters \u03b8, \u03b2, \u03b3 from our model, which are omitted for compactness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "KL divergence is an information-theoretic measure of the difference between two probability distributions, defined as follows: KL(P||Q) = \\sum_x P(x) \\log \\frac{P(x)}{Q(x)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Tim Baldwin, Colin Cherry, Jesse Davis, Elena Erosheva, Stephen Soderland, and Dan Weld, in addition to the anonymous reviewers, for helpful comments on a previous draft. This research was supported in part by NSF grant IIS-0803481, ONR grant N00014-08-1-0431, DARPA contract FA8750-09-C-0179, and a National Defense Science and Engineering Graduate (NDSEG) Fellowship 32 CFR 168a, and was carried out at the University of Washington's Turing Center.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The tradeoffs between open and traditional relation extraction", "authors": [ { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele Banko and Oren Etzioni. 2008. The tradeoffs between open and traditional relation extraction. In ACL-08: HLT.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Discriminative learning of selectional preference from unlabeled text", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Randy", "middle": [], "last": "Goebel", "suffix": "" } ], "year": 2008, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shane Bergsma, Dekang Lin, and Randy Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. 
In EMNLP.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Latent dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "J. Mach. Learn. Res", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bayesian word sense induction", "authors": [ { "first": "Samuel", "middle": [], "last": "Brody", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2009, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "103--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel Brody and Mirella Lapata. 2009. Bayesian word sense induction. In EACL, pages 103-111, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Coupled semi-supervised learning for information extraction", "authors": [ { "first": "Andrew", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Betteridge", "suffix": "" }, { "first": "Richard", "middle": [ "C" ], "last": "Wang", "suffix": "" }, { "first": "Estevam", "middle": [ "R" ], "last": "Hruschka", "suffix": "" }, { "first": "Tom", "middle": [ "M" ], "last": "Mitchell", "suffix": "" } ], "year": 2010, "venue": "WSDM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Carlson, Justin Betteridge, Richard C. Wang, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Coupled semi-supervised learning for information extraction. In WSDM 2010.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Global models of document structure using latent permutations", "authors": [ { "first": "Harr", "middle": [], "last": "Chen", "suffix": "" }, { "first": "S", "middle": [ "R K" ], "last": "Branavan", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Karger", "suffix": "" } ], "year": 2009, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harr Chen, S. R. K. Branavan, Regina Barzilay, and David R. Karger. 2009. Global models of document structure using latent permutations. In NAACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Class-based probability estimation using a semantic hierarchy", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 2002, "venue": "Comput. Linguist", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark and David Weir. 2002. Class-based probability estimation using a semantic hierarchy. Comput. 
Linguist.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Similarity-based models of word cooccurrence probabilities", "authors": [ { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Fernando", "middle": [ "C", "N" ], "last": "Pereira", "suffix": "" } ], "year": 1999, "venue": "Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Lillian Lee, and Fernando C. N. Pereira. 1999. Similarity-based models of word cooccurrence probabilities. In Machine Learning.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bayesian query-focused summarization", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2006. Bayesian query-focused summarization. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "hbc: Hierarchical bayes compiler", "authors": [ { "first": "Hal", "middle": [], "last": "Daume", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daume III. 2007. hbc: Hierarchical bayes compiler. http://hal3.name/hbc.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A simple, similarity-based model for selectional preferences", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Erk. 2007. A simple, similarity-based model for selectional preferences. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Mixed-membership models of scientific publications", "authors": [ { "first": "Elena", "middle": [], "last": "Erosheva", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Fienberg", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the National Academy of Sciences of the United States of America", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Erosheva, Stephen Fienberg, and John Lafferty. 2004. Mixed-membership models of scientific publications. 
Proceedings of the National Academy of Sciences of the United States of America.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Unsupervised named-entity extraction from the web: An experimental study", "authors": [ { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Cafarella", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Downey", "suffix": "" }, { "first": "Ana", "middle": [ "Maria" ], "last": "Popescu", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Shaked", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderl", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Yates", "suffix": "" } ], "year": 2005, "venue": "Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Etzioni, Michael Cafarella, Doug Downey, Ana maria Popescu, Tal Shaked, Stephen Soderl, Daniel S. Weld, and Alex Yates. 2005. Unsuper- vised named-entity extraction from the web: An ex- perimental study. Artificial Intelligence.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic labeling of semantic roles", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2002, "venue": "Comput. Linguist", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Comput. Linguist.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Finding scientific topics", "authors": [ { "first": "T", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Steyvers", "suffix": "" } ], "year": 2004, "venue": "Proc Natl Acad Sci", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. L. Griffiths and M. Steyvers. 2004. Finding scien- tific topics. Proc Natl Acad Sci U S A.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Using the web to obtain frequencies for unseen bigrams", "authors": [ { "first": "Frank", "middle": [], "last": "Keller", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2003, "venue": "Comput. Linguist", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frank Keller and Mirella Lapata. 2003. Using the web to obtain frequencies for unseen bigrams. Comput. Linguist.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Semantic class learning from the web with hyponym pattern linkage graphs", "authors": [ { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zornitsa Kozareva, Ellen Riloff, and Eduard Hovy. 2008. Semantic class learning from the web with hyponym pattern linkage graphs. 
In ACL-08: HLT.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Generalizing case frames using a thesaurus and the mdl principle", "authors": [ { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Abe", "suffix": "" } ], "year": 1998, "venue": "Comput. Linguist", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hang Li and Naoki Abe. 1998. Generalizing case frames using a thesaurus and the mdl principle. Comput. Linguist.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Dirt-discovery of inference rules from text", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "KDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2001. Dirt-discovery of inference rules from text. In KDD.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dependency-based evaluation of minipar", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proc. Workshop on the Evaluation of Parsing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. Dependency-based evaluation of minipar. In Proc. Workshop on the Evaluation of Parsing Systems.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Automatic labeling of multinomial topic models", "authors": [ { "first": "Qiaozhu", "middle": [], "last": "Mei", "suffix": "" }, { "first": "Xuehua", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2007, "venue": "KDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiaozhu Mei, Xuehua Shen, and ChengXiang Zhai. 2007. Automatic labeling of multinomial topic models. In KDD.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Polylingual topic models", "authors": [ { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Hanna", "middle": [ "M" ], "last": "Wallach", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Naradowsky", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Mimno, Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. 2009. Polylingual topic models. In EMNLP.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distributed algorithms for topic models", "authors": [ { "first": "David", "middle": [], "last": "Newman", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Asuncion", "suffix": "" }, { "first": "Padhraic", "middle": [], "last": "Smyth", "suffix": "" }, { "first": "Max", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Newman, Arthur Asuncion, Padhraic Smyth, and Max Welling. 2009. Distributed algorithms for topic models. 
JMLR.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Automatic evaluation of topic coherence", "authors": [ { "first": "David", "middle": [], "last": "Newman", "suffix": "" }, { "first": "Jey", "middle": [ "Han" ], "last": "Lau", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Grieser", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2010, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Newman, Jey Han Lau, Karl Grieser, and Tim- othy Baldwin. 2010. Automatic evaluation of topic coherence. In NAACL-HLT.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Latent variable models of selectional preference", "authors": [ { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diarmuid\u00d3 S\u00e9aghdha. 2010. Latent variable mod- els of selectional preference. In Proceedings of the 48th Annual Meeting of the Association for Compu- tational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Isp: Learning inferential selectional preferences", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Bhagat", "suffix": "" }, { "first": "Bonaventura", "middle": [], "last": "Coppola", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Chklovski", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2007, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Pantel, Rahul Bhagat, Bonaventura Coppola, Timothy Chklovski, and Eduard H. Hovy. 2007. Isp: Learning inferential selectional preferences. In HLT-NAACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Clustering by committee", "authors": [ { "first": "Pantel", "middle": [], "last": "Patrick Andre", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Andre Pantel. 2003. Clustering by commit- tee. Ph.D. thesis, University of Alberta, Edmonton, Alta., Canada.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Latent variable models of concept-attribute attachment", "authors": [ { "first": "Joseph", "middle": [], "last": "Reisinger", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Pasca", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Reisinger and Marius Pasca. 2009. Latent vari- able models of concept-attribute attachment. 
In Pro- ceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Selectional constraints: an information-theoretic model and its computational realization", "authors": [ { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1996, "venue": "Cognition", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Resnik. 1996. Selectional constraints: an information-theoretic model and its computational realization. Cognition.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Selectional preference and sense disambiguation", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1997, "venue": "Proc. of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1997. Selectional preference and sense disambiguation. In Proc. of the ACL SIGLEX Work- shop on Tagging Text with Lexical Semantics: Why, What, and How?", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Inducing a semantically annotated lexicon via em-based clustering", "authors": [ { "first": "Mats", "middle": [], "last": "Rooth", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Detlef", "middle": [], "last": "Prescher", "suffix": "" }, { "first": "Glenn", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Franz", "middle": [], "last": "Beil", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a semanti- cally annotated lexicon via em-based clustering. In Proceedings of the 37th annual meeting of the Asso- ciation for Computational Linguistics on Computa- tional Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Extracting and evaluating general world knowledge from the brown corpus", "authors": [ { "first": "Lenhart", "middle": [], "last": "Schubert", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Tong", "suffix": "" } ], "year": 2003, "venue": "Proc. of the HLT-NAACL Workshop on Text Meaning", "volume": "", "issue": "", "pages": "7--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lenhart Schubert and Matthew Tong. 2003. Extract- ing and evaluating general world knowledge from the brown corpus. In In Proc. of the HLT-NAACL Workshop on Text Meaning, pages 7-13.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Topic models for corpus-centric knowledge generalization", "authors": [ { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Van Durme and Daniel Gildea. 2009. Topic models for corpus-centric knowledge generalization. 
Tae Yano, William W. Cohen, and Noah A. Smith. 2009. Predicting response to political blog posts with topic models. In NAACL.
L. Yao, D. Mimno, and A. McCallum. 2009. Efficient methods for topic model inference on streaming document collections. In KDD.

Figure 1: JointLDA
Figure 2: Comparison of LDA-based approaches on the pseudo-disambiguation task. LDA-SP (LinkLDA) substantially outperforms the other models.
Figure 3: Precision and recall on the inference filtering task.