{ "paper_id": "J02-2003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:44:28.698564Z" }, "title": "Class-Based Probability Estimation Using a Semantic Hierarchy", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "addrLine": "2 Buccleuch Place", "postCode": "EH8 9LW", "settlement": "Edinburgh", "country": "UK" } }, "email": "stephenc@cogsci.ed.ac.uk" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sussex", "location": { "postCode": "BN1 9QH", "settlement": "Brighton", "country": "UK" } }, "email": "david.weir@cogs.susx.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This article concerns the estimation of a particular kind of probability, namely, the probability of a noun sense appearing as a particular argument of a predicate. In order to overcome the accompanying sparse-data problem, the proposal here is to define the probabilities in terms of senses from a semantic hierarchy and exploit the fact that the senses can be grouped into classes consisting of semantically similar senses. There is a particular focus on the problem of how to determine a suitable class for a given sense, or, alternatively, how to determine a suitable level of generalization in the hierarchy. A procedure is developed that uses a chi-square test to determine a suitable level of generalization. In order to test the performance of the estimation method, a pseudo-disambiguation task is used, together with two alternative estimation methods. Each method uses a different generalization procedure; the first alternative uses the minimum description length principle, and the second uses Resnik's measure of selectional preference. In addition, the performance of our method is investigated using both the standard Pearson chisquare statistic and the log-likelihood chi-square statistic.", "pdf_parse": { "paper_id": "J02-2003", "_pdf_hash": "", "abstract": [ { "text": "This article concerns the estimation of a particular kind of probability, namely, the probability of a noun sense appearing as a particular argument of a predicate. In order to overcome the accompanying sparse-data problem, the proposal here is to define the probabilities in terms of senses from a semantic hierarchy and exploit the fact that the senses can be grouped into classes consisting of semantically similar senses. There is a particular focus on the problem of how to determine a suitable class for a given sense, or, alternatively, how to determine a suitable level of generalization in the hierarchy. A procedure is developed that uses a chi-square test to determine a suitable level of generalization. In order to test the performance of the estimation method, a pseudo-disambiguation task is used, together with two alternative estimation methods. Each method uses a different generalization procedure; the first alternative uses the minimum description length principle, and the second uses Resnik's measure of selectional preference. 
In addition, the performance of our method is investigated using both the standard Pearson chisquare statistic and the log-likelihood chi-square statistic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This article concerns the problem of how to estimate the probabilities of noun senses appearing as particular arguments of predicates. Such probabilities can be useful for a variety of natural language processing (NLP) tasks, such as structural disambiguation and statistical parsing, word sense disambiguation, anaphora resolution, and language modeling. To see how such knowledge can be used to resolve structural ambiguities, consider the following prepositional phrase attachment ambiguity:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Fred ate strawberries with a spoon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1", "sec_num": null }, { "text": "The ambiguity arises because the prepositional phrase with a spoon can attach to either strawberries or ate. The ambiguity can be resolved by noting that the correct sense of spoon is more likely to be an argument of \"ate-with\" than \"strawberries-with\" (Li and Abe 1998; Clark and Weir 2000) .", "cite_spans": [ { "start": 253, "end": 270, "text": "(Li and Abe 1998;", "ref_id": "BIBREF14" }, { "start": 271, "end": 291, "text": "Clark and Weir 2000)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Example 1", "sec_num": null }, { "text": "The problem with estimating a probability model defined over a large vocabulary of predicates and noun senses is that this involves a huge number of parameters, which results in a sparse-data problem. In order to reduce the number of parameters, we propose to define a probability model over senses in a semantic hierarchy and to exploit the fact that senses can be grouped into classes consisting of semantically similar senses. The assumption underlying this approach is that the probability of a particular noun sense can be approximated by a probability based on a suitably chosen class. For example, it seems reasonable to suppose that the probability of (the food sense of) chicken appearing as an object of the verb eat can be approximated in some way by a probability based on a class such as FOOD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1", "sec_num": null }, { "text": "There are two elements involved in the problem of using a class to estimate the probability of a noun sense. First, given a suitably chosen class, how can that class be used to estimate the probability of the sense? And second, given a particular noun sense, how can a suitable class be determined? This article offers novel solutions to both problems, and there is a particular focus on the second question, which can be thought of as how to find a suitable level of generalization in the hierarchy. 1 The semantic hierarchy used here is the noun hierarchy of WordNet (Fellbaum 1998), version 1.6. Previous work has considered how to estimate probabilities using classes from WordNet in the context of acquiring selectional preferences (Resnik 1998; Ribas 1995; Li and Abe 1998; McCarthy 2000) , and this previous work has also addressed the question of how to determine a suitable level of generalization in the hierarchy. 
Li and Abe use the minimum description length principle to obtain a level of generalization, and Resnik uses a simple technique based on a statistical measure of selectional preference. (The work by Ribas builds on that by Resnik, and the work by McCarthy builds on that by Li and Abe.) We compare our estimation method with those of Resnik and Li and Abe, using a pseudo-disambiguation task. Our method outperforms these alternatives on the pseudo-disambiguation task, and an analysis of the results shows that the generalization methods of Resnik and Li and Abe appear to be overgeneralizing, at least for this task.", "cite_spans": [ { "start": 501, "end": 502, "text": "1", "ref_id": null }, { "start": 737, "end": 750, "text": "(Resnik 1998;", "ref_id": "BIBREF22" }, { "start": 751, "end": 762, "text": "Ribas 1995;", "ref_id": "BIBREF23" }, { "start": 763, "end": 779, "text": "Li and Abe 1998;", "ref_id": "BIBREF14" }, { "start": 780, "end": 794, "text": "McCarthy 2000)", "ref_id": "BIBREF16" }, { "start": 1199, "end": 1211, "text": "Li and Abe.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Example 1", "sec_num": null }, { "text": "Note that the problem being addressed here is the engineering problem of estimating predicate argument probabilities, with the aim of producing estimates that will be useful for NLP applications. In particular, we are not addressing the problem of acquiring selectional restrictions in the way this is usually construed (Resnik 1993; Ribas 1995; McCarthy 1997; Li and Abe 1998; Wagner 2000) . The purpose of using a semantic hierarchy for generalization is to overcome the sparse data problem, rather than find a level of abstraction that best represents the selectional restrictions of some predicate. This point is considered further in Section 5.", "cite_spans": [ { "start": 320, "end": 333, "text": "(Resnik 1993;", "ref_id": "BIBREF21" }, { "start": 334, "end": 345, "text": "Ribas 1995;", "ref_id": "BIBREF23" }, { "start": 346, "end": 360, "text": "McCarthy 1997;", "ref_id": "BIBREF15" }, { "start": 361, "end": 377, "text": "Li and Abe 1998;", "ref_id": "BIBREF14" }, { "start": 378, "end": 390, "text": "Wagner 2000)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Example 1", "sec_num": null }, { "text": "The next section describes the noun hierarchy from WordNet and gives a more precise description of the probabilities to be estimated. Section 3 shows how a class from WordNet can be used to estimate the probability of a noun sense. Section 4 shows how a chi-square test is used as part of the generalization procedure, and Section 5 describes the generalization procedure. Section 6 describes the alternative class-based estimation methods used in the pseudo-disambiguation experiments, and Section 7 presents those experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1", "sec_num": null }, { "text": "The noun hierarchy of WordNet consists of senses, or what Miller (1998) calls lexicalized concepts, organized according to the \"is-a-kind-of\" relation. Note that we are using concept to refer to a lexicalized concept or sense and not to a set of senses; we use class to refer to a set of senses. There are around 66,000 different concepts in the noun hierarchy of WordNet version 1.6. A concept in WordNet is represented by a \"synset,\" which is the set of synonymous words that can be used to denote that concept. For example, the synset for the concept cocaine 2 is { cocaine, cocain, coke, snow, C }. 
Let syn(c) be the synset for concept c, and let cn(n) = { c | n \u2208 syn(c) } be the set of concepts that can be denoted by noun n.", "cite_spans": [ { "start": 58, "end": 71, "text": "Miller (1998)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "The Semantic Hierarchy", "sec_num": "2." }, { "text": "The hierarchy has the structure of a directed acyclic graph (although only around 1% of the nodes have more than one parent), where the edges of the graph constitute what we call the \"direct-isa\" relation. Let isa be the transitive, reflexive closure of direct-isa; then c isa c implies c is a kind of c. If c isa c, then c is a hypernym of c and c is a hyponym of c. In fact, the hierarchy is not a single hierarchy but instead consists of nine separate subhierarchies, each headed by the most general kind of concept, such as entity , abstraction , event , and psychological feature . For the purposes of this work we add a common root dominating the nine subhierarchies, which we denote root . There are some important points that need to be clarified regarding the hierarchy. First, every concept in the hierarchy has a nonempty synset (except the notional concept root ). Even the most general concepts, such as entity , can be denoted by some noun; the synset for entity is { entity, something }. Second, there is an important distinction between an individual concept and a set of concepts. For example, the individual concept entity should not be confused with the set or class consisting of concepts denoting kinds of entities. To make this distinction clear, we use c = { c | c isa c } to denote the set of concepts dominated by concept c, including c itself. For example, animal is the set consisting of those concepts corresponding to kinds of animals (including animal itself).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Semantic Hierarchy", "sec_num": "2." }, { "text": "The probability of a concept appearing as an argument of a predicate is written p (c | v, r) , where c is a concept in WordNet, v is a predicate, and r is an argument position. 3 The focus in this article is on the arguments of verbs, but the techniques discussed can be applied to any predicate that takes nominal arguments, such as adjectives. The probability p(c | v, r) is to be interpreted as follows: This is the probability that some noun n in syn(c), when denoting concept c, appears in position r of verb v (given v and r). The example used throughout the article is p( dog | run, subj), which is the conditional probability that some noun in the synset of dog , when denoting the concept dog , appears in the subject position of the verb run. Note that, in practice, no distinction is made between the different senses of a verb (although the techniques do allow such a distinction) and that each use of a noun is assumed to correspond to exactly one concept. 4", "cite_spans": [ { "start": 82, "end": 92, "text": "(c | v, r)", "ref_id": null }, { "start": 177, "end": 178, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Semantic Hierarchy", "sec_num": "2." }, { "text": "This section explains how a set of concepts, or class, from WordNet can be used to estimate the probability of an individual concept. More specifically, we explain how a set of concepts c , where c is some hypernym of concept c, can be used to estimate p (c | v, r) . (Recall that c denotes the set of concepts dominated by c , including c itself.) 
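To make the notation concrete, the following sketch shows how syn(c), cn(n), and the set of concepts dominated by a concept c can be computed with NLTK's WordNet interface. This is an illustration only: NLTK ships a later WordNet version than the 1.6 hierarchy used in this article, so the synset inventory differs, and the helper name `dominated` (for the set written with an overline in the article) is ours rather than the article's.

```python
# Illustrative sketch: NLTK bundles a newer WordNet than version 1.6 used in
# the article, so synsets and counts will not match the paper exactly.
from nltk.corpus import wordnet as wn

def syn(c):
    """syn(c): the set of nouns (lemma names) in the synset of concept c."""
    return set(c.lemma_names())

def cn(n):
    """cn(n): the set of concepts (noun synsets) that noun n can denote."""
    return set(wn.synsets(n, pos=wn.NOUN))

def dominated(c):
    """The set of concepts dominated by c (transitive closure of direct-isa
    downwards, i.e. all hyponyms), including c itself."""
    return {c} | set(c.closure(lambda s: s.hyponyms()))

if __name__ == "__main__":
    dog = wn.synset("dog.n.01")        # the concept <dog>
    animal = wn.synset("animal.n.01")  # the concept <animal>
    print(syn(dog))                    # e.g. {'dog', 'domestic_dog', 'Canis_familiaris'}
    print(len(cn("dog")))              # number of senses of the noun "dog"
    print(dog in dominated(animal))    # True: <dog> isa <animal>
```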
One possible approach would be simply to substitute c for the individual concept c. This is a poor solution, however, since p (c | v, r) is the conditional probability that some noun denoting a concept in c appears in position r of verb v. For example, p( animal | run, subj) is the probability that some noun denoting a kind of animal appears in the subject position of the verb run. Probabilities of sets of concepts are obtained by summing over the concepts in the set: ", "cite_spans": [ { "start": 255, "end": 265, "text": "(c | v, r)", "ref_id": null }, { "start": 475, "end": 485, "text": "(c | v, r)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(c | v, r) = c \u2208c p(c | v, r)", "eq_num": "( 1)" } ], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(c | v, r) = p(v | c, r) p(c | r) p(v | r)", "eq_num": "(2)" } ], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "Since p(c | r) and p(v | r) are conditioned on the argument slot only, we assume these can be estimated satisfactorily using relative frequency estimates. Alternatively, a standard smoothing technique such as Good-Turing could be used. 5 This leaves p (v | c, r) . Continuing with the dog example, the proposal is to estimate p(run | dog , subj) using a relative-frequency estimate of p(run | animal , subj) or an estimate based on a similar, suitably chosen class. Thus, assuming this choice of class, p( dog | run, subj) would be approximated as follows:", "cite_spans": [ { "start": 236, "end": 237, "text": "5", "ref_id": null }, { "start": 252, "end": 262, "text": "(v | c, r)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p( dog | run, subj) \u2248 p(run | animal , subj) p( dog | subj) p(run | subj)", "eq_num": "(3)" } ], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "The following derivation shows that if p(v | c i , r) = k for each child c i of c , and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(v | c , r) = k, then p(v | c , r) is also equal to k: p(v | c , r) = p(c | v, r) p(v | r) p(c | r) (4) = p(v | r) p(c | r) i p(c i | v, r) + p(c | v, r) (5) = p(v | r) p(c | r) i p(v | c i , r) p(c i | r) p(v | r) + p(v | c , r) p(c | r) p(v | r) (6) = 1 p(c | r) i k p(c i | r) + k p(c | r) (7) = k p(c | r) i p(c i | r) + p(c | r) (8) = k", "eq_num": "(9)" } ], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "Note that the proof applies only to a tree, since the proof assumes that c is partitioned by c and the sets of concepts dominated by each of the daughters of c , which is not necessarily true for a directed acyclic graph (DAG). 
WordNet is a DAG but is a close approximation to a tree, and so we assume this will not be a problem in practice. 6 The derivation in (4)-(9) shows how probabilities conditioned on sets of concepts can remain constant when moving up the hierarchy, and this suggests a way of finding a suitable set, c , as a generalization for concept c: Initially set c equal to c and move up the hierarchy, changing the value of c , until there is a significant change in p (v | c , r) . Estimates of p(v | c i , r), for each child c i of c , can be compared to see whether p(v | c , r) has significantly changed. (We ignore the probability p(v | c , r) and consider the probabilities p(v | c i , r) only.) Note that this procedure rests on the assumption that", "cite_spans": [ { "start": 342, "end": 343, "text": "6", "ref_id": null }, { "start": 687, "end": 698, "text": "(v | c , r)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "p(v | c, r) is close to p(v | c, r). (In fact, p(v | c, r) is equal to p(v | c, r)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "when c is a leaf node.) So when finding a suitable level for the estimation of p( sandwich | eat, obj), for example, we first assume that p(eat | sandwich , obj) is a good approximation of p(eat | sandwich , obj) and then apply the procedure to p(eat | sandwich , obj).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "A feature of the proposed generalization procedure is that comparing probabilities of the form p (v | C, r) , where C is a class, is closely related to comparing ratios of probabilities of the form p(C | v, r)/p(C | r) (for a given verb and argument position):", "cite_spans": [ { "start": 97, "end": 107, "text": "(v | C, r)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(v | C, r) = p(C | v, r) p(C | r) p(v | r)", "eq_num": "( 10)" } ], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "Note that, for a given verb and argument position, p(v | r) is constant across classes. Equation 10is of interest because the ratio p(C | v, r)/p(C | r) can be interpreted as a measure of association between the verb v and class C. This ratio is similar to pointwise mutual information (Church and Hanks 1990) and also forms part of Resnik's association score, which will be introduced in Section 6. Thus the generalization procedure can be thought of as one that finds \"homogeneous\" areas of the hierarchy, that is, areas consisting of classes that are associated to a similar degree with the verb (Clark and Weir 1999) . Finally, we note that the proposed estimation method does not guarantee that the estimates form a probability distribution over the concepts in the hierarchy, and so a normalization factor is required:", "cite_spans": [ { "start": 599, "end": 620, "text": "(Clark and Weir 1999)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." 
}, { "text": "p sc (c | v, r) =p (v | [c, v, r], r)p (c|r) p(v|r) c \u2208Cp (v | [c , v, r], r)p (c |r) p(v|r) (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "We use p sc to denote an estimate obtained using our method (since the technique finds sets of semantically similar senses, or \"similarity classes\") and [c, v, r] to denote the class chosen for concept c in position r of verb v;p denotes a relative frequency estimate, and C denotes the set of concepts in the hierarchy.", "cite_spans": [ { "start": 153, "end": 162, "text": "[c, v, r]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "Before providing the details of the generalization procedure, we give the relativefrequency estimates of the relevant probabilities and deal with the problem of am-biguous data. The relative-frequency estimates are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(c | r) = f (c,r) f (r) = v \u2208V f (c, v , r) v \u2208V c \u2208C f (c , v , r) (12) p(v | r) = f (v,r) f (r) = c \u2208C f (c , v, r) v \u2208V c \u2208C f (c , v , r) (13) p(v | c , r) = f (c ,v,r) f (c ,r) = c \u2208c f (c , v, r) v \u2208V c \u2208c f (c , v , r)", "eq_num": "(14)" } ], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "f (c, v, r)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "is the number of (n, v, r) triples in the data in which n is being used to denote c, and V is the set of verbs in the data. The problem is that the estimates are defined in terms of frequencies of senses, whereas the data are assumed to be in the form of (n, v, r) triples: a noun, verb, and argument position. All the data used in this work have been obtained from the British National Corpus (BNC), using the system of Briscoe and Carroll (1997) , which consists of a shallow-parsing component that is able to identify verbal arguments. We take a simple approach to the problem of estimating the frequencies of senses, by distributing the count for each noun in the data evenly among all senses of the noun:f", "cite_spans": [ { "start": 421, "end": 447, "text": "Briscoe and Carroll (1997)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "(c, v, r) = n\u2208syn(c) f (n, v, r) |cn(n)| (15) wheref (c, v, r)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "is an estimate of the number of times that concept c appears in position r of verb v, and |cn(n)| is the cardinality of cn(n). This is the approach taken by Li and Abe (1998) , Ribas (1995) , and McCarthy (2000). 7 Resnik (1998) explains how this apparently crude technique works surprisingly well. 
Alternative approaches are described in Clark and Weir (1999) (see also Clark [2001] ), Abney and Light (1999) , and Ciaramita and Johnson (2000) .", "cite_spans": [ { "start": 157, "end": 174, "text": "Li and Abe (1998)", "ref_id": "BIBREF14" }, { "start": 177, "end": 189, "text": "Ribas (1995)", "ref_id": "BIBREF23" }, { "start": 215, "end": 228, "text": "Resnik (1998)", "ref_id": "BIBREF22" }, { "start": 339, "end": 360, "text": "Clark and Weir (1999)", "ref_id": "BIBREF7" }, { "start": 371, "end": 383, "text": "Clark [2001]", "ref_id": "BIBREF6" }, { "start": 387, "end": 409, "text": "Abney and Light (1999)", "ref_id": "BIBREF0" }, { "start": 416, "end": 444, "text": "Ciaramita and Johnson (2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Class-Based Probability Estimation", "sec_num": "3." }, { "text": "In this section we show how to test whether p(v | c , r) changes significantly when considering a node higher in the hierarchy. Consider the problem of deciding whether p(run | canine , subj) is a good approximation of p(run | dog , subj). ( canine is the parent of dog in WordNet.) To do this, the probabilities p(run | c i , subj) are compared using a chi-square test, where the c i are the children of canine . In this case, the null hypothesis of the test is that the probabilities p(run | c i , subj) are the same for each child c i . By judging the strength of the evidence against the null hypothesis, how similar the true probabilities are likely to be can be determined. If the test indicates that the probabilities are sufficiently unlikely to be the same, then the null hypothesis is rejected, and the conclusion is that p(run | canine , subj) is not a good approximation of p(run | dog , subj). An example contingency table, based on counts obtained from a subset of the BNC using the system of Briscoe and Carroll, is given in Table 1 . (Recall that the frequencies are estimated by distributing the count for a noun equally among the noun's senses; this explains the fractional counts.) One column contains estimates of counts arising Table 1 Contingency table for the children of canine in the subject position of run. from concepts in c i appearing in the subject position of the verb run:f (c i , run, subj). A second column presents estimates of counts arising from concepts in c i appearing in the subject position of a verb other than run. The figures in brackets are the expected values if the null hypothesis is true. There is a choice of which statistic to use in conjunction with the chi-square test. The usual statistic encountered in textbooks is the Pearson chi-square statistic, denoted X 2 :", "cite_spans": [], "ref_spans": [ { "start": 1040, "end": 1047, "text": "Table 1", "ref_id": null }, { "start": 1249, "end": 1256, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Using a Chi-Square Test to Compare Probabilities", "sec_num": "4." }, { "text": "c if (c i , run, subj)f(c i , subj) \u2212f(c i , run, subj)f(c i , subj) = v\u2208Vf (c i , v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using a Chi-Square Test to Compare Probabilities", "sec_num": "4." }, { "text": "X 2 = i,j (o ij \u2212 e ij ) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using a Chi-Square Test to Compare Probabilities", "sec_num": "4." }, { "text": "e ij (16) where o ij is the observed value for the cell in row i and column j, and e ij is the corresponding expected value. 
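The sketch below shows, for a small contingency table of the kind in Table 1, how the expected values under the null hypothesis and the Pearson statistic of equation (16) can be computed. The counts are invented for illustration (fractional counts are allowed, as explained above) and are not the Table 1 figures.

```python
def pearson_chi_square(observed):
    """X^2 (equation 16) for a table given as a list of rows of counts.
    Expected values come from the row and column totals, as under the null
    hypothesis that every row shares the same column distribution."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    total = float(sum(row_totals))
    x2 = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / total   # expected value e_ij
            x2 += (o - e) ** 2 / e
    return x2

# Hypothetical counts: one row per child class c_i; the two columns are
# "appears with the verb" and "appears with some other verb".
table = [[4.5, 120.0],
         [0.5, 40.0],
         [12.0, 230.0]]
# Degrees of freedom for an m x 2 table are m - 1; the critical value could be
# obtained with, e.g., scipy.stats.chi2.ppf(1 - alpha, dof).
print(pearson_chi_square(table))
```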
An alternative statistic is the log-likelihood chi-square statistic, denoted G 2 : 8", "cite_spans": [ { "start": 5, "end": 9, "text": "(16)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Using a Chi-Square Test to Compare Probabilities", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G 2 = 2 i,j o ij log e o ij e ij", "eq_num": "(17)" } ], "section": "Using a Chi-Square Test to Compare Probabilities", "sec_num": "4." }, { "text": "The two statistics have similar values when the counts in the contingency table are large (Agresti 1996) . The statistics behave differently, however, when the table contains low counts, and, since corpus data are likely to lead to some low counts, the question of which statistic to use is an important one. Dunning (1993) argues for the use of G 2 rather than X 2 , based on an analysis of the sampling distributions of G 2 and X 2 , and results obtained when using the statistics to acquire highly associated bigrams. We consider Dunning's analysis at the end of this section, and the question of whether to use G 2 or X 2 will be discussed further there. For now, we continue with the discussion of how the chi-square test is used in the generalization procedure. For Table 1 , the value of G 2 is 3.8, and the value of X 2 is 2.5. Assuming a level of significance of \u03b1 = 0.05, the critical value is 12.6 (for six degrees of freedom). Thus, for this \u03b1 value, the null hypothesis would not be rejected for either statistic, and the conclusion would be that there is no reason to suppose that p(run | canine , subj) is not a reasonable approximation of p(run | dog , subj).", "cite_spans": [ { "start": 90, "end": 104, "text": "(Agresti 1996)", "ref_id": "BIBREF2" }, { "start": 309, "end": 323, "text": "Dunning (1993)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 772, "end": 779, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Using a Chi-Square Test to Compare Probabilities", "sec_num": "4." }, { "text": "Contingency table for the children of liquid in the object position of drink.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "c if (c i , drink, obj)f(c i , obj) \u2212f (c i , drink, obj)f(c i , obj) = v\u2208Vf (c i , v, obj) beverage", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "261.0 (238.7) 2,367.7 (2,390.0) 2,628.7 supernatant 0.0 (0.1) 1.0 (0.9) 1.0 alcohol 11.5 (9.4) 92.0 (94.1) 103.5 ammonia 0.0 (0.8) 8.5 (7.7) 8.5 antifreeze 0.0 (0.1) 1.0 (0.9) 1.0 distillate 0.0 (0.5) 6.0 (5 As a further example, Table 2 gives counts for the children of liquid in the object position of drink. Again, the counts have been obtained from a subset of the BNC using the system of Briscoe and Carroll. Not all the sets dominated by the children of liquid are shown, as some, such as sheep dip , never appear in the object position of a verb in the data. This example is designed to show a case in which the null hypothesis is rejected. The value of G 2 for this table is 29.0, and the value of X 2 is 21.2. So for G 2 , even if an \u03b1 value as low as 0.0005 were being used (for which the critical value is 27.9 for eight degrees of freedom), the null hypothesis would still be rejected. For X 2 , the null hypothesis is rejected for \u03b1 values greater than 0.005. 
This seems reasonable, since the probabilities associated with the children of liquid and the object position of drink would be expected to show a lot of variation across the children.", "cite_spans": [], "ref_spans": [ { "start": 230, "end": 237, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "A key question is how to select the appropriate value for \u03b1. One solution is to treat \u03b1 as a parameter and set it empirically by taking a held-out test set and choosing the value of \u03b1 that maximizes performance on the relevant task. For example, Clark and Weir (2000) describes a prepositional phrase attachment algorithm that employs probability estimates obtained using the WordNet method described here. To set the value of \u03b1, the performance of the algorithm on a development set could be compared across different values of \u03b1, and the value that leads to the best performance could be chosen. Note that this approach sets no constraints on the value of \u03b1: The value could be as high as 0.995 or as low as 0.0005, depending on the particular application.", "cite_spans": [ { "start": 246, "end": 267, "text": "Clark and Weir (2000)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "There may be cases in which the conditions for the appropriate application of a chisquare test are not met. One condition that is likely to be violated is the requirement that expected values in the contingency table not be too small. (A rule of thumb often found in textbooks is that the expected values should be greater than five.) One response to this problem is to apply some kind of thresholding and either ignore counts below the threshold, or apply the test only to tables that do not contain low counts. Ribas (1995) , Abe (1998), McCarthy (2000) , and Wagner (2000) all use some kind of thresholding when dealing with counts in the hierarchy (although not in the context of a chi-square test). Another approach would be to use Fisher's exact test (Agresti 1996; Pedersen 1996) , which can be applied to tables regardless of the size of the counts they contain. The main problem with this test is that it is computationally expensive, especially for large contingency tables.", "cite_spans": [ { "start": 513, "end": 525, "text": "Ribas (1995)", "ref_id": "BIBREF23" }, { "start": 528, "end": 555, "text": "Abe (1998), McCarthy (2000)", "ref_id": null }, { "start": 562, "end": 575, "text": "Wagner (2000)", "ref_id": "BIBREF25" }, { "start": 757, "end": 771, "text": "(Agresti 1996;", "ref_id": "BIBREF2" }, { "start": 772, "end": 786, "text": "Pedersen 1996)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "What we have found in practice is that applying the chi-square test to tables dominated by low counts tends to produce an insignificant result, and the null hypothesis is not rejected. The consequences of this for the generalization procedure are that low-count tables tend to result in the procedure moving up to the next node in the hierarchy. But given that the purpose of the generalization is to overcome the sparsedata problem, moving up a node is desirable, and therefore we do not modify the test for tables with low counts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "The final issue to consider is which chi-square statistic to use. 
Dunning (1993) argues for the use of G 2 rather than X 2 , based on the claim that the sampling distribution of G 2 approaches the true chi-square distribution quicker than the sampling distribution of X 2 . However, Agresti (1996, page 34 ) makes the opposite claim: \"The sampling distributions of X 2 and G 2 get closer to chi-squared as the sample size n increases. . . . The convergence is quicker for X 2 than G 2 .\"", "cite_spans": [ { "start": 66, "end": 80, "text": "Dunning (1993)", "ref_id": "BIBREF11" }, { "start": 283, "end": 305, "text": "Agresti (1996, page 34", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "In addition, Pedersen (2001) questions whether one statistic should be preferred over the other for the bigram acquisition task and cites Cressie and Read (1984) , who argue that there are some cases where the Pearson statistic is more reliable than the log-likelihood statistic. Finally, the results of the pseudo-disambiguation experiments presented in Section 7 are at least as good, if not better, when using X 2 rather than G 2 , and so we conclude that the question of which statistic to use should be answered on a per application basis.", "cite_spans": [ { "start": 13, "end": 28, "text": "Pedersen (2001)", "ref_id": "BIBREF19" }, { "start": 138, "end": 161, "text": "Cressie and Read (1984)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "The procedure for finding a suitable class, c , to generalize concept c in position r of verb v works as follows. (We refer to c as the \"similarity class\" of c with respect to v and r and the hypernym c as top (c, v, r) , since the chosen hypernym sits at the \"top\" of the similarity class.) Initially, concept c is assigned to a variable top. Then, by working up the hierarchy, successive hypernyms of c are assigned to top, and this process continues until the probabilities associated with the sets of concepts dominated by top and the siblings of top are significantly different. Once a node is reached that results in a significant result for the chi-square test, the procedure stops, and top is returned as top (c, v, r) . In cases where a concept has more than one parent, the parent is chosen that results in the lowest value of the chi-square statistic, as this indicates the probabilities are the most similar. The set top(c, v, r) is the similarity class of c for verb v and position r. Figure 1 gives an algorithm for determining top (c, v, r) . Figure 2 gives an example of the procedure at work. Here, top( soup , stir, obj) is being determined. The example is based on data from a subset of the BNC, with 303 cases of an argument in the object position of stir. The G 2 statistic is used, together with an \u03b1 value of 0.05. Initially, top is set to soup , and the probabilities corresponding to the children of dish are compared: p(stir | soup , obj), p(stir | lasagne , obj), p(stir | haggis , obj), and so on for the rest of the children. The chi-square test results in a G 2 value of 14.5, compared to a critical value of 55.8. Since G 2 is less than the critical value, the procedure moves up to the next node. This process continues until a significant result is obtained, which first occurs at substance when comparing the children of object . Thus substance is the chosen level of generalization. Now we show how the chosen level of generalization varies with \u03b1 and how it varies with the size of the data set. 
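The following sketch restates the procedure just described (the algorithm of Figure 1) in code: starting from the concept itself, move to successive hypernyms until the chi-square test over the children of the current candidate's parent signals a significant difference in the estimates of p(v | c̄_i, r). The helpers `hier.children`, `hier.parents`, `counts.f_class`, `counts.f_class_total`, and `critical_value` are hypothetical interfaces standing in for the WordNet hierarchy, the estimated frequencies of equation (15), and a chi-square table (for example, scipy.stats.chi2.ppf(1 - alpha, dof)); G² is used here, as in the soup/stir example.

```python
import math

def g_square(observed):
    """Log-likelihood statistic G^2 (equation 17) over a table of counts."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    total = float(sum(row_totals))
    g2 = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / total
            if o > 0 and e > 0:
                g2 += 2.0 * o * math.log(o / e)
    return g2

def children_table(parent, v, r, hier, counts):
    """One row per child c_i of `parent`: [f(c_i-bar, v, r), f(c_i-bar, other verbs, r)].
    The row for the parent's own concept is ignored, as in the text."""
    rows = []
    for child in hier.children(parent):
        with_v = counts.f_class(child, v, r)
        rows.append([with_v, counts.f_class_total(child, r) - with_v])
    return rows

def top(c, v, r, alpha, hier, counts, critical_value):
    """Return the hypernym whose dominated set is used to estimate p(v | c-bar, r)."""
    node = c
    while True:
        parents = hier.parents(node)
        if not parents:                                  # reached <root>
            return node
        # For a DAG node with several parents, take the parent whose children's
        # table gives the lowest statistic (i.e. the most similar probabilities).
        tables = {p: children_table(p, v, r, hier, counts) for p in parents}
        parent = min(parents, key=lambda p: g_square(tables[p]))
        stat = g_square(tables[parent])
        dof = max(len(tables[parent]) - 1, 1)            # (m - 1) for an m x 2 table
        if stat > critical_value(alpha, dof):            # significant difference: stop
            return node
        node = parent                                    # otherwise generalize further
```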
A note of clarification is required before presenting the results. In related work on acquiring selectional preferences (Ribas 1995; McCarthy 1997; Li and Abe 1998; Wagner 2000) , the level of generalization is often determined for a small number of hand-picked verbs and the result compared with the researcher's intuition about the most appropriate level for representing a selectional preference. According to this approach, if sandwich were chosen to represent hotdog in the object position of eat, this might be considered an undergeneralization, since food might be considered more appropriate. For this work we argue that such an evaluation is not appropriate; since the purpose of this work is probability estimation, the most appropriate level is the one that leads to the most accurate estimate, and this may or may not agree with intuition. Furthermore, we show in Section 7 that generalizing unnecessarily can be harmful for some tasks: If we already have lots of data regarding sandwich , why generalize any higher? Thus the purpose of this section is not to show that the acquired levels are \"correct,\" but simply to show how the levels vary with \u03b1 and the sample size.", "cite_spans": [ { "start": 210, "end": 219, "text": "(c, v, r)", "ref_id": null }, { "start": 717, "end": 726, "text": "(c, v, r)", "ref_id": null }, { "start": 1046, "end": 1055, "text": "(c, v, r)", "ref_id": null }, { "start": 2152, "end": 2163, "text": "(Ribas 1995", "ref_id": "BIBREF23" }, { "start": 2197, "end": 2206, "text": "(c, v, r)", "ref_id": null }, { "start": 2215, "end": 2231, "text": "Li and Abe 1998;", "ref_id": "BIBREF14" }, { "start": 2232, "end": 2244, "text": "Wagner 2000)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 998, "end": 1006, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1058, "end": 1066, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Generalization Procedure", "sec_num": "5." }, { "text": "To show how the level of generalization varies with changes in \u03b1, top(c, v, obj) was determined for a number of hand-picked (c, v, obj) triples over a range of values for \u03b1. The triples were chosen to give a range of strongly and weakly selecting verbs and a range of verb frequencies. The data were again extracted from a subset of the BNC using the system of Briscoe and Carroll (1997) , and the G 2 statistic was used in the chi-square test. The results are shown in Table 3 . 
The number of times the verb occurred with some object is also given in the table.", "cite_spans": [ { "start": 361, "end": 387, "text": "Briscoe and Carroll (1997)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 470, "end": 477, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "The Generalization Procedure", "sec_num": "5." }, { "text": "The results suggest that the generalization level becomes more specific as \u03b1 increases. This is to be expected, since, given a contingency table chosen at random, a higher value of \u03b1 is more likely to lead to a significant result than a lower value of \u03b1. We also see that, for some cases, the value of \u03b1 has little effect on the level. We would expect there to be less change in the level of generalization for strongly selecting verbs, such as drink and eat, and a greater range of levels for weakly selecting verbs such as see. This is because any significant difference in probabilities is likely to be more marked for a strongly selecting verb, and likely to be significant over a wider range of \u03b1 values. The table only provides anecdotal evidence, but provides some support to this argument.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generalization Procedure", "sec_num": "5." }, { "text": "To investigate more generally how the level of generalization varies with changes in \u03b1, and also with changes in sample size, we took 6, 000 (c, v, obj) triples and calculated the difference in depth between c and top(c, v, r) for each triple. The 6, 000 triples were taken from the first experimental test set described in Section 7, and the training data from this experiment were used to provide the counts. (The test set contains nouns, rather than noun senses, and so the sense of the noun that is most probable given the verb and object slot was used.) An average difference in depth was then calculated. To give an example of how the difference in depth was calculated, suppose dog generalized to placental mammal via canine and carnivore ; in this case the difference would be three.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Generalization Procedure", "sec_num": "5." }, { "text": "The results for various levels of \u03b1 and different sample sizes are shown in Table 4 . The figures in each column arise from using the contingency tables based on the complete training data, but with each count in the table multiplied by the percentage at the head of the column. Thus the 50% column is based on contingency tables in which each original count is multiplied by 50%, which is equivalent to using a sample one-half the size of the original training set. Reading across a row shows how the generalization varies with sample size, and reading down a column shows how it varies with \u03b1. The results show clearly that the extent of generalization decreases with an increase in the value of \u03b1, supporting the trend observed in Table 3 . The results also show that the extent of generalization increases with a decrease in sample size. Again, this is to be expected, since any difference in probability estimates is less likely to be significant for tables with low counts.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 734, "end": 741, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "The Generalization Procedure", "sec_num": "5." 
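As a small illustration of the depth calculation used above, the following sketch counts the direct-isa steps from a concept up to its chosen class; `hier.parents` is a hypothetical accessor for the hierarchy, and multiple parents are handled by simply following the first one, since WordNet is close to a tree.

```python
def generalization_depth(c, top_c, hier):
    """Difference in depth between concept c and its chosen class top(c, v, r).
    E.g. <dog> -> <canine> -> <carnivore> -> <placental mammal> gives 3."""
    depth, node = 0, c
    while node != top_c:
        node = hier.parents(node)[0]   # WordNet is nearly a tree: take the first parent
        depth += 1
    return depth

# Average number of generalized levels over a set of test cases, assuming a
# top(c, v, r) procedure as sketched in Section 5:
# avg = sum(generalization_depth(c, top_c, hier) for (c, top_c) in cases) / len(cases)
```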
}, { "text": "The approaches used for comparison are that of Resnik (1993 Resnik ( , 1998 , subsequently developed by Ribas (1995) , and that of Li and Abe (1998) , which has been adopted by McCarthy (2000) . These have been chosen because they directly address the question of how to find a suitable level of generalization in WordNet.", "cite_spans": [ { "start": 47, "end": 59, "text": "Resnik (1993", "ref_id": "BIBREF21" }, { "start": 60, "end": 75, "text": "Resnik ( , 1998", "ref_id": "BIBREF22" }, { "start": 104, "end": 116, "text": "Ribas (1995)", "ref_id": "BIBREF23" }, { "start": 131, "end": 148, "text": "Li and Abe (1998)", "ref_id": "BIBREF14" }, { "start": 177, "end": 192, "text": "McCarthy (2000)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "The first alternative uses the \"association score,\" which is a measure of how well a set of concepts, C, satisfies the selectional preferences of a verb, v, for an argument position, r:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "9 A(C, v, r) = p(C | v, r) log 2 p(C | v, r) p(C | r)", "eq_num": "(18)" } ], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "An estimate of the association score,\u00c2 (C, v, r) , can be obtained using relative frequency estimates of the probabilities. The key question is how to determine a suitable level of generalization for concept c, or, alternatively, how to find a suitable class to represent concept c (assuming the choice is from those classes that contain all concepts dominated by some hypernym of c). Resnik's solution to this problem (which he neatly refers to as the \"vertical-ambiguity\" problem) is to choose the class that maximizes the association score. It is not clear that the class with the highest association score is always the most appropriate level of generalization. For example, this approach does not always generalize appropriately for arguments that are negatively associated with some verb. To see why, consider the problem of deciding how well the concept location satisfies the preferences of the verb eat for its object. Since locations are not the kinds of things that are typically eaten, a suitable level of generalization would correspond to a class that has a low association score with respect to eat. However, location is a kind of entity in WordNet, 10 and choosing the class with the highest association score is likely to produce entity as the chosen class. This is a problem, because the association score of entity with respect to eat may be too high to reflect the fact that location is a very unlikely object of the verb.", "cite_spans": [ { "start": 39, "end": 48, "text": "(C, v, r)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "Note that the solution to the vertical-ambiguity problem presented in the previous sections is able to generalize appropriately in such cases. 
Continuing with the eat location example, our generalization procedure is unlikely to get as high as entity (assuming a reasonable number of examples of eat in the training data), since the probabilities corresponding to the daughters of entity are likely to be very different with respect to the object position of eat.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "The second alternative uses the minimum description length (MDL) principle. Li and Abe use MDL to select a set of classes from a hierarchy, together with their associated probabilities, to represent the selectional preferences of a particular verb. The preferences and class-based probabilities are then used to estimate probabilities of the form p (n | v, r) , where n is a noun, v is a verb, and r is an argument slot.", "cite_spans": [ { "start": 349, "end": 359, "text": "(n | v, r)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "Li and Abe's application of MDL requires the hierarchy to be in the form of a thesaurus, in which each leaf node represents a noun and internal nodes represent the class of nouns that the node dominates. The hierarchy is also assumed to be in the form of a tree. The class-based models consist of a partition of the set of nouns (leaf nodes) and a probability associated with each class in the partition. The probabilities are the conditional probabilities of each class, given the relevant verb and argument position. Li and Abe refer to such a partition as a \"cut\" and the cut together with the probabilities as a \"tree cut model.\" The probabilities of the classes in a cut, \u0393, satisfy the following constraint: In order to determine the probability of a noun, the probability of a class is assumed to be distributed uniformly among the members of that class:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "p(n | v, r) = 1 |C| p(C | v, r) for all n \u2208 C (20)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "Since WordNet is a hierarchy with noun senses, rather than nouns, at the nodes, Li and Abe deal with the issue of word sense ambiguity using the method described in Section 3, by dividing the count for a noun equally among the concepts whose synsets contain the noun. Also, since WordNet is a DAG, Li and Abe turn WordNet into a tree by copying each subgraph with multiple parents. And so that each noun in the data appears (in a synset) at a leaf node, Li and Abe remove those parts of the hierarchy dominated by a noun in the data (but only for that instance of WordNet corresponding to the relevant verb).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "An example cut showing part of the WordNet hierarchy is shown in Figure 3 (based on an example from Li and Abe [1998] ; the dashed lines indicate parts of the hierarchy that are not shown in the diagram). This is a possible cut for the object position of the verb eat, and the cut consists of the following classes: life form , solid , fluid , food , artifact , space , time , set . 
(The particular choice of classes for the cut in this example is not too important; the example is designed to show how probabilities of senses are estimated from class probabilities.) Since the class in the cut containing pizza is food , the probability p( pizza | eat, obj) would be estimated as p( food | eat, obj)/| food |. Similarly, since the class in the cut containing mushroom is life form , the probability p( mushroom | eat, obj) would be estimated as p( life form | eat, obj)/| life form |.", "cite_spans": [ { "start": 100, "end": 117, "text": "Li and Abe [1998]", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 65, "end": 73, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "The uniform-distribution assumption (20) means that cuts close to the root of the hierarchy result in a greater smoothing of the probability estimates than cuts near the leaves. Thus there is a trade-off between choosing a model that has a cut near the leaves, which is likely to overfit the data, and a more general (simple) model near the root, which is likely to underfit the data. MDL looks ideally suited to the task of model selection, since it is designed to deal with precisely this trade-off. The simplicity of a model is measured using the model description length, which is an information-theoretic term and denotes the number of bits required to encode the model. The fit to the data is measured using the data description length, which is the number of bits required to encode the data (relative to the model). The overall description length is the sum of the model description length and the data description length, and the MDL principle is to select the model with the shortest description length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "We used McCarthy's (2000) implementation of MDL. So that every noun is represented at a leaf node, McCarthy does not remove parts of the hierarchy, as Li and Abe do, but instead creates new leaf nodes for each synset at an internal node. McCarthy also does not transform WordNet into a tree, which is strictly required for Li and Abe's application of MDL. This did create a problem with overgeneralization: Many of the cuts returned by MDL were overgeneralizing at the entity node. The reason is that person , which is close to entity and dominated by entity , has two parents:", "cite_spans": [ { "start": 8, "end": 25, "text": "McCarthy's (2000)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "life form and causal agent . This DAG-like property was responsible for the overgeneralization, and so we removed the link between person and causal agent . This appeared to solve the problem, and the results presented later for the average degree of generalization do not show an overgeneralization compared with those given in Li and Abe (1998) .", "cite_spans": [ { "start": 329, "end": 346, "text": "Li and Abe (1998)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Class-Based Estimation Methods", "sec_num": "6." }, { "text": "The task we used to compare the class-based estimation techniques is a decision task previously used by Pereira, Tishby, and Lee (1993) and Rooth et al. (1999) . 
The task is to decide which of two verbs, v and v', is more likely to take a given noun, n, as an object. The test and training data were obtained as follows. A number of verb-direct object pairs were extracted from a subset of the BNC, using the system of Briscoe and Carroll. All those pairs containing a noun not in WordNet were removed, and each verb and argument was lemmatized. This resulted in a data set of around 1.3 million (v, n) pairs. To form a test set, 3,000 of these pairs were randomly selected such that each selected pair contained a fairly frequent verb. (Following Pereira, Tishby, and Lee, only those verbs that occurred between 500 and 5,000 times in the data were considered.) Each instance of a selected pair was then deleted from the data to ensure that the test data were unseen. The remaining pairs formed the training data. To complete the test set, a further fairly frequent verb, v', was randomly chosen for each (v, n) pair. The random choice was made according to the verb's frequency in the original data set, subject to the condition that the pair (v', n) did not occur in the training data. Given the set of (v, n, v') triples, the task is to decide whether (v, n) or (v', n) is the correct pair. 11 We acknowledge that the task is somewhat artificial, but pseudo-disambiguation tasks of this kind are becoming popular in statistical NLP because of the ease with which training and test data can be created. We also feel that the pseudo-disambiguation task is useful for evaluating the different estimation methods, since it directly addresses the question of how likely a particular predicate is to take a given noun as an argument. An evaluation using a PP attachment task was attempted in Clark and Weir (2000) , but the evaluation was limited by the relatively small size of the Penn Treebank.", "cite_spans": [ { "start": 104, "end": 135, "text": "Pereira, Tishby, and Lee (1993)", "ref_id": "BIBREF20" }, { "start": 140, "end": 159, "text": "Rooth et al. (1999)", "ref_id": "BIBREF24" }, { "start": 1395, "end": 1397, "text": "11", "ref_id": null }, { "start": 1890, "end": 1911, "text": "Clark and Weir (2000)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Pseudo-Disambiguation Experiments", "sec_num": "7." }, { "text": "Results for the pseudo-disambiguation task with one-fifth training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "Generalization technique | % correct | av.gen. | sd.gen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "Similarity class, \u03b1 = 0.0005 | 66.7 | 4.5 | 1.9; \u03b1 = 0.05 | 68.4 | 4.1 | 1.9; \u03b1 = 0.3 | 70.2 | 3.7 | 1.9; \u03b1 = 0.75 | 72.3 | 3.0 | 1.9; \u03b1 = 0.995 | 72.4 | 1.9 | 1.6. Low class | 71.9 | 1.1 | 1.1. MDL | 62.9 | 4.7 | 1.9. Assoc | 62.6 | 4.1 | 2.0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "Note: av.gen. is the average number of generalized levels; sd.gen. is the standard deviation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "We refer to Resnik's approach as \"Assoc.\" The results are given for a range of \u03b1 values and demonstrate clearly that the performance of similarity class varies little with changes in \u03b1 and that similarity class outperforms both MDL and Assoc. 
We also give a score for our approach using a simple generalization procedure, which we call \"low class.\" The procedure is to select the first class that has a count greater than zero (relative to the verb and argument position), which is likely to return a low level of generalization, on the whole. The results show that our generalization technique only narrowly outperforms the simple alternative. Note that, although low class is based on a very simple generalization method, the estimation method is still using our class-based technique, by applying Bayes' theorem and conditioning on a class, as described in Section 3; the difference is in how the class is chosen.", "cite_spans": [ { "start": 216, "end": 218, "text": "12", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "To investigate the results, we calculated the average number of generalized levels for each approach. The number of generalized levels for a concept c (relative to a verb v and argument position r) is the difference in depth between c and top(c, v, r), as explained in Section 5. For each test case, the number of generalized levels for both verbs, v and v′, was calculated, but only for the chosen sense of n. The results are given in the third column of Table 5 and demonstrate clearly that both MDL and Assoc are generalizing to a greater extent than similarity class. (The fourth column gives a standard deviation figure.) These results suggest that MDL and Assoc are overgeneralizing, at least for the purposes of this task.", "cite_spans": [], "ref_spans": [ { "start": 456, "end": 463, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "To investigate why the value for \u03b1 had no impact on the results, we repeated the experiment, but with one fifth of the data. A new data set was created by taking every fifth pair of the original 1.3 million pairs. A test set of 3,000 triples was created from this new data set, as before, but this time only verbs that occurred between 100 and 1,000 times were considered. The results using these test and training data are given in Table 6 .", "cite_spans": [], "ref_spans": [ { "start": 433, "end": 440, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "These results show a variation in performance across values for \u03b1, with an optimal performance when \u03b1 is around 0.75. (Of course, in practice, the value for \u03b1 would need to be optimized on a held-out set.) But even with this variation, similarity class is still outperforming MDL and Assoc across the whole range of \u03b1 values. Note that the \u03b1 values corresponding to the lowest scores lead to a significant amount of generalization, which provides additional evidence that MDL and Assoc are overgeneralizing for this task. The low-class method scores highly for this data set also, but given that the task is one that apparently favors a low level of generalization, the high score is not too surprising.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "As a final experiment, we compared the task performance using the X 2 , rather than G 2 , statistic in the chi-square test. The results are given in Table 7 for the complete data set. 13 The figures in brackets give the average number of generalized levels.
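For concreteness, the two statistics can be computed on a small contingency table as follows; this is a minimal sketch with invented counts, not the tables actually used in the experiments.

```python
import math

# Pearson X^2 and log-likelihood G^2 on a 2-by-2 contingency table
# (illustrative counts only; not the tables used in the experiments).
observed = [[8, 2],
            [30, 60]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

x2 = 0.0
g2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / n    # expected count under independence
        x2 += (o - e) ** 2 / e                   # Pearson chi-square term
        if o > 0:
            g2 += 2 * o * math.log(o / e)        # log-likelihood chi-square term

print(f"X^2 = {x2:.3f}, G^2 = {g2:.3f}")
```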
The X 2 statistic is performing at least as well as G 2 , and the results show that the average level of generalization is slightly higher for G 2 than X 2 . This suggests a possible explanation for the results presented here and those in Dunning (1993) : that the X 2 statistic provides a less conservative test when counts in the contingency table are low. (By a conservative test we mean one in which the null hypothesis is not easily rejected.) A less conservative test is better suited to the pseudo-disambiguation task, since it results in a lower level of generalization, on the whole, which is good for this task. In contrast, the task that Dunning considers, the discovery of bigrams, is better served by a more conservative test.", "cite_spans": [ { "start": 184, "end": 186, "text": "13", "ref_id": null }, { "start": 497, "end": 511, "text": "Dunning (1993)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "We have presented a class-based estimation method that incorporates a procedure for finding a suitable level of generalization in WordNet. This method has been shown to provide superior performance on a pseudo-disambiguation task, compared with two alternative approaches. An analysis of the results has shown that the other approaches appear to be overgeneralizing, at least for this task. One of the features of the generalization procedure is the way that \u03b1, the level of significance in the chi-square test, is treated as a parameter. This allows some control over the extent of generalization, which can be tailored to particular tasks. We have also shown that the task performance is at least as good when using the Pearson chi-square statistic as when using the log-likelihood chi-square statistic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." }, { "text": "There are a number of ways in which this work could be extended. One possibility would be to use all the classes dominated by the hypernyms of a concept, rather than just one, to estimate the probability of the concept. An estimate would be obtained for each hypernym, and the estimates combined in a linear interpolation. An approach similar to this is taken by Bikel (2000) , in the context of statistical parsing.", "cite_spans": [ { "start": 363, "end": 375, "text": "Bikel (2000)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." }, { "text": "There is still room for investigation of the hidden-data problem when data are used that have not been sense disambiguated. In this article, a very simple approach is taken, which is to split the count for a noun evenly among the noun's senses. Abney and Light (1999) have tried a more motivated approach, using the expectation maximization algorithm, but with little success. The approach described in Clark and Weir (1999) is shown in Clark (2001) to have some impact on the pseudo-disambiguation task, but only with certain values of the \u03b1 parameter, and ultimately does not improve on the best performance.", "cite_spans": [ { "start": 245, "end": 267, "text": "Abney and Light (1999)", "ref_id": "BIBREF0" }, { "start": 403, "end": 424, "text": "Clark and Weir (1999)", "ref_id": "BIBREF7" }, { "start": 437, "end": 449, "text": "Clark (2001)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." 
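A minimal sketch of the simple count-splitting scheme just mentioned is given below; the data structures and the `senses_of` lookup are assumptions made for the example, not the authors' code.

```python
from collections import defaultdict

def add_noun_occurrence(sense_counts, noun, verb, rel, senses_of):
    """Split the count for an ambiguous noun evenly among its senses, as
    described above. `senses_of` is an assumed lookup from a noun to its
    WordNet senses; counts are keyed by (sense, verb, rel)."""
    senses = senses_of(noun)
    for c in senses:
        sense_counts[(c, verb, rel)] += 1.0 / len(senses)

# toy usage: "chicken" is ambiguous between a food sense and a bird sense
sense_counts = defaultdict(float)
senses_of = lambda n: {"chicken": ["chicken(food)", "chicken(bird)"]}.get(n, [n])
add_noun_occurrence(sense_counts, "chicken", "eat", "obj", senses_of)
print(dict(sense_counts))   # each sense receives a count of 0.5
```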
}, { "text": "Finally, an issue that has not been much addressed in the literature (except by Li and Abe [1996] ) is how the accuracy of class-based estimation techniques compare when automatically acquired classes, as opposed to the manually created classes from WordNet, are used. The pseudo-disambiguation task described here has also been used to evaluate clustering algorithms (Pereira, Tishby, and Lee, 1993; Rooth et al., 1999) , but with different data, and so it is difficult to compare the results. A related issue is how the structure of WordNet affects the accuracy of the probability estimates. We have taken the structure of the hierarchy for granted, without any analysis, but it may be that an alternative design could be more conducive to probability estimation.", "cite_spans": [ { "start": 80, "end": 97, "text": "Li and Abe [1996]", "ref_id": "BIBREF13" }, { "start": 368, "end": 400, "text": "(Pereira, Tishby, and Lee, 1993;", "ref_id": "BIBREF20" }, { "start": 401, "end": 420, "text": "Rooth et al., 1999)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." }, { "text": "Angled brackets are used to denote concepts in the hierarchy. 3 The term predicate is used loosely here, in that the predicate does not have to be a semantic object but can simply be a word form. 4 A recent paper that extends the acquisition of selectional preferences to sense-sense relationships isAgirre and Martinez (2001).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Unsmoothed estimates were used in this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Li and Abe (1998) also develop a theoretical framework that applies only to a tree and turn WordNet into a tree by copying each subgraph with multiple parents. 
One way to extend the experiments in Section 7 would be to investigate whether this transformation has an impact on the results of those experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Resnik takes a similar approach but divides the count evenly among the noun's senses and all the hypernyms of those senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "An alternative formula for G 2 is given in Dunning (1993), but the two are equivalent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The definition used here is that given by Ribas (1995). 10 For example, the hypernyms of the concept Dallas are as follows: city , municipality , urban area , geographical area , region , location , object , entity .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We note that this procedure does not guarantee that the correct pair is more likely than the incorrect pair, because of noise in the data from the parser and also because a highly plausible incorrect pair could be generated by chance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The results given for similarity class are different from those given in Clark and Weir (2001) because the probability estimates used in Clark and Weir (2001) were not normalized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\u03c7 2 performed slightly better than G 2 using the smaller data set also.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This article is an extended and updated version of a paper that appeared in the proceedings of NAACL 2001. The work on which it is based was carried out while the first author was a D.Phil. student at the University of Sussex and was supported by an EPSRC studentship. We would like to thank Diana McCarthy for suggesting the pseudo-disambiguation task and providing the MDL software, John Carroll for supplying the data, and Ted Briscoe, Geoff Sampson, Gerald Gazdar, Bill Keller, Ted Pedersen, and the anonymous reviewers for their helpful comments. We would also like to thank Ted Briscoe for presenting an earlier version of this article on our behalf at NAACL 2001.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Results for the pseudo-disambiguation task. Generalization technique % correct av.gen. sd.gen. Using our approach, the disambiguation decision for each (v, n, v′) triple was made according to the following procedure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 5", "sec_num": null }, { "text": "choose (v, n) if the class-based probability estimate, maximized over the senses cn(n) of n, is greater for v than for v′; choose (v′, n) if it is greater for v′; else choose at random. If n has more than one sense, the sense is chosen that maximizes the relevant probability estimate; this explains the maximization over cn(n). The probability estimates were obtained using our class-based method, and the G 2 statistic was used for the chi-square test. This procedure was also used for the MDL alternative, but using the MDL method to estimate the probabilities. Using the association score for each test triple, the decision was made according to the following procedure: choose (v, n) if the association score, maximized over the senses cn(n) and the hypernyms h(c) of each sense c, is greater for v than for v′; choose (v′, n) if it is greater for v′; else choose at random. We use h(c) to denote the set consisting of the hypernyms of c.
The inner maximization is over h(c), assuming c is the chosen sense of n, which corresponds to Resnik's method of choosing a set to represent c. The outer maximization is over the senses of n, cn(n), which determines the sense of n by choosing the sense that maximizes the association score.The first set of results is given in Table 5 . Our technique is referred to as the \"similarity class\" technique, and the approach using the association score is referred", "cite_spans": [], "ref_spans": [ { "start": 922, "end": 929, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Similarity class", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Hiding a semantic hierarchy in a Markov model", "authors": [ { "first": "Steven", "middle": [ "P" ], "last": "Abney", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Light", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the ACL Workshop on Unsupervised Learning in Natural Language Processing", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abney, Steven P. and Marc Light. 1999. Hiding a semantic hierarchy in a Markov model. In Proceedings of the ACL Workshop on Unsupervised Learning in Natural Language Processing, University of Maryland, College Park, pages 1-8.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning class-to-class selectional preferences", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "David", "middle": [], "last": "Martinez", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Fifth ACL Workshop on Computational Language Learning", "volume": "", "issue": "", "pages": "15--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agirre, Eneko and David Martinez. 2001. Learning class-to-class selectional preferences. In Proceedings of the Fifth ACL Workshop on Computational Language Learning, Toulouse, France, pages 15-22.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An Introduction to Categorical Data Analysis", "authors": [ { "first": "Alan", "middle": [], "last": "Agresti", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agresti, Alan. 1996. An Introduction to Categorical Data Analysis. Wiley.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A statistical model for parsing and word-sense disambiguation", "authors": [ { "first": "Daniel", "middle": [ "M" ], "last": "Bikel", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "155--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bikel, Daniel M. 2000. A statistical model for parsing and word-sense disambiguation. 
In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 155-163, Hong Kong.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word association norms, mutual information, and lexicography", "authors": [ { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the Fifth ACL Conference on Applied Natural Language Processing", "volume": "16", "issue": "", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Briscoe, Ted and John Carroll. 1997. Automatic extraction of subcategorization from corpora. In Proceedings of the Fifth ACL Conference on Applied Natural Language Processing, pages 356-363, Washington, DC. Church, Kenneth W. and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Explaining away ambiguity: Learning verb selectional preference with Bayesian networks", "authors": [ { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "187--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ciaramita, Massimiliano and Mark Johnson. 2000. Explaining away ambiguity: Learning verb selectional preference with Bayesian networks. In Proceedings of the 18th International Conference on Computational Linguistics, pages 187-193, Saarbrucken, Germany.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Class-Based Statistical Models for Lexical Knowledge Acquisition", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, Stephen. 2001. Class-Based Statistical Models for Lexical Knowledge Acquisition. Ph.D. dissertation, University of Sussex.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An iterative approach to estimating frequencies over a semantic hierarchy", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "258--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, Stephen and David Weir. 1999. An iterative approach to estimating frequencies over a semantic hierarchy. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 258-265, University of Maryland, College Park.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A class-based probabilistic approach to structural disambiguation", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "194--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, Stephen and David Weir. 2000. 
A class-based probabilistic approach to structural disambiguation. In Proceedings of the 18th International Conference on Computational Linguistics, pages 194-200, Saarbrucken, Germany.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Class-based probability estimation using a semantic hierarchy", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "95--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, Stephen and David Weir. 2001. Class-based probability estimation using a semantic hierarchy. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics, pages 95-102, Pittsburgh.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Multinomial goodness of fit tests", "authors": [ { "first": "Noel", "middle": [ "A C" ], "last": "Cressie", "suffix": "" }, { "first": "R", "middle": [ "C" ], "last": "Timothy", "suffix": "" }, { "first": "", "middle": [], "last": "Read", "suffix": "" } ], "year": 1984, "venue": "Journal of the Royal Statistics Society Series B", "volume": "46", "issue": "", "pages": "440--464", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cressie, Noel A. C. and Timothy R. C. Read. 1984. Multinomial goodness of fit tests. Journal of the Royal Statistics Society Series B, 46:440-464.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Accurate methods for the statistics of surprise and coincidence", "authors": [ { "first": "Ted", "middle": [], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dunning, Ted. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, Christiane, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Clustering words with the MDL principle", "authors": [ { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Abe", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "4--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Hang and Naoki Abe. 1996. Clustering words with the MDL principle. In Proceedings of the 16th International Conference on Computational Linguistics, pages 4-9, Copenhagen, Denmark.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Generalizing case frames using a thesaurus and the MDL principle", "authors": [ { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Abe", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "2", "pages": "217--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Hang and Naoki Abe. 1998. 
Generalizing case frames using a thesaurus and the MDL principle. Computational Linguistics, 24(2):217-244.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Word sense disambiguation for acquisition of selectional preferences", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ACL/EACL Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications", "volume": "", "issue": "", "pages": "52--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCarthy, Diana. 1997. Word sense disambiguation for acquisition of selectional preferences. In Proceedings of the ACL/EACL Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, pages 52-61, Madrid.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Using semantic preferences to identify verbal participation in role switching", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the First Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "256--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCarthy, Diana. 2000. Using semantic preferences to identify verbal participation in role switching. In Proceedings of the First Conference of the North American Chapter of the Association for Computational Linguistics, pages 256-263, Seattle.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "23--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, George A. 1998. Nouns in WordNet. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database. MIT Press, pages 23-46.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Fishing for exactness", "authors": [ { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the South-Central SAS Users Group Conference", "volume": "", "issue": "", "pages": "188--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedersen, Ted. 1996. Fishing for exactness. In Proceedings of the South-Central SAS Users Group Conference, Austin, pages 188-200.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A decision tree of bigrams is an accurate predictor of word sense", "authors": [ { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedersen, Ted. 2001. A decision tree of bigrams is an accurate predictor of word sense. 
In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics, pages 79-86, Pittsburgh.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Distributional clustering of English words", "authors": [ { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Naftali", "middle": [], "last": "Tishby", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "183--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, Fernando, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 183-190, Columbus, OH.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Selection and Information: A Class-Based Approach to Lexical Relationships", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip. 1993. Selection and Information: A Class-Based Approach to Lexical Relationships. Ph.D. dissertation, University of Pennsylvania.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "239--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip. 1998. WordNet and class-based probabilities. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database. MIT Press, pages 239-263.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "On learning more appropriate selectional restrictions", "authors": [ { "first": "Francesc", "middle": [], "last": "Ribas", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Seventh Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "112--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ribas, Francesc. 1995. On learning more appropriate selectional restrictions. In Proceedings of the Seventh Conference of the European Chapter of the Association for Computational Linguistics, pages 112-118, Dublin.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Inducing a semantically annotated lexicon via EM-based clustering", "authors": [ { "first": "Mats", "middle": [], "last": "Rooth", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Detlef", "middle": [], "last": "Prescher", "suffix": "" }, { "first": "Glenn", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Franz", "middle": [], "last": "Beil", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rooth, Mats, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via EM-based clustering. 
In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 104-111, University of Maryland, College Park.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Enriching a lexical semantic net with selectional preferences by means of statistical corpus analysis", "authors": [ { "first": "Andreas", "middle": [], "last": "Wagner", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the ECAI-2000 Workshop on Ontology Learning", "volume": "", "issue": "", "pages": "37--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wagner, Andreas. 2000. Enriching a lexical semantic net with selectional preferences by means of statistical corpus analysis. In Proceedings of the ECAI-2000 Workshop on Ontology Learning, Berlin, pages 37-42.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Figure 1", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "An example generalization: Determining top( soup , stir, obj).", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "Possible cut returned by MDL.", "num": null, "uris": null }, "TABREF4": { "type_str": "table", "text": "Example levels of generalization for different values of \u03b1. coffee BEVERAGE food . . . object entity 0.995 coffee BEVERAGE food . . . object entity ( hotdog , eat, obj) 0 .0005 hotdog sandwich snack food DISH . . . food . . . entity 0.05 hotdog sandwich snack food DISH . . . food . . . entity f (eat, obj) = 1,703 0.5 hotdog sandwich snack food DISH . . . food . . . entity 0.995 hotdog SANDWICH snack food dish . . . food . . . entity ( Socrates , kiss, obj) 0.0005 Socrates . . . person life form CAUSAL AGENT entity Socrates . . . person life form CAUSAL AGENT entity 0.995 Socrates . . . PERSON life form causal agent entity ( dream , remember, obj) 0.0005 dream . . . preoccupation cognitive state STATE 0.05 dream . . . preoccupation cognitive state STATE f (remember, obj) = 1,982 0.5 dream . . . preoccupation COGNITIVE STATE state 0.995 dream . . . PREOCCUPATION cognitive state state ( man , see, obj) 0 .0005 man . . . mammal . . . ANIMAL life form entity 0.05 man . . . MAMMAL . . . animal life form entity f (see, obj) = 16,757 0.5 man . . . MAMMAL . . . animal life form entity 0.995 MAN . . . mammal . . . animal life form entity ( belief , abandon, obj) 0.0005 belief mental object cognition PSYCHOLOGICAL FEATURE nightmare , have, obj) 0.0005 nightmare dreaming IMAGINATION . . . psychological feature 0.05 nightmare dreaming IMAGINATION . . . psychological feature f (have, obj) = 93,683 0.5 nightmare DREAMING imagination . . . psychological feature 0.995 nightmare DREAMING imagination . . . psychological feature Note: The selected level is shown in upper case.", "content": "
(c, v, r), f (v, r)    \u03b1
( coffee , drink, obj)    0.0005    coffee BEVERAGE food . . . object entity
    0.05    coffee BEVERAGE food . . . object entity
f (drink, obj) = 849    0.5
    0.05    Socrates . . . person life form CAUSAL AGENT entity
f (kiss, obj) = 345    0.5
    0.05    belief MENTAL OBJECT cognition psychological feature
f (abandon, obj) = 673    0.5    BELIEF mental object cognition psychological feature
    0.995    BELIEF mental object cognition psychological feature
", "num": null, "html": null }, "TABREF5": { "type_str": "table", "text": "Extent of generalization for different values of \u03b1 and sample sizes.", "content": "
\u03b1    100%    50%    10%    1%
0.0005    3.3    3.9    5.0    5.6
0.05    2.8    3.5    4.6    5.6
0.5    2.1    2.9    4.1    5.4
0.995    1.2    1.5    2.6    3.9
", "num": null, "html": null }, "TABREF6": { "type_str": "table", "text": "Disambiguation results for G 2 and X 2 .", "content": "
\u03b1 value    % correct (G 2 )    % correct (X 2 )
0.0005    73.8 (3.3)    74.1 (3.0)
0.05    73.4 (2.8)    73.8 (2.5)
0.3    73.0 (2.4)    74.1 (2.2)
0.75    73.9 (1.9)    74.3 (1.8)
0.995    73.8 (1.2)    73.3 (1.2)
", "num": null, "html": null } } } }