{ "paper_id": "J98-2002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:53:05.574234Z" }, "title": "Generalizing Case Frames Using a Thesaurus and the MDL Principle", "authors": [ { "first": "Hang", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "NEC Corporation", "location": { "addrLine": "4-1-1 Miyazaki Miyamae-ku, Kawasaki 216", "country": "Japan" } }, "email": "|ihang@ccm.cl.nec.co.jp" }, { "first": "Naoki", "middle": [], "last": "Abe", "suffix": "", "affiliation": { "laboratory": "", "institution": "NEC Corporation", "location": { "addrLine": "4-1-1 Miyazaki Miyamae-ku, Kawasaki 216", "country": "Japan" } }, "email": "abe@ccm.cl.nec.co.jp" }, { "first": "Nec", "middle": [], "last": "Corporation", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A new method for automatically acquiring case frame patterns from large corpora is proposed. In particular, the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words, and a new generalization method based on the Minimum Description Length (MDL) principle is proposed. In order to assist with efficiency, the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as \"cuts\" in the thesaurus tree, thus reducing the generalization problem to that of estimating a \"tree cut model\" of the thesaurus tree. An efficient algorithm is given, which provably obtains the optimal tree cut model for the given frequency data of a case slot, in the sense of MDL. Case frame patterns obtained by the method were used to resolve PP-attachment ambiguity. Experimental results indicate that the proposed method improves upon or is at least comparable with existing methods.", "pdf_parse": { "paper_id": "J98-2002", "_pdf_hash": "", "abstract": [ { "text": "A new method for automatically acquiring case frame patterns from large corpora is proposed. In particular, the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words, and a new generalization method based on the Minimum Description Length (MDL) principle is proposed. In order to assist with efficiency, the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as \"cuts\" in the thesaurus tree, thus reducing the generalization problem to that of estimating a \"tree cut model\" of the thesaurus tree. An efficient algorithm is given, which provably obtains the optimal tree cut model for the given frequency data of a case slot, in the sense of MDL. Case frame patterns obtained by the method were used to resolve PP-attachment ambiguity. Experimental results indicate that the proposed method improves upon or is at least comparable with existing methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We address the problem of automatically acquiring case frame patterns (selectional patterns, subcategorization patterns) from large corpora. A satisfactory solution to this problem would have a great impact on various tasks in natural language processing, including the structural disambiguation problem in parsing. 
The acquired knowledge would also be helpful for building a lexicon, as it would provide lexicographers with word usage descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In our view, the problem of acquiring case frame patterns involves the following two issues: (a) acquiring patterns of individual case frame slots; and (b) learning dependencies that may exist between different slots. In this paper, we confine ourselves to the former issue, and refer the interested reader to Li and Abe (1996) , which deals with the latter issue.", "cite_spans": [ { "start": 310, "end": 327, "text": "Li and Abe (1996)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The case frame (case slot) pattern acquisition process consists of two phases: extraction of case frame instances from corpus data, and generalization of those instances to case frame patterns. The generalization step is needed in order to represent the input case frame instances more compactly as well as to judge the (degree of) acceptability of unseen case frame instances. For the extraction problem, there have been various methods proposed to date, which are quite adequate (Hindle and Rooth 1991; Grishman and Sterling 1992; Manning 1992; Utsuro, Matsumoto, and Nagao 1992; Brent 1993; Smadja 1993; Grefenstette 1994; Briscoe and Carroll 1997) . The generalization problem, in contrast, is a more challenging one and has not been solved completely. A number of methods for generalizing values of a case frame slot for a verb have been proposed. Some of these methods make use of prior knowledge in the form of an existing thesaurus (Resnik 1993a (Resnik , 1993b Framis 1994; Almuallim et al. 1994; Tanaka 1996; Utsuro and Matsumoto 1997) , while others do not rely on any prior knowledge (Pereira, Tishby, and Lee 1993; Grishman and Sterling 1994; Tanaka 1994) . In this paper, we propose a new generalization method, belonging to the first of these two categories, which is both theoretically well-motivated and computationally efficient.", "cite_spans": [ { "start": 481, "end": 504, "text": "(Hindle and Rooth 1991;", "ref_id": "BIBREF26" }, { "start": 505, "end": 532, "text": "Grishman and Sterling 1992;", "ref_id": "BIBREF23" }, { "start": 533, "end": 546, "text": "Manning 1992;", "ref_id": "BIBREF32" }, { "start": 547, "end": 581, "text": "Utsuro, Matsumoto, and Nagao 1992;", "ref_id": "BIBREF58" }, { "start": 582, "end": 593, "text": "Brent 1993;", "ref_id": "BIBREF4" }, { "start": 594, "end": 606, "text": "Smadja 1993;", "ref_id": "BIBREF52" }, { "start": 607, "end": 625, "text": "Grefenstette 1994;", "ref_id": "BIBREF22" }, { "start": 626, "end": 651, "text": "Briscoe and Carroll 1997)", "ref_id": "BIBREF10" }, { "start": 940, "end": 953, "text": "(Resnik 1993a", "ref_id": "BIBREF40" }, { "start": 954, "end": 969, "text": "(Resnik , 1993b", "ref_id": "BIBREF41" }, { "start": 970, "end": 982, "text": "Framis 1994;", "ref_id": "BIBREF21" }, { "start": 983, "end": 1005, "text": "Almuallim et al. 
1994;", "ref_id": "BIBREF1" }, { "start": 1006, "end": 1018, "text": "Tanaka 1996;", "ref_id": "BIBREF56" }, { "start": 1019, "end": 1045, "text": "Utsuro and Matsumoto 1997)", "ref_id": "BIBREF57" }, { "start": 1096, "end": 1127, "text": "(Pereira, Tishby, and Lee 1993;", "ref_id": "BIBREF36" }, { "start": 1128, "end": 1155, "text": "Grishman and Sterling 1994;", "ref_id": "BIBREF24" }, { "start": 1156, "end": 1168, "text": "Tanaka 1994)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Specifically, we formalize the problem of generalizing values of a case frame slot for a given verb as that of estimating a conditional probability distribution over a partition of words, and propose a new generalization method based on the Minimum Description Length principle (MDL): a principle of data compression and statistical estimation from information theory. 1 In order to assist with efficiency, our method makes use of an existing thesaurus and restricts its attention on those partitions that are present as \"cuts\" in the thesaurus tree, thus reducing the generalization problem to that of estimating a \"tree cut model\" of the thesaurus tree. We then give an efficient algorithm that provably obtains the optimal tree cut model for the given frequency data of a case slot, in the sense of MDL. In order to test the effectiveness of our method, we conducted PP-attachment disambiguation experiments using the case frame patterns obtained by our method. Our experimental results indicate that the proposed method improves upon or is at least comparable to existing methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The remainder of this paper is organized as follows: In Section 2, we formalize the problem of generalizing values of a case frame slot as that of estimating a conditional distribution. In Section 3, we describe our MDL-based generalization method. In Section 4, we present our experimental results. We then give some concluding remarks in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Suppose that the data available to us are of the type shown in Table 1 , which are slot values for a given verb (verb,slot_name,slot_value triples) automatically extracted from a corpus using existing techniques. By counting the frequency of occurrence of each noun at a given slot of a verb, the frequency data shown in Figure 1 can be obtained. We will refer to this type of data as co-occurrence data. The problem of generalizing values of a case frame slot for a verb (or, in general, a head) can be viewed as the problem of learning the underlying conditional probability distribution that gives rise to such co-occurrence data. Such a conditional distribution can be represented by a probability model that specifies the conditional probability P (n I v, r) for each n in the set of nouns .M = {nl, n2 ..... nN}, V in the set of verbs V = {vl, v2 ..... Vv}, and r in the set of slot names T~ = {rl, r2 ..... 
rR}, satisfying: P(n Iv, r) = 1.", "cite_spans": [ { "start": 753, "end": 763, "text": "(n I v, r)", "ref_id": null } ], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 321, "end": 329, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Data Sparseness Problem", "sec_num": "2.1" }, { "text": "(1) nGM This type of probability model is often referred to as a word-based model. Since the number of probability parameters in word-based models is large (O(N. V. R)), accurate 1 Recently, MDL and related techniques have become popular in corpus-based natural language processing and other related fields (Ellison 1991 (Ellison , 1992 Cartwright and Brent 1994; Stolcke and Omohundro 1994; Brent, Murthy, and Lundberg 1995; Ristad and Thomas 1995; Brent and Cartwright 1996; Grunwald 1996) . In this paper, we introduce MDL into the context of case frame pattern acquisition. ", "cite_spans": [ { "start": 307, "end": 320, "text": "(Ellison 1991", "ref_id": "BIBREF19" }, { "start": 321, "end": 336, "text": "(Ellison , 1992", "ref_id": "BIBREF20" }, { "start": 337, "end": 363, "text": "Cartwright and Brent 1994;", "ref_id": "BIBREF12" }, { "start": 364, "end": 391, "text": "Stolcke and Omohundro 1994;", "ref_id": "BIBREF54" }, { "start": 392, "end": 425, "text": "Brent, Murthy, and Lundberg 1995;", "ref_id": "BIBREF6" }, { "start": 426, "end": 449, "text": "Ristad and Thomas 1995;", "ref_id": "BIBREF49" }, { "start": 450, "end": 476, "text": "Brent and Cartwright 1996;", "ref_id": "BIBREF5" }, { "start": 477, "end": 491, "text": "Grunwald 1996)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "The Data Sparseness Problem", "sec_num": "2.1" }, { "text": "Frequency data for the subject slot of verb fly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "estimation of a word-based model is difficult with the data size that is available in practice--a problem usually referred to as the data sparseness problem. For example, suppose that we employ the maximum-likelihood estimation (or MLE for short) to estimate the probability parameters of a conditional probability distribution, as described above, given the co-occurrence data in Figure 1 . In this case, MLE amounts to estimating the parameters by simply normalizing the frequencies so that they sum to one, giving, for example, the estimated probabilities of 0, 0.2, and 0.4 for swallow, eagle, and bird, respectively (see Figure 2 ). Since in general the number of parameters exceeds the size of data that is typically available, MLE will result in estimating most of the probability parameters to be zero. To address this problem, Grishman and Sterling (1994) proposed a method of smoothing conditional probabilities using the probability values of similar words, where the similarity between words is judged based on co-occurrence data (see also Dagan, Marcus, and Makovitch [1992] and Dagan, Pereira, and Lee [1994] ). More specifically, conditional probabilities of words are smoothed by taking the weighted average of those of similar words using the similarity measure as the weights. The advantage of this approach is that it does not rely on any prior knowledge, but it appears difficult to find a smoothing method that is both efficient and theoretically sound. 
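To make the maximum-likelihood estimation discussed above concrete, the following minimal sketch (ours, not part of the original study) normalizes the co-occurrence frequencies of the running example (reconstructed from Figure 2 and Table 3) and illustrates the data sparseness problem: any unseen noun, such as swallow here, receives probability zero.

```python
from collections import Counter

def mle_word_model(sample):
    # Word-based MLE: P(n | v, r) = freq(n) / |S| for the observed sample S.
    counts = Counter(sample)
    total = sum(counts.values())
    return {n: c / total for n, c in counts.items()}

# Subject slot of 'fly', frequencies as in the running example (|S| = 10).
sample = ['crow'] * 2 + ['eagle'] * 2 + ['bird'] * 4 + ['bee'] * 2
p = mle_word_model(sample)
print(p.get('eagle', 0.0))    # 0.2
print(p.get('bird', 0.0))     # 0.4
print(p.get('swallow', 0.0))  # 0.0 -- unseen, so MLE gives it zero probability
```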
As an alternative, a number of authors have proposed the use of class-based ", "cite_spans": [ { "start": 836, "end": 864, "text": "Grishman and Sterling (1994)", "ref_id": "BIBREF24" }, { "start": 1052, "end": 1087, "text": "Dagan, Marcus, and Makovitch [1992]", "ref_id": "BIBREF17" }, { "start": 1092, "end": 1122, "text": "Dagan, Pereira, and Lee [1994]", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 381, "end": 389, "text": "Figure 1", "ref_id": null }, { "start": 626, "end": 634, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Word-based distribution estimated using MLE. models, which assign (conditional) probability values to (existing) classes of words, rather than individual words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "An example of the class-based approach is Resnik's method of generalizing values of a case frame slot using a thesaurus and the so-called selectional association measure (Resnik 1993a (Resnik , 1993b . The selectional association, denoted A (C I v, r) , is defined as follows:", "cite_spans": [ { "start": 170, "end": 183, "text": "(Resnik 1993a", "ref_id": "BIBREF40" }, { "start": 184, "end": 199, "text": "(Resnik , 1993b", "ref_id": "BIBREF41" }, { "start": 241, "end": 251, "text": "(C I v, r)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Class-based Models", "sec_num": "2.2" }, { "text": "P(CIv, r) (2) A(C I v, F) = P(C I v, F) x log P(C)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based Models", "sec_num": "2.2" }, { "text": "where C is a class of nouns present in a given thesaurus, v is a verb and r is a slot name, as described earlier. In generalizing a given noun n to a noun class, this method selects the noun class C having the maximum A(C I v, r), among all super classes of n in a given thesaurus. This method is based on an interesting intuition, but its interpretation as a method of estimation is not clear. We propose a class-based generalization method whose performance as a method of estimation is guaranteed to be near optimal. We define the class-based model as a model that consists of a partition of the set .N\" of nouns, and a parameter associated with each member of the partition. Here, a partition F of .M is any collection of mutually disjoint subsets of iV\" that exhaustively cover N. The parameters specify the conditional probability P(C I v, r) for each class (subset) C in that partition, such that P(CIv, r) = 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based Models", "sec_num": "2.2" }, { "text": "(3) CEF Within a given class C, it is assumed that each noun is generated with equal probability, namely 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based Models", "sec_num": "2.2" }, { "text": "Vn E C:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class-based Models", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(n l v, r) = ~ x P(C I v, F).", "eq_num": "(4)" } ], "section": "Class-based Models", "sec_num": "2.2" }, { "text": "Here, we assume that a word belongs to a single class. In practice, however, many words have sense ambiguity and a word can belong to several different classes, e.g., bird is a member of both BIRD and MEAT. 
Thorough treatment of this problem is beyond the scope of the present paper; we simply note that one can employ an existing word-sense disambiguation technique (e.g., Yarowsky 1992 Yarowsky , 1994 in preprocessing, and use the disambiguated word senses as virtual words in the following ANIMAL BIRD INSECT swallow crow eagle bird bug bee insect", "cite_spans": [ { "start": 374, "end": 387, "text": "Yarowsky 1992", "ref_id": "BIBREF65" }, { "start": 388, "end": 403, "text": "Yarowsky , 1994", "ref_id": "BIBREF66" } ], "ref_spans": [], "eq_spans": [], "section": "Class-based Models", "sec_num": "2.2" }, { "text": "An example thesaurus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "case-pattern acquisition process. It is also possible to extend our model so that each word probabilistically belongs to several different classes, which would allow us to resolve both structural and word-sense ambiguities at the time of disambiguation. 2 Employing probabilistic membership, however, would make the estimation process significantly more computationally demanding. We therefore leave this issue as a future topic, and employ a simple heuristic of equally distributing each word occurrence in the data to all of its potential word senses in our experiments. Since our learning method based on MDL is robust against noise, this should not significantly degrade performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "Since the number of partitions for a given set of nouns is extremely large, the problem of selecting the best model from among all possible class-based models is most likely intractable. In this paper, we reduce the number of possible partitions to consider by using a thesaurus as prior knowledge, following a basic idea of Resnik's (1992) .", "cite_spans": [ { "start": 325, "end": 340, "text": "Resnik's (1992)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "The Tree Cut Model", "sec_num": "2.3" }, { "text": "In particular, we restrict our attention to those partitions that exist within the thesaurus in the form of a cut. By thesaurus, we mean a tree in which each leaf node stands for a noun, while each internal node represents a noun class, and domination stands for set inclusion (see Figure 3) . A cut in a tree is any set of nodes in the tree that defines a partition of the leaf nodes, viewing each node as representing the set of all leaf nodes it dominates. For example, in the thesaurus of Figure 3 , there are five cuts: [ANIMAL] , [BIRD, INSECT] , [BIRD, bug, bee, insect] , [swallow, crow, eagle, bird, INSECT] , and [swallow, crow, eagle, bird, bug, bee, insect] . 
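For a tree as small as that of Figure 3, the cuts can be enumerated directly. The sketch below (an illustration of the definition, using a nested-tuple encoding of the thesaurus that we assume for convenience) recovers exactly the five cuts listed above.

```python
def cuts(tree):
    # A tree is either a leaf (a noun, given as a string) or a pair
    # (class_name, list_of_subtrees).  A cut is a list of node labels whose
    # leaf sets partition the leaves of the tree.
    label = tree if isinstance(tree, str) else tree[0]
    result = [[label]]                       # the cut consisting of this node alone
    if not isinstance(tree, str):
        # Combine one cut from each child subtree (Cartesian-product style).
        combined = [[]]
        for child in tree[1]:
            combined = [c + d for c in combined for d in cuts(child)]
        result += combined
    return result

thesaurus = ('ANIMAL', [('BIRD', ['swallow', 'crow', 'eagle', 'bird']),
                        ('INSECT', ['bug', 'bee', 'insect'])])
for cut in cuts(thesaurus):
    print(cut)   # five cuts: ['ANIMAL'], ['BIRD', 'INSECT'], ..., all seven leaves
```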
The class of tree cut models of a fixed thesaurus tree is then obtained by restricting the partition P in the definition of a class-based model to be those partitions that are present as a cut in that thesaurus tree.", "cite_spans": [ { "start": 525, "end": 533, "text": "[ANIMAL]", "ref_id": null }, { "start": 536, "end": 550, "text": "[BIRD, INSECT]", "ref_id": null }, { "start": 553, "end": 577, "text": "[BIRD, bug, bee, insect]", "ref_id": null }, { "start": 580, "end": 616, "text": "[swallow, crow, eagle, bird, INSECT]", "ref_id": null }, { "start": 623, "end": 669, "text": "[swallow, crow, eagle, bird, bug, bee, insect]", "ref_id": null } ], "ref_spans": [ { "start": 282, "end": 291, "text": "Figure 3)", "ref_id": null }, { "start": 493, "end": 501, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "The Tree Cut Model", "sec_num": "2.3" }, { "text": "Formally, a tree cut model M can be represented by a pair consisting of a tree cut lP and a probability parameter vector 0 of the same length, that is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Tree Cut Model", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V = (r, e)", "eq_num": "(5)" } ], "section": "The Tree Cut Model", "sec_num": "2.3" }, { "text": "where lP and 0 are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Tree Cut Model", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r = [C1, C2 ..... Ck+l], e = [P(C1), P(C2) ..... P(Ck+l)]", "eq_num": "(6)" } ], "section": "The Tree Cut Model", "sec_num": "2.3" }, { "text": "k+l where C1, C2 ..... Ck+l is a cut in the thesaurus tree and ~i=1 P(Ci) = 1 is satisfied. For simplicity we sometimes write P(Ci), i = 1 ..... (k + 1) for P (Ci [ v, r) .", "cite_spans": [ { "start": 159, "end": 170, "text": "(Ci [ v, r)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Tree Cut Model", "sec_num": "2.3" }, { "text": "If we use MLE for the parameter estimation, we can obtain five tree cut models from the co-occurrence data in Figure 1 We have thus formalized the problem of generalizing values of a case frame slot as that of estimating a model from the class of tree cut models for some fixed thesaurus tree; namely, selecting a model that best explains the data from among the class of tree cut models.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 118, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Tree Cut Model", "sec_num": "2.3" }, { "text": "The question now becomes what strategy (criterion) we should employ to select the best tree-cut model. We adopt the Minimum Description Length principle (Rissanen 1978, A tree cut model with [BIRD, INSECT] .", "cite_spans": [ { "start": 153, "end": 168, "text": "(Rissanen 1978,", "ref_id": "BIBREF42" }, { "start": 191, "end": 205, "text": "[BIRD, INSECT]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Generalization Method Based On MDL", "sec_num": "3." }, { "text": "\"Prob.\" --", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization Method Based On MDL", "sec_num": "3." }, { "text": "Number of parameters and KL distance from the empirical distribution for the five tree cut models. 
1983, 1984, 1986, 1989) , which has various desirable properties, as will be described later. 3 MDL is a principle of data compression and statistical estimation from information theory, which states that the best probability model for given data is that which requires the least code length in bits for the encoding of the model itself and the given data observed through it. 4 The former is the model description length and the latter the data description length.", "cite_spans": [ { "start": 99, "end": 104, "text": "1983,", "ref_id": null }, { "start": 105, "end": 110, "text": "1984,", "ref_id": null }, { "start": 111, "end": 116, "text": "1986,", "ref_id": null }, { "start": 117, "end": 122, "text": "1989)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "In our current problem, it tends to be the case, in general, that a model nearer the root of the thesaurus tree, such as that in Figure 6 , is simpler (in terms of the number of parameters), but tends to have a poorer fit to the data. In contrast, a model nearer the leaves of the thesaurus tree, such as that in Figure 4 , is more complex, but tends to have a better fit to the data. Table 2 shows the number of free parameters and the KL distance from the empirical distribution of the data (namely, the word-based distribution estimated by MLE) shown in Figure 2 for each of the five tree cut models. 5 In the table, one can see that there is a trade-off between the simplicity of a model and the goodness of fit to the data.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 137, "text": "Figure 6", "ref_id": null }, { "start": 313, "end": 321, "text": "Figure 4", "ref_id": null }, { "start": 385, "end": 392, "text": "Table 2", "ref_id": null }, { "start": 557, "end": 565, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "In the MDL framework, the model description length is an indicator of model 3 Estimation strategies related to MDL have been independently proposed and studied by various authors (Solomonoff 1964; Wallace and Boulton 1968; Schwarz 1978; Wallace and Freeman 1992) . 4 We refer the interested reader to Quinlan and Rivest (1989) for an introduction to the MDL principle. 5 The KL distance (alsO known as KL-divergence or relative entropy), which is widely used in information theory and statistics, is a measure of distance between two distributions (e.g., Cover and Thomas 1991) . It is always normegative and is zero if and only if the two distributions are identical, but is asymmetric and hence not a metric (the usual notion of distance).", "cite_spans": [ { "start": 179, "end": 196, "text": "(Solomonoff 1964;", "ref_id": "BIBREF53" }, { "start": 197, "end": 222, "text": "Wallace and Boulton 1968;", "ref_id": "BIBREF60" }, { "start": 223, "end": 236, "text": "Schwarz 1978;", "ref_id": "BIBREF50" }, { "start": 237, "end": 262, "text": "Wallace and Freeman 1992)", "ref_id": "BIBREF61" }, { "start": 301, "end": 326, "text": "Quinlan and Rivest (1989)", "ref_id": "BIBREF37" }, { "start": 555, "end": 577, "text": "Cover and Thomas 1991)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "complexity, while the data description length indicates goodness of fit to the data. 
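The two columns of Table 2 can be computed from the definitions already given: the number of free parameters of a cut is the number of its classes minus one, and the KL distance of footnote 5 is taken between the empirical word-based distribution of Figure 2 and the word distribution induced by the cut under MLE. A small sketch (ours; base-2 logarithms, and the cut written out with the noun members of each class):

```python
import math

def cut_distribution(cut, freqs):
    # Word distribution induced by a cut under MLE with the uniform-within-class
    # assumption: P(n) = P(C) / |C|, where P(C) = f(C) / |S|.
    total = sum(freqs.values())
    dist = {}
    for cls in cut:
        p_class = sum(freqs.get(n, 0) for n in cls) / total
        for n in cls:
            dist[n] = p_class / len(cls)
    return dist

def kl_bits(p, q):
    # KL distance D(p || q) in bits; terms with p(n) = 0 contribute nothing.
    return sum(p[n] * math.log2(p[n] / q[n]) for n in p if p[n] > 0)

freqs = {'swallow': 0, 'crow': 2, 'eagle': 2, 'bird': 4, 'bug': 0, 'bee': 2, 'insect': 0}
empirical = {n: f / sum(freqs.values()) for n, f in freqs.items()}          # Figure 2
cut = [['swallow', 'crow', 'eagle', 'bird'], ['bug'], ['bee'], ['insect']]  # [BIRD, bug, bee, insect]
print(len(cut) - 1)                                            # 3 free parameters
print(round(kl_bits(empirical, cut_distribution(cut, freqs)), 2))
```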
The MDL principle stipulates that the model that minimizes the sum total of the description lengths should be the best model (both for data compression and statistical estimation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "In the remainder of this section, we will describe how we apply MDL to our current problem. We will then discuss the rationale behind using MDL in our present context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "We first show how the description length for a model is calculated. We use S to denote a sample (or set of data), which is a multiset of examples, each of which is an occurrence of a noun at a given slot r of a given verb v (i.e., duplication is allowed). We let ISI denote the size of S as a multiset, and n E S indicate the inclusion of n in S as a multiset. For example, the column labeled slot_value in Table 1 represents a sample S for the subject slot offly, and in this case ISI = 10.", "cite_spans": [], "ref_spans": [ { "start": 407, "end": 414, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "Given a sample S and a tree cut F, we employ MLE to estimate the parameters of the corresponding tree cut model ~,I = (F, 0), where 6 denotes the estimated parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "The total description length L(/~,I, S) of the tree cut model/vl and the sample S observed through M is computed as the sum of the model description length L(P), parameter description length L(0 I P), and data description length L(S I F, 6):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "L(M,S) = L((F,6),S) = L(r) + L(6 I r) +L(Str,6). (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "Note that we sometimes refer to L(F) + L(0 I F) as the model description length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "The model description length L(F) is a subjective quantity, which depends on the coding scheme employed. Here, we choose to assign the same code length to each cut and let:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "L(F) = log IG[ (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "where ~ denotes the set of all cuts in the thesaurus tree T. 6 This corresponds to assuming that each tree cut model is equally likely a priori, in the Bayesian interpretation of MDL. 
(See Section 3.4.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "The parameter description length L(O I F) is calculated by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "k L(0 I r) = ~ x log IsI (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "where ISI denotes the sample size and k denotes the number of free parameters in the tree cut model, i.e., k equals the number of nodes in P minus one. It is known to be best to use this number of bits to describe probability parameters in order to minimize the expected total description length (Rissanen 1984 (Rissanen , 1986 ). An intuitive explanation of this is that the standard deviation of the maximum-likelihood estimator of each parameter is of the order ~, and hence describing each parameter using more than 1 1 log ISI bits would be wasteful for the estimation accuracy possible with -log x/~ -2 the given sample size.", "cite_spans": [ { "start": 296, "end": 310, "text": "(Rissanen 1984", "ref_id": "BIBREF44" }, { "start": 311, "end": 327, "text": "(Rissanen , 1986", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "Finally, the data description length L(S I F, 0) is calculated by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(S I r, 0) = -~ log P(n)", "eq_num": "(10)" } ], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "nES Table 3 Calculating the description length for the model of Figure 5 .", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 11, "text": "Table 3", "ref_id": null }, { "start": 64, "end": 72, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "C BIRD bug bee insect f(C) 8 0 2 0 ICI 4 1 1 1 P(C) 0.8 0.0 0.2 0.0 P(n) 0.2 0.0 0.2 0.0 P [BIRD, bug, bee, insect] L(0 1 r) (47l) x log 10 = 4.98 L(S I P,~) -(2+4+2+2) x log0.2 = 23.22", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "where for simplicity we write P(n) for PM (n [ v, r) . Recall that P(n) is obtained by MLE, namely, by normalizing the frequencies:", "cite_spans": [ { "start": 42, "end": 52, "text": "(n [ v, r)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 P(n) = ~ x P(C)", "eq_num": "(11)" } ], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "for each C c P and each n E C, where for each C c P:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "= d(C) (12) ISI", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "wheref(C) denotes the total frequency of nouns in class C in the sample S, and F is a tree cut. 
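The calculation of Table 3 follows directly from equations (9)-(12). The sketch below (ours; base-2 logarithms, class membership as in Figure 3) reproduces the parameter and data description lengths for the cut [BIRD, bug, bee, insect].

```python
import math

def description_lengths(cut, freqs):
    # Return (L(theta | Gamma), L(S | Gamma, theta)) in bits for a tree cut,
    # with parameters estimated by MLE.
    #   cut   -- list of classes, each class a list of nouns
    #   freqs -- observed frequency f(n) of each noun in the sample S
    size = sum(freqs.values())                    # |S|
    k = len(cut) - 1                              # number of free parameters
    param_dl = (k / 2) * math.log2(size)          # equation (9)
    data_dl = 0.0
    for cls in cut:
        p_class = sum(freqs.get(n, 0) for n in cls) / size    # P(C) = f(C) / |S|
        for n in cls:
            if freqs.get(n, 0) > 0:
                data_dl -= freqs[n] * math.log2(p_class / len(cls))   # equation (10)
    return param_dl, data_dl

freqs = {'swallow': 0, 'crow': 2, 'eagle': 2, 'bird': 4, 'bug': 0, 'bee': 2, 'insect': 0}
cut = [['swallow', 'crow', 'eagle', 'bird'], ['bug'], ['bee'], ['insect']]
param_dl, data_dl = description_lengths(cut, freqs)
print(round(param_dl, 2), round(data_dl, 2))   # 4.98 23.22, as in Table 3
```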
We note that, in fact, the maximum-likelihood estimate is one that minimizes the data description length L(S I F, 0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "With description length defined in the above manner, we wish to select a model with the minimum description length and output it as the result of generalization. Since we assume here that every tree cut has an equal L(P), technically we need only calculate and compare L'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "(/[d, S) = L(~ I F) + L(S t F, ~)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "as the description length. For simplicity, we will sometimes write just L'(F) for L'(7[/I, S), where I ~ is the tree cut of M, when ~,I and S are clear from context. The description lengths for the data in Figure 1 using various tree cut models of the thesaurus tree in Figure 3 are shown in Table 4 . (Table 3 shows how the description length is calculated for the model of tree cut ]BIRD, bug, bee, insect].) These figures indicate that the model in Figure 6 is the best model, according to MDL. Thus, given the data in Table 1 as input, the generalization result shown in Table 5 is obtained.", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 214, "text": "Figure 1", "ref_id": null }, { "start": 270, "end": 278, "text": "Figure 3", "ref_id": null }, { "start": 292, "end": 299, "text": "Table 4", "ref_id": "TABREF2" }, { "start": 302, "end": 310, "text": "(Table 3", "ref_id": null }, { "start": 452, "end": 460, "text": "Figure 6", "ref_id": null }, { "start": 522, "end": 529, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 575, "end": 582, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Calculating Description Length", "sec_num": "3.1" }, { "text": "In generalizing values of a case flame slot using MDL, we could, in principle, calculate the description length of every possible tree cut model and output a model with the minimum description length as the generalization result, if computation time were of no concern. But since the number of cuts in a thesaurus tree is exponential in the size of the tree (for example, it is easy to verify that for a complete b-ary tree of depth d it is of the order o(2ba-1)), it is impractical to do so. Nonetheless, we were able to devise a 5. else 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Efficient Algorithm", "sec_num": "3.2" }, { "text": "For each child tree ti of t ci :=Find-MDL(ti) 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Efficient Algorithm", "sec_num": "3.2" }, { "text": "c:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Efficient Algorithm", "sec_num": "3.2" }, { "text": "= append(ci) 8. if 9. L'([root(t)]) < L'(c) 10. then 11. return([root(t)])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Efficient Algorithm", "sec_num": "3.2" }, { "text": "12. else 13.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Efficient Algorithm", "sec_num": "3.2" }, { "text": "return(c)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Efficient Algorithm", "sec_num": "3.2" }, { "text": "The algorithm: Find-MDL. 
simple and efficient algorithm based on dynamic programming, which is guaranteed to find a model with the minimum description length. Our algorithm, which we call Find-MDL, recursively finds the optimal MDL model for each child subtree of a given tree and appends all the optimal models of these subtrees and returns the appended models, unless collapsing all the lowerqevel optimal models into a model consisting of a single node (the root node of the given tree) reduces the total description length, in which case it does so. The details of the algorithm are given in Figure 7 . Note that for simplicity we describe Find-MDL as outputting a tree cut, rather than a complete tree cut model. Note in the above algorithm that the parameter description length is calculated as ", "cite_spans": [], "ref_spans": [ { "start": 596, "end": 604, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "f(swallow)=4,f(crow)=4,f(eagle)=4,f(bird)=6,f(bee)=8,f(car)=l ,f(jet)=4,f(airplane)=4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "An example application of Find-MDL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "log ISI, where k + 1 is the number of nodes in the current cut, both when t is the 2 entire tree and when it is a proper subtree. This contrasts with the fact that the number of free parameters is k for the former, while it is k + 1 for the latter. For the purpose of finding a tree cut with the minimum description length, however, this distinction can be ignored (see Appendix A). Concerning the above algorithm, we show that the following proposition holds:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Proposition 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "The algorithm Find-MDL terminates in time O(N x ISI), where N denotes the number of leaf nodes in the input thesaurus tree T and ISI denotes the input sample size, and outputs a tree cut model of T with the minimum description length (with respect to the encoding scheme described in Section 3.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Here we will give an intuitive explanation of why the proposition holds, and give the formal proof in Appendix A. The MLE of each node (class) is obtained simply by dividing the frequency of nouns within that class by the total sample size. Thus, the parameter estimation for each subtree can be done independently from the estimation of the parameters outside the subtree. The data description length for a subtree thus depends solely on the tree cut within that subtree, and its calculation can be performed independently for each subtree. As for the parameter description length for a subtree, it depends only on the number of classes in the tree cut within that subtree, and hence can be computed independently as well. The formal proof proceeds by mathematical induction, which verifies that the optimal model in any (sub)tree is either the model consisting of the root of the tree or the model obtained by appending the optimal submodels for its child subtrees. 
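The recursion can be sketched as follows. This is our own illustrative re-implementation of Find-MDL, not the authors' code: it uses the nested-tuple tree encoding from earlier, base-2 logarithms, and computes the parameter term of L' as ((number of nodes in the cut) / 2) x log |S|, as noted above.

```python
import math

def find_mdl(tree, freqs, size):
    # Return (cut, L'(cut)) with minimum description length for `tree`.
    #   tree  -- a leaf (noun string) or a pair (class_name, [subtrees])
    #   freqs -- observed frequency of each noun in the sample S
    #   size  -- |S|, the total sample size
    def leaves(t):
        return [t] if isinstance(t, str) else [n for c in t[1] for n in leaves(c)]

    def local_dl(cut):
        # L' restricted to this subtree: parameter part + data part.
        dl = (len(cut) / 2) * math.log2(size)
        for node in cut:
            members = leaves(node)
            f = sum(freqs.get(n, 0) for n in members)
            if f > 0:
                dl -= f * math.log2((f / size) / len(members))   # P(n) = P(C)/|C|
        return dl

    if isinstance(tree, str):
        return [tree], local_dl([tree])
    # Dynamic programming: solve each child subtree optimally, then compare the
    # appended child cuts against collapsing everything into the root of `tree`.
    child_results = [find_mdl(child, freqs, size) for child in tree[1]]
    appended_cut = [node for cut_i, _ in child_results for node in cut_i]
    appended_dl = sum(dl_i for _, dl_i in child_results)
    root_dl = local_dl([tree])
    if root_dl <= appended_dl:
        return [tree], root_dl
    return appended_cut, appended_dl

thesaurus = ('ANIMAL', [('BIRD', ['swallow', 'crow', 'eagle', 'bird']),
                        ('INSECT', ['bug', 'bee', 'insect'])])
freqs = {'crow': 2, 'eagle': 2, 'bird': 4, 'bee': 2}
cut, dl = find_mdl(thesaurus, freqs, sum(freqs.values()))
print([n if isinstance(n, str) else n[0] for n in cut])   # ['BIRD', 'INSECT']
```

On the running example this sketch selects the cut [BIRD, INSECT], in agreement with the model of Figure 6 chosen by MDL.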
7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "When a discrete model (a partition F of the set of nouns W\" in our present context) is fixed, and the estimation problem involves only the estimation of probability parameters, the classic maximum-likelihood estimation (MLE) is known to be satisfactory. In particular, the estimation of a word-based model is one such problem, since the partition is fixed and the size of the partition equals [.M[. Furthermore, for a fixed discrete model, it is known that MLE coincides with MDL: Given data S = {xi : i = 1 ..... m}, MLE estimates parameter P, which maximizes the likelihood with respect to the data; that is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "m = arg mpax H P(xi). (13) i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "It is easy to see that P also satisfies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m = arg nun ~ -log P(xi).", "eq_num": "(14)" } ], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "i=1 This is nothing but the MDL estimate in this case, since ~i~1 -log P(xi) is the data description length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "When the estimation problem involves model selection, i.e., the choice of a tree cut in the present context, MDUs behavior significantly deviates from that of MLE. This is because MDL insists on minimizing the sum total of the data description length and the model description length, while MLE is still equivalent to minimizing the data description length only. So, for our problem of estimating a tree cut model, MDL tends to select a model that is reasonably simple yet fits the data quite well, whereas the model selected by MLE will be a word-based model (or a tree cut model equivalent to the word-based modelS), as it will always manage to fit the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "In statistical terms, the superiority of MDL as an estimation method is related to the fact we noted earlier that even though MLE can provide the best fit to the given data, the estimation accuracy of the parameters is poor, when applied on a sample of modest size, as there are too many parameters to estimate. MLE is likely to estimate most parameters to be zero, and thus suffers from the data sparseness problem. Note in Table 4 , that MDL avoids this problem by taking into account the model complexity as well as the fit to the data.", "cite_spans": [], "ref_spans": [ { "start": 425, "end": 432, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "MDL stipulates that the model with the minimum description length should be selected both for data compression and estimation. 
This intimate connection between estimation and data compression can also be thought of as that between estimation and generalization, since in order to compress information, generalization is necessary. In our current problem, this corresponds to the generalization of individual nouns present in case frame instances in the data as classes of nouns present in a given thesaurus. For example, given the thesaurus in Figure 3 and frequency data in Figure 1 , we would like our system to judge that the class BIRD and the noun bee can be the subject slot of the verb fly. The problem of deciding whether to stop generalizing at BIRD and bee, or generalizing further to ANIMAL has been addressed by a number of authors (Webster and Marcus 1989; Velardi, Pazienza, and Fasolo 1991; Nomiyama 1992) . Minimization of the total description length provides a disciplined criterion to do this.", "cite_spans": [ { "start": 844, "end": 869, "text": "(Webster and Marcus 1989;", "ref_id": "BIBREF62" }, { "start": 870, "end": 905, "text": "Velardi, Pazienza, and Fasolo 1991;", "ref_id": "BIBREF59" }, { "start": 906, "end": 920, "text": "Nomiyama 1992)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 544, "end": 552, "text": "Figure 3", "ref_id": null }, { "start": 575, "end": 583, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "A remarkable fact about MDL is that theoretical findings have indeed verified that MDL, as an estimation strategy, is near optimal in terms of the rate of convergence of its estimated models to the true model as data size increases. When the true model is included in the class of models considered, the models selected by MDL converge to the true model at the rate of O/~C:~9~_i~!~ where k* is the number of parameters in 2.1Sl J'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "the true model, and [S] the data size, which is near optimal (Barron and Cover 1991; Yamanishi 1992) .", "cite_spans": [ { "start": 61, "end": 84, "text": "(Barron and Cover 1991;", "ref_id": "BIBREF3" }, { "start": 85, "end": 100, "text": "Yamanishi 1992)", "ref_id": "BIBREF64" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "Thus, in the current problem, MDL provides (a) a way of smoothing probability parameters to solve the data sparseness problem, and at the same time, (b) a way of generalizing nouns in the data to noun classes of an appropriate level, both as a corollary to the near optimal estimation of the distribution of the given data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation, Generalization, and MDL", "sec_num": "3.3" }, { "text": "There is a Bayesian interpretation of MDL: MDL is essentially equivalent to the \"posterior mode\" in the Bayesian terminology (Rissanen 1989) . Given data S and a number of models, the Bayesian estimator (posterior mode) selects a model M that maximizes the posterior probability: becomes equivalent to (13), giving the maximum-likelihood estimate.) Recall, that in our definition of parameter description length, we assign a shorter parameter description length to a model with a smaller number of parameters k, which admits the above interpretation. As for the model description length (for tree cuts) we assigned an equal code length to each tree cut, which translates to placing no bias on any cut. 
We could have employed a different coding scheme assigning shorter code lengths to cuts nearer the root. We chose not to do so partly because, for sufficiently large sample sizes, the parameter description length starts dominating the model description length anyway.", "cite_spans": [ { "start": 125, "end": 140, "text": "(Rissanen 1989)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "The Bayesian Interpretation of MDL and the Choice of Encoding Scheme", "sec_num": "3.4" }, { "text": "Another important property of the definition of description length is that it affects not only the effective prior probabilities on the models, but also the procedure for computing the model minimizing the measure. Indeed, our definition of model description length was chosen to be compatible with the dynamic programming technique, namely, its calculation is performable locally for each subtree. For a different choice of coding scheme, it is possible that a simple and efficient MDL algorithm like Find-MDL may not exist. We believe that our choice of model description length is derived from a natural encoding scheme with reasonable interpretation as Bayesian prior, and at the same time allows an efficient algorithm for finding a model with the minimum description length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bayesian Interpretation of MDL and the Choice of Encoding Scheme", "sec_num": "3.4" }, { "text": "The uniform distribution assumption made in (4), namely that all nouns belonging to a class contained in the tree cut model are assigned the same probability, seems to be rather stringent. If one were to insist that the model be exactly accurate, then it would seem that the true model would be the word-based model resulting from no generalization at all. If we allow approximations, however, it is likely that some reasonable tree cut model with the uniform probability assumption will be a good approximation of the true distribution; in fact, a best model for a given data size. As we remarked earlier, as MDL balances between the fit to the data and the simplicity of the model, one can expect that the model selected by MDL will be a reasonable compromise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Uniform Distribution Assumption and the Level of Generalization", "sec_num": "3.5" }, { "text": "Nonetheless, it is still a shortcoming of our model that it contains an oversimplified assumption, and the problem is especially pressing when rare words are involved. Rare words may not be observed at a slot of interest in the data simply because they are rare, and not because they are unfit for that particular slot. 9 To see how rare is too rare for our method, consider the following example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Uniform Distribution Assumption and the Level of Generalization", "sec_num": "3.5" }, { "text": "Suppose that the class BIRD contains 10 words, bird, swallow, crow, eagle, parrot, waxwing, etc. Consider co-occurrence data having 8 occurrences of bird, 2 occurrences of swallow, 1 occurrence of crow, 1 occurrence of eagle, and 0 occurrence of all other words, as part of, say, 100 data obtained for the subject slot of verb fly. For this data set, our method would select the model that generalizes bird, swallow, etc. 
to the class BIRD, since the sum of the data and parameter description lengths for the BIRD subtree is 76.57 + 3.32 = 79.89 if generalized, and 53.73 + 33.22 = 86.95 if not generalized. For comparison, consider the data with 10 occurrences of bird, 3 occurrences of swallow and 1 occurrence of crow, and 0 occurrence of all other words, also as part of 100 data for the subject slot of fly. In this case, our method would select the model that stops generalizing at bird, swallow, eagle, etc., because the description length for the same subtree now is 86.22 + 3.32 = 89.54 if generalized, and 55.04 + 33.22 = 88.26 if not generalized. These examples seem to indicate that our MDL-based method would choose to generalize, even when there are relatively large differences in frequencies of words within a class, but knows enough to stop generalizing when the discrepancy in frequencies is especially noticeable (relative to the given sample size).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Uniform Distribution Assumption and the Level of Generalization", "sec_num": "3.5" }, { "text": "We applied our generalization method to large corpora and inspected the obtained tree cut models to see if they agreed with human intuition. In our experiments, we extracted verbs and their case frame slots (verb, slot_name, slot_value triples) from the tagged texts of the Wall Street Journal corpus (ACL/DCI CD-ROM1) consisting of 126,084 sentences, using existing techniques (specifically, those in Smadja [1993] ), then 9 There are several possible measures that one could take to address this issue, including the incorporation of absolute frequencies of the words (inside and outside the particular slot in question). This is outside the scope of the present paper, and we simply refer the interested reader to one possible approach (Abe and Li 1996) . Example input data (for the direct object slot of eat). 3 eat arg2 lobster 1 eat arg2 seed 1 eat arg2 heart 2 eat arg2 liver 1 eat arg2 plant 1 eat arg2 sandwich 2 eat arg2 crab 1 eat arg2 elephant 1 eat arg2 meal 2 eat arg2 rope 1 eat arg2 applied our method to generalize the slot_values. Table 6 shows some example triple data for the direct object slot of the verb eat. There were some extraction errors present in the data, but we chose not to remove them, because in general there will always be extraction errors and realistic evaluation should leave them in.", "cite_spans": [ { "start": 402, "end": 415, "text": "Smadja [1993]", "ref_id": "BIBREF52" }, { "start": 739, "end": 756, "text": "(Abe and Li 1996)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 815, "end": 1020, "text": "3 eat arg2 lobster 1 eat arg2 seed 1 eat arg2 heart 2 eat arg2 liver 1 eat arg2 plant 1 eat arg2 sandwich 2 eat arg2 crab 1 eat arg2 elephant 1 eat arg2 meal 2 eat arg2 rope 1 eat arg2", "ref_id": "TABREF0" }, { "start": 1071, "end": 1078, "text": "Table 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experiment 1: A Qualitative Evaluation", "sec_num": "4.1" }, { "text": "When generalizing, we used the noun taxonomy of WordNet (version 1.4) (Miller 1995) as our thesaurus. The noun taxonomy of WordNet has a structure of directed acyclic graph (DAG), and its nodes stand for a word sense (a concept) and often contain several words having the same word sense. 
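The two comparisons above can be checked with a few lines of code (a sketch under the stated assumptions: base-2 logarithms, a class of 10 words, 100 data in total, and a per-subtree parameter description length of (number of nodes / 2) x log |S|).

```python
import math

def subtree_dls(counts, class_size, sample_size):
    # Description lengths (data, parameter) in bits for one subtree,
    # with and without generalizing its members to a single class.
    f_class = sum(counts.values())
    # Generalized to one class node: every member gets P(n) = P(C) / |C|.
    gen_data = -f_class * math.log2((f_class / sample_size) / class_size)
    gen_param = (1 / 2) * math.log2(sample_size)
    # Not generalized: one node per member word, MLE per word.
    raw_data = -sum(f * math.log2(f / sample_size) for f in counts.values() if f > 0)
    raw_param = (class_size / 2) * math.log2(sample_size)
    return (gen_data, gen_param), (raw_data, raw_param)

# First example: generalizing to BIRD wins (76.57 + 3.32 < 53.73 + 33.22).
print(subtree_dls({'bird': 8, 'swallow': 2, 'crow': 1, 'eagle': 1}, 10, 100))
# Second example: stopping below BIRD wins (86.22 + 3.32 > 55.04 + 33.22).
print(subtree_dls({'bird': 10, 'swallow': 3, 'crow': 1}, 10, 100))
```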
WordNet thus deviates from our notion of thesaurus--a tree in which each leaf node stands for a noun, each internal node stands for the class of nouns below it, and a noun is uniquely represented by a leaf node--so we took a few measures to deal with this.", "cite_spans": [ { "start": 70, "end": 83, "text": "(Miller 1995)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "eat arg2 food", "sec_num": null }, { "text": "First, we modified our algorithm FInd-MDL so that it can be applied to a DAG; now, Find-MDL effectively copies each subgraph having multiple parents (and its associated data) so that the DAG is transformed to a tree structure. Note that with this modification it is no longer guaranteed that the output model is optimal. Next, we dealt heuristically with the issue of word-sense ambiguity by equally dividing the observed frequency of a noun between all the nodes containing that noun. Finally, when an internal node contained nouns actually occurring in the data, we assigned the .frequencies of all the nodes below it to that internal node, and excised the whole subtree (subgraph) below it. The last of these measures, in effect, defines the \"starting cut\" of the thesaurus from which to begin generalizing. Since (word senses of) nouns that occur in natural language tend to concentrate in the middle of a taxonomy, the starting cut given by this method usually falls around the middle of the thesaurus. 1\u00b0 Figure 9 shows the starting cut and the resulting cut in WordNet for the direct object slot of eat with respect to the data in Table 6 , where /.../ denotes a node in WordNet. The starting cut consists of nodes/plant.../,/food/,etc, which are the highest nodes containing values of the direct object slot of eat. Since/food/has significantly higher frequencies than its neighbors/solid/and/fluid/, the generalization stops there according to MDL. In contrast, the nodes under/life_form.../have relatively small differences in their frequencies, and thus they are generalized to the node/life_form.../. The same is true of the nodes under /artifact/. Since /..-amount.../ has a much 10 Cognitive scientists have observed that concepts in the middle of a taxonomy tend to be more important with respect to learning, recognition, and memory, and their linguistic expressions occur more frequently in natural language--a phenomenon known as basic level primacy. See Lakoff (1987) . ;, k :mushroom> ", "cite_spans": [ { "start": 1973, "end": 1986, "text": "Lakoff (1987)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 1011, "end": 1019, "text": "Figure 9", "ref_id": null }, { "start": 1138, "end": 1145, "text": "Table 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "eat arg2 food", "sec_num": null }, { "text": "An example generalization, result (for the direct object slot of eat).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "higher frequency than its neighbors /time/ and {space), the generalization does not go up higher. All of these results seem to agree with human intuition, indicating that our method results in an appropriate level of generalization. Table 7 shows generalization results for the direct object slot of eat and some other arbitrarily selected verbs, where classes are sorted in descending order of their probability values. (Classes with probabilities less than 0.05 are discarded due to space limitations.) 
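The frequency-splitting heuristic for word-sense ambiguity mentioned above can be sketched as follows (the miniature sense inventory is hypothetical; in the experiments the senses come from the WordNet noun taxonomy).

```python
from collections import defaultdict

def split_frequencies(noun_counts, senses):
    # Equally divide each noun's observed frequency among all thesaurus nodes
    # (word senses) that contain it.
    #   noun_counts -- observed frequency of each noun at the slot
    #   senses      -- mapping from a noun to the nodes containing it
    node_freq = defaultdict(float)
    for noun, count in noun_counts.items():
        for node in senses.get(noun, []):
            node_freq[node] += count / len(senses[noun])
    return dict(node_freq)

# Hypothetical inventory: 'bird' has both a BIRD sense and a MEAT sense.
senses = {'bird': ['BIRD', 'MEAT'], 'crow': ['BIRD'], 'bee': ['INSECT']}
print(split_frequencies({'bird': 4, 'crow': 2, 'bee': 2}, senses))
# {'BIRD': 4.0, 'MEAT': 2.0, 'INSECT': 2.0}
```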
Table 8 shows the computation time required (on a SPARC \"Ultra 1\" work station) to obtain the results shown in Table 7 . (The computation time for loading the WordNet was excluded since it need be done only once.) Even though the noun taxonomy of WordNet is a large thesaurus containing approximately 50,000 nodes, our method still manages to efficiently generalize case slots using it. The table also shows the average number of levels generalized for each slot, namely, the average number of links between a node in the starting cut and its ancestor node in the resulting cut. (For example, the number of levels generalized for/plant..-/ is one in Figure 9 .) One can see that a significant amount of generalization is performed by our method--the resulting tree cut is about 5 levels higher than the starting cut, on the average.", "cite_spans": [], "ref_spans": [ { "start": 233, "end": 240, "text": "Table 7", "ref_id": "TABREF7" }, { "start": 505, "end": 512, "text": "Table 8", "ref_id": "TABREF8" }, { "start": 616, "end": 623, "text": "Table 7", "ref_id": "TABREF7" }, { "start": 1155, "end": 1163, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "Case frame patterns obtained by our method can be used in various tasks in natural language processing. In this paper, we test its effectiveness in a structural (PPattachment) disambiguation experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "Disambiguation Methods. It has been empirically verified that the use of lexical semantic knowledge is effective in structural disambiguation, such as the PP-attachment problem (Hobbs and Bear 1990; Whittemore, Ferrara, and Brunner 1990) . There have been many probabilistic methods proposed in the literature to address the PP-attachment problem using lexical semantic knowledge which, in our view, can be classified into three types. The first approach Rooth 1991, 1993) takes doubles of the form (verb, prep) and (nounl, prep) , like those in Table 9 , as training data to acquire semantic knowledge and judges the attachment sites of the prepositional phrases in quadruples of the form (verb, nounl, prep, noun2) e.g., (see, girl, with, telescope)--based on the acquired knowledge. Hindle and Rooth (1991) proposed the use of the lexical association measure calculated based on such doubles. More specifically, they estimate P (prep I verb) and P(prep [ noun1) , and calculate the so-called t-score, which is a measure of the statistical significance of the difference between P(prep I verb) and P(prep [ nounl) . 
If the t-score indicates that the former probability is significantly larger, Example input data as doubles.", "cite_spans": [ { "start": 177, "end": 198, "text": "(Hobbs and Bear 1990;", "ref_id": "BIBREF28" }, { "start": 199, "end": 237, "text": "Whittemore, Ferrara, and Brunner 1990)", "ref_id": "BIBREF63" }, { "start": 455, "end": 472, "text": "Rooth 1991, 1993)", "ref_id": null }, { "start": 499, "end": 511, "text": "(verb, prep)", "ref_id": null }, { "start": 516, "end": 529, "text": "(nounl, prep)", "ref_id": null }, { "start": 690, "end": 716, "text": "(verb, nounl, prep, noun2)", "ref_id": null }, { "start": 786, "end": 809, "text": "Hindle and Rooth (1991)", "ref_id": "BIBREF26" }, { "start": 931, "end": 944, "text": "(prep I verb)", "ref_id": null }, { "start": 949, "end": 964, "text": "P(prep [ noun1)", "ref_id": null }, { "start": 1100, "end": 1115, "text": "P(prep [ nounl)", "ref_id": null } ], "ref_spans": [ { "start": 546, "end": 553, "text": "Table 9", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "see in see with girl with man with Table 10 Example input data as triples. see in park see with telescope girl with scarf see with friend man with hat Table 11 Example input data as quadruples and labels.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 43, "text": "Table 10", "ref_id": "TABREF0" }, { "start": 151, "end": 159, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "see girl in park ADV see man with telescope ADV see girl with scarf ADN then the prepositional phrase is attached to verb, if the latter probability is significantly larger, it is attached to nounl, and otherwise no decision is made.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "The second approach (Sekine et al. 1992; Chang, Luo, and Su 1992; Resnik 1993a; Grishman and Sterling 1994; Alshawi and Carter 1994) takes triples (verb, prep, noun2) and (nounl, prep, noun2) , like those in Table 10 , as training data for acquiring semantic knowledge and performs PP-attachment disambiguation on quadruples. For example, Resnik (1993a) proposes the use of the selectional association measure calculated based on such triples, as described in Section 2. More specifically, his method compares maxclassi~noun2 A (Classi [ verb, prep) and maxclassi~no,m2 A (Classi I nounl,prep) to make disambiguation decisions. The third approach (Brill and Resnik 1994; Ratnaparkhi, Reynar, and Roukos 1994; Collins and Brooks 1995) receives quadruples (verb, noun1, prep, noun2) and labels indicating which way the PP-attachment goes, like those in Table 11 , and learns a disambiguation rule for resolving PP-attachment ambiguities. For example, Brill and Resnik, (1994) propose a method they call transformation-based error-driven learning (see also Brill [1995] ). Their method first learns IF-THEN type rules, where the IF parts represent conditions like (prep is with) and (verb is see), and the THEN parts represent transformations from (attach to verb) to (attach to nounl), or vice versa. The first rule is always a default decision, and all the other rules indicate transformations (changes of attachment sites) subject to various IF conditions. 
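Returning to the first (doubles-based) approach, the lexical association decision can be sketched as follows. This is only an illustration in the spirit of Hindle and Rooth (1991): the counts, the particular variance approximation inside the t statistic, and the function names are our own assumptions, not their exact procedure.

from math import sqrt

def t_score(k_v, n_v, k_n, n_n):
    # Compare p1 = P(prep | verb) and p2 = P(prep | noun1), each estimated from
    # co-occurrence counts (k co-occurrences out of n occurrences).
    p1, p2 = k_v / n_v, k_n / n_n
    var = p1 * (1 - p1) / n_v + p2 * (1 - p2) / n_n
    return (p1 - p2) / sqrt(var) if var > 0 else 0.0

def attach_by_la(k_v, n_v, k_n, n_n, threshold=1.28):  # 1.28 ~ 90% significance
    t = t_score(k_v, n_v, k_n, n_n)
    if t >= threshold:
        return "verb"    # P(prep | verb) significantly larger
    if t <= -threshold:
        return "noun1"   # P(prep | noun1) significantly larger
    return None          # otherwise no decision is made

# e.g. attach_by_la(k_v=30, n_v=100, k_n=10, n_n=100) returns "verb"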
We note that, for the disambiguation problem, the first two approaches are basically unsupervised learning methods, in the sense that the training data are merely positive examples for both types of attachments, which could in principle be extracted from pure corpus data with no human intervention. (For example, one could just use unambiguous sentences.) The third approach, on the other hand, is a supervised learning method, which requires labeled data prepared by a human being. The generalization method we propose falls into the second category, although it can also be used as a component in a combined scheme with many of the above methods (see Brill and Resnik [1994] , Alshawi and Carter [1994] ). We estimate P(noun2 I verb, prep) and P(noun2 I nount, prep) from training data consisting of triples, and compare them: If the former exceeds the latter (by a certain margin) we attach it to verb, else if the latter exceeds the former (by the same margin) we attach it to noun1.", "cite_spans": [ { "start": 20, "end": 40, "text": "(Sekine et al. 1992;", "ref_id": "BIBREF51" }, { "start": 41, "end": 65, "text": "Chang, Luo, and Su 1992;", "ref_id": "BIBREF13" }, { "start": 66, "end": 79, "text": "Resnik 1993a;", "ref_id": "BIBREF40" }, { "start": 80, "end": 107, "text": "Grishman and Sterling 1994;", "ref_id": "BIBREF24" }, { "start": 108, "end": 132, "text": "Alshawi and Carter 1994)", "ref_id": "BIBREF2" }, { "start": 147, "end": 166, "text": "(verb, prep, noun2)", "ref_id": null }, { "start": 171, "end": 191, "text": "(nounl, prep, noun2)", "ref_id": null }, { "start": 339, "end": 353, "text": "Resnik (1993a)", "ref_id": "BIBREF40" }, { "start": 528, "end": 549, "text": "(Classi [ verb, prep)", "ref_id": null }, { "start": 572, "end": 593, "text": "(Classi I nounl,prep)", "ref_id": null }, { "start": 647, "end": 670, "text": "(Brill and Resnik 1994;", "ref_id": "BIBREF8" }, { "start": 671, "end": 708, "text": "Ratnaparkhi, Reynar, and Roukos 1994;", "ref_id": "BIBREF38" }, { "start": 709, "end": 733, "text": "Collins and Brooks 1995)", "ref_id": "BIBREF14" }, { "start": 754, "end": 780, "text": "(verb, noun1, prep, noun2)", "ref_id": null }, { "start": 949, "end": 973, "text": "Brill and Resnik, (1994)", "ref_id": "BIBREF8" }, { "start": 1054, "end": 1066, "text": "Brill [1995]", "ref_id": "BIBREF7" }, { "start": 2111, "end": 2134, "text": "Brill and Resnik [1994]", "ref_id": "BIBREF8" }, { "start": 2137, "end": 2162, "text": "Alshawi and Carter [1994]", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 208, "end": 216, "text": "Table 10", "ref_id": "TABREF0" }, { "start": 851, "end": 859, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "In our experiments, described below, we compare the performance of our proposed method, which we refer to as MDL, against the methods proposed by Hindle and Rooth (1991) , Resnik (1993b) , and Brill and Resnik (1994) , referred to respectively as LA, SA, and TEL.", "cite_spans": [ { "start": 146, "end": 169, "text": "Hindle and Rooth (1991)", "ref_id": "BIBREF26" }, { "start": 172, "end": 186, "text": "Resnik (1993b)", "ref_id": "BIBREF41" }, { "start": 193, "end": 216, "text": "Brill and Resnik (1994)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "Data Set. 
We used the bracketed corpus of the Penn Treebank (Wall Street Journal corpus) (Marcus, Santorini, and Marcinkiewicz 1993) as our data. First we randomly selected one of the 26 directories of the WSJ files as the test data and what remains as the training data. We repeated this process 10 times and obtained 10 sets of data consisting of different training data and test data. We used these 10 data sets to conduct cross-validation as described below.", "cite_spans": [ { "start": 89, "end": 132, "text": "(Marcus, Santorini, and Marcinkiewicz 1993)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "From the test data in each data set, we extracted (verb, noun1, prep, noun2) quadruples using the extraction tool provided by the Penn Treebank called \"tgrep.\" At the same time, we obtained the answer for the PP-attachment site for each quadruple. We did not double-check if the answers provided in the Penn Treebank were actually correct or not. Then from the training data of each data set, we extracted (verb, prep) and (noun, prep) doubles, and (verb, prep, noun2) and (nounl,prep, noun2) triples using tools we developed ourselves. We also extracted quadruples from the training data as before. We then applied 12 heuristic rules to further preprocess the data, which include (1) changing the inflected form of a word to its stem form, (2) replacing numerals with the word number, (3) replacing integers between 1,900 and 2,999 with the word year, (4) replacing co., ltd., etc. with the words company, limited, etc. 11 After preprocessing there still remained some minor errors, which we did not remove further, due to the lack of a good method for doing so automatically. Table 12 shows the number of different types of data obtained by the above process.", "cite_spans": [ { "start": 50, "end": 76, "text": "(verb, noun1, prep, noun2)", "ref_id": null }, { "start": 406, "end": 418, "text": "(verb, prep)", "ref_id": null }, { "start": 423, "end": 435, "text": "(noun, prep)", "ref_id": null }, { "start": 449, "end": 468, "text": "(verb, prep, noun2)", "ref_id": null }, { "start": 473, "end": 492, "text": "(nounl,prep, noun2)", "ref_id": null } ], "ref_spans": [ { "start": 1078, "end": 1086, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "Experimental Procedure. We first compared the accuracy and coverage for each of the three disambiguation methods based on unsupervised learning: MDL, SA, and LA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "11 The experimental results obtained here are better than those obtained in our preliminary experiment (Li and Abe 1995) , in part because we only adopted rule (1) in the past. \"~t \"El", "cite_spans": [ { "start": 103, "end": 120, "text": "(Li and Abe 1995)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "For MDL, we generalized noun2 given (verb, prep, noun2) and (nounl,prep, noun2) triples as training data for each data set, using WordNet as the thesaurus in the same manner as in experiment 1. 
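The kind of heuristic preprocessing applied to the extracted tuples above (stemming inflected forms, mapping numerals to number, years to year, and company abbreviations to full words) can be sketched as below; the word lists and patterns here are illustrative placeholders, not the twelve rules actually used in the experiments.

import re

STEMS = {"acquired": "acquire", "restrictions": "restriction", "sales": "sale"}
COMPANY_WORDS = {"co.": "company", "ltd.": "limited", "inc.": "incorporated"}

def normalize(word):
    w = word.lower()
    w = STEMS.get(w, w)                           # (1) inflected form -> stem
    if re.fullmatch(r"(19|2[0-9])[0-9]{2}", w):   # (3) integers 1900-2999 -> "year"
        return "year"
    if re.fullmatch(r"[0-9][0-9,.]*", w):         # (2) other numerals -> "number"
        return "number"
    return COMPANY_WORDS.get(w, w)                # (4) co., ltd., ... -> full words

def normalize_quadruple(quad):
    verb, noun1, prep, noun2 = quad
    return (normalize(verb), normalize(noun1), prep, normalize(noun2))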
When disambiguating, we actually compared P (Classl [ verb, prep) and P(Class2 I noun1, prep) , where Class1 and Class2 are classes in the output tree cut models dominating noun2 in place of P(noun2 ] verb, prep) and P(noun2 ] nounl,prep). 12 We found that doing so gives a slightly better result. For SA, we employed a somewhat simplified version in which noun2 is generalized given (verb, prep, noun2) and (nounl,prep, noun2) triples using WordNet, and maxcl~ss,~,o,,2 A (Classi I verb, prep) and maxctass,~no,n2 A(Classi l nounl, prep) are compared for disambiguation: If the former exceeds the latter then the prepositional phrase is attached to verb, and otherwise to noun1. For LA, we estimated P(prep ] verb) and P(prep ] noun1) from the training data of each data set and compared them for disambiguation. We then evaluated the results achieved by the three methods in terms of accuracy and coverage.", "cite_spans": [ { "start": 36, "end": 55, "text": "(verb, prep, noun2)", "ref_id": null }, { "start": 60, "end": 79, "text": "(nounl,prep, noun2)", "ref_id": null }, { "start": 238, "end": 259, "text": "(Classl [ verb, prep)", "ref_id": null }, { "start": 264, "end": 281, "text": "P(Class2 I noun1,", "ref_id": null }, { "start": 282, "end": 287, "text": "prep)", "ref_id": null }, { "start": 578, "end": 597, "text": "(verb, prep, noun2)", "ref_id": null }, { "start": 602, "end": 621, "text": "(nounl,prep, noun2)", "ref_id": null }, { "start": 667, "end": 688, "text": "(Classi I verb, prep)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "Here, coverage refers to the proportion as a percentage, of the test quadruples on which the disambiguation method could make a decision, and accuracy refers to the proportion of correct decisions among them. In Figure 10 , we plot the accuracy-coverage curves for the three methods. In plotting these curves, the attachment site is determined by simply seeing if the difference between the appropriate measures for the two alternatives, be it probabilities or selectional association values, exceeds a threshold. For each method, the threshold was set successively to 0, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, and 0.75. When the difference between the two measures is less than a threshold, we rule that no decision can be made. These curves were obtained by averaging over the 10 data sets. We also implemented the exact method proposed by Hindle and Rooth (1991) , which makes disambiguation judgement using the t-score. Figure 10 shows the result as LA.t, where the threshold for t-score is set to 1.28 (significance level of 90 percent.) 
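Concretely, the thresholded decision and the accuracy/coverage bookkeeping used to plot these curves can be sketched as follows; p_verb and p_noun stand for the two estimated (class-based) probabilities being compared, and the function names are our own.

def decide(p_verb, p_noun, threshold):
    if p_verb - p_noun > threshold:
        return "verb"
    if p_noun - p_verb > threshold:
        return "noun1"
    return None                       # difference below threshold: no decision

def accuracy_coverage(examples, threshold):
    # examples: list of (p_verb, p_noun, gold) with gold in {"verb", "noun1"}
    decided = correct = 0
    for p_verb, p_noun, gold in examples:
        answer = decide(p_verb, p_noun, threshold)
        if answer is not None:
            decided += 1
            correct += (answer == gold)
    coverage = 100.0 * decided / len(examples) if examples else 0.0
    accuracy = 100.0 * correct / decided if decided else 0.0
    return accuracy, coverage

# One (accuracy, coverage) point per threshold value traces out the curve:
# [accuracy_coverage(data, t) for t in (0, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 0.75)]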
From Figure 10 we see that with respect to accuracy-coverage curves, MDL outperforms both SA and LA throughout, while SA is better than LA.", "cite_spans": [ { "start": 836, "end": 859, "text": "Hindle and Rooth (1991)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 212, "end": 221, "text": "Figure 10", "ref_id": "FIGREF8" }, { "start": 918, "end": 927, "text": "Figure 10", "ref_id": "FIGREF8" }, { "start": 1042, "end": 1051, "text": "Figure 10", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "Next, we tested the method of applying a default rule after applying each method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "That is, attaching (prep, noun2) to verb for the part of the test data for which no decision was made by the method in question. 13 We refer to these combined methods as MDL+Default, SA+Default, LA+Default, and LA.t+Default. Table 13 shows the results, again averaged over the 10 data sets. Finally, we used the transformation-based error-driven learning (TEL) to acquire transformation rules for each data set and applied the obtained rules to disambiguate the test data. The average number of obtained rules for a data set was 2,752.3. Table 13 shows the disambiguation result averaged over the 10 data sets. From Table 13 , we see that TEL performs the best, edging over the second place MDL+Default by a small margin, and then followed by LA+Default, and SA+Default. Below we discuss further observations concerning these results.", "cite_spans": [ { "start": 19, "end": 32, "text": "(prep, noun2)", "ref_id": null } ], "ref_spans": [ { "start": 225, "end": 233, "text": "Table 13", "ref_id": "TABREF0" }, { "start": 538, "end": 546, "text": "Table 13", "ref_id": "TABREF0" }, { "start": 616, "end": 624, "text": "Table 13", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "MDL and SA. According to our experimental results, the accuracy and coverage of MDL appear to be somewhat better than those of SA. As Resnik (1993b) pointed ~ P(qv,r) out, the use of selectional association Iu~ ~ seems to be appropriate for cognitive modeling. Our experiments show, however, that the generalization method currently employed by Resnik has a tendency to overfit the data. Table 14 shows example generalization results for MDL (with classes with probability less than 0.05 discarded) and SA. Note that MDL tends to select a tree cut closer to the root of the thesaurus tree. This is probably the key reason why MDL has a wider coverage than SA for the same degree of accuracy. One may be concerned that MDL is \"overgeneralizing\" here, 14 but as shown in Figure 10 , its disambiguation accuracy does not seem to be degraded.", "cite_spans": [ { "start": 134, "end": 166, "text": "Resnik (1993b) pointed ~ P(qv,r)", "ref_id": null } ], "ref_spans": [ { "start": 388, "end": 396, "text": "Table 14", "ref_id": "TABREF0" }, { "start": 769, "end": 778, "text": "Figure 10", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "Another problem that must be dealt with concerning SA is how to remove noise (resulting, for example, from erroneous extraction) from the generalization results. 
P(Clv,r) Since SA estimates the ratio between two probability values, namely -~y-, the generalization result may be lead astray if one of the estimates of P (C I v, r) and P(C) is unreliable. For instance, a high estimated value for/drop, bead, pearl / at protect against Table 14 is rather odd, and is because the estimate of P(C) is unreliable (too small). This problem apparently costs SA a nonnegligible drop in disambiguation accuracy. In contrast, MDL does not suffer from this problem since a high estimated probability value is only possible with high frequency, which cannot result just from extraction errors. Consider, for example, the occurrence of car in the data shown in Figure 8 , which has supposedly resulted from an erroneous extraction. The effect of this datum gets washed away, as the estimated probability for VEHICLE, to which car has been generalized, is negligible. On the other hand, SA has a merit not shared by MDL, namely its use of the association ratio factors out the effect of absolute frequencies of words, and focuses Some hard examples for LA.", "cite_spans": [ { "start": 319, "end": 329, "text": "(C I v, r)", "ref_id": null } ], "ref_spans": [ { "start": 434, "end": 442, "text": "Table 14", "ref_id": "TABREF0" }, { "start": 848, "end": 856, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Experiment 2: PP-Attachment Disambiguation", "sec_num": "4.2" }, { "text": "Attached to noun1 acquire interest in year buy stock in trade ease restriction on export forecast sale for year make payment on million meet standard for resistance reach agreement in august show interest in session win verdict in winter acquire interest in firm buy stock in index ease restriction on type forecast sale for venture make payment on debt meet standard for car reach agreement in principle show interest in stock win verdict in case on their co-occurrence relation. Since both MDL and SA have pros and cons, it would be desirable to develop a methodology that combines the merits of the two methods (cf. Abe and Li [1996] ).", "cite_spans": [ { "start": 619, "end": 636, "text": "Abe and Li [1996]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Attached to verb", "sec_num": null }, { "text": "MDL and LA. LA makes its disambiguation decision completely ignoring noun2. As Resnik (1993b) pointed out, if we hope to improve disambiguation performance by increasing training data, we need a richer model such as those used in MDL and SA. We found that 8.8% of the quadruples in our entire test data were such that they shared the same verb, prep, noun1 but had different noun2, and their PP-attachment sites go both ways in the same data, i.e., both to verb and to noun1. Clearly, for these examples, the PP-attachment site cannot be reliably determined without knowing noun2. Table 15 shows some of these examples. (We adopted the attachment sites given in the Penn Tree Bank, without correcting apparently wrong judgements.)", "cite_spans": [ { "start": 79, "end": 93, "text": "Resnik (1993b)", "ref_id": "BIBREF41" } ], "ref_spans": [ { "start": 581, "end": 589, "text": "Table 15", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Attached to verb", "sec_num": null }, { "text": "MDL and TEL. We chose TEL as an example of the quadruple approach. This method was designed specifically for the purpose of resolving PP-attachment ambiguities, and seems to perform slightly better than ours. 
As we remarked earlier, however, the input data required by our method (triples) could be generated automatically from unparsed corpora making use of existing heuristic rules (Brent 1993; Smadja 1993) , although for the experiments we report here we used a parsed corpus. Thus it would seem to be easier to obtain more data in the future for MDL and other methods based on unsupervised learning. Also note that our method of generalizing values of a case slot can be used for purposes other than disambiguation.", "cite_spans": [ { "start": 384, "end": 396, "text": "(Brent 1993;", "ref_id": "BIBREF4" }, { "start": 397, "end": 409, "text": "Smadja 1993)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Attached to verb", "sec_num": null }, { "text": "We proposed a new method of generalizing case frames. Our approach of applying MDL to estimate a tree cut model in an existing thesaurus is not limited to just the problem of generalizing values of a case frame slot. It is potentially useful in other natural language processing tasks, such as the problem of estimating n-gram models (Brown et al. 1992) or the problem of semantic tagging (Cucchiarelli and Velardi 1997) . We believe that our method has the following merits: (1) it is theoretically sound; (2) it is computationally efficient; (3) it is robust against noise. Our experimental results indicate that the performance of our method is better than, or at least comparable to, existing methods. One of the disadvantages of our method is that its performance depends on the structure of the particular thesaurus used. This, however, is a problem commonly shared by any generalization method that uses a thesaurus as prior knowledge.", "cite_spans": [ { "start": 334, "end": 353, "text": "(Brown et al. 1992)", "ref_id": "BIBREF11" }, { "start": 407, "end": 420, "text": "Velardi 1997)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "Appendix A: Proof of Proposition 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "For an arbitrary subtree T' of a thesaurus tree T and an arbitrary tree cut model M = (F,0) of T, let MT, = (FT,,0T,) denote the submodel of M that is contained in T'. Also for any sample S and any subtree T' of T, let ST, denote the subsample of S contained in T'. (Note that MT = M, ST = S.) Then define, in general for any submodel MT, and subsample ST,, L(ST, [ FT,, ~T') to be the data description length of subsample ST, using submodel MT,, L(~T, [ FT,) to be the parameter description length for the submodel MT,, and L' (MT,,ST,) to be L(ST, I FT',~T') q-L(~T, [ FT,) . (Note that, when calculating the parameter description length for a submodel, the sample size of the entire sample ]S] is used.) First note that for any (sub)tree T, (sub)model MT = (FT, ~T) contained in T, and (sub)sample ST contained in T, and T's child subtrees Ti : i = 1,..., k, we have: k L(ST I PT, g) = L(ST, I PT,,g,) 17i=1 provided that Fz is not a single node (root node of T). 
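Because the inline formulas above are garbled in this version, here is a restatement of the notation and of identity (17), reconstructed from the surrounding definitions:

\[
L'(M_{T'}, S_{T'}) \;=\; L(S_{T'} \mid \Gamma_{T'}, \hat{\theta}_{T'}) \;+\; L(\hat{\theta}_{T'} \mid \Gamma_{T'})
\]
\[
L(S_T \mid \Gamma_T, \hat{\theta}_T) \;=\; \sum_{i=1}^{k} L(S_{T_i} \mid \Gamma_{T_i}, \hat{\theta}_{T_i}) \tag{17}
\]

where T_1, ..., T_k are the child subtrees of T's root, and the subscripted quantities denote the restrictions of the model, the cut, the parameters, and the sample to each subtree.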
This follows from the mutual disjointness of the T_i, and the independence of the parameters in the T_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "We also have, when T is a proper subtree of the thesaurus tree:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(\\hat{\\theta}_T \\mid \\Gamma_T) = \\sum_{i=1}^{k} L(\\hat{\\theta}_{T_i} \\mid \\Gamma_{T_i})", "eq_num": "(18)" } ], "section": "Proof", "sec_num": null }, { "text": "Since the number of free parameters of a model in the entire thesaurus tree equals the number of nodes in the model minus one, due to the stochastic condition (that the probability parameters must sum to one), when T equals the entire thesaurus tree the parameter description length for a tree cut model of T should theoretically be:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(\\hat{\\theta}_T \\mid \\Gamma_T) = \\sum_{i=1}^{k} L(\\hat{\\theta}_{T_i} \\mid \\Gamma_{T_i}) - \\frac{\\log |S|}{2}", "eq_num": "(19)" } ], "section": "Proof", "sec_num": null }, { "text": "where |S| is the size of the entire sample. Since the second term -(log |S|)/2 in (19) is constant once the input sample S is fixed, it is irrelevant for the purpose of finding a model with the minimum description length. We will thus use the identity (18) both when T is the entire tree and when it is a proper subtree. (This allows us to use the same recursive algorithm, Find-MDL, in all cases.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "It follows from (17) and (18) that the minimization of description length can be done essentially independently for each subtree. Namely, if we let L'_min(M_T, S_T) denote the minimum description length (as defined by (17) and (18)) achievable for (sub)model M_T on (sub)sample S_T contained in (sub)tree T, \\hat{P}_S(\\nu) the MLE estimate for node \\nu using the entire sample S, and root(T) the root node of tree T, then we have:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L'_{min}(M_T, S_T) = \\min \\Big\\{ \\sum_{i=1}^{k} L'_{min}(M_{T_i}, S_{T_i}),\\; L'\\big(([\\mathrm{root}(T)], [\\hat{P}_S(\\mathrm{root}(T))]), S_T\\big) \\Big\\}", "eq_num": "(20)" } ], "section": "Proof", "sec_num": null }, { "text": "The rest of the proof proceeds by induction. 
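For concreteness, the recursion expressed by (20) can be sketched as follows. This is a hypothetical re-implementation, not the original Find-MDL pseudocode (whose lines 8, 9, 11, and 13 the proof refers to); the description-length helper assumes that each cut member carries the frequency and class size fixed by the starting cut, with a class's probability spread uniformly over its member nouns.

from dataclasses import dataclass, field
from math import log2

@dataclass
class Node:
    freq: float = 0.0      # frequency assigned to this node by the starting cut
    size: int = 1          # number of nouns in the class this node represents
    children: list = field(default_factory=list)

def subtree_totals(node):
    # Total frequency and number of nouns in the whole subtree rooted at node.
    f, s = node.freq, node.size
    for c in node.children:
        cf, cs = subtree_totals(c)
        f, s = f + cf, s + cs
    return f, s

def description_length(members, S):
    # L' for a cut given as (freq, size) pairs: data description length under
    # the MLE plus (k/2) log |S|, with k the number of cut members and S the
    # size of the entire sample.
    data = sum(-f * log2(f / (S * size)) for f, size in members if f > 0)
    return data + 0.5 * len(members) * log2(S)

def find_mdl(node, S):
    # Return (cut, dl) with minimum description length within node's subtree.
    root_cut = [node]
    root_dl = description_length([subtree_totals(node)], S)
    if not node.children:
        return root_cut, root_dl
    child_cut, child_dl = [], 0.0
    for c in node.children:
        cut, dl = find_mdl(c, S)
        child_cut += cut
        child_dl += dl
    # (20): keep whichever of the two alternatives is shorter.
    return (child_cut, child_dl) if child_dl < root_dl else (root_cut, root_dl)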
First, when T is of a single leaf node, the submodel consisting solely of the node and the MLE of the generation probability for the class represented by T is returned, which is clearly a submodel with minimum description length in the subtree T. Next, inductively assume that Find-MDL(T ~) correctly outputs a (sub)model with the minimum description length for any tree T' of size less than n. Then, given a tree T of size n whose root node has at least two children, say Ti : i = 1 ..... k, for each Ti, Find-MDL(Ti) returns a (sub)model with the minimum description length by the inductive hypothesis. Then, since 20holds, whichever way the if-clause on lines 8, 9 of Find-MDL evaluates to, what is returned on line 11 or line 13 will still be a (sub)model with the minimum description length, completing the inductive step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "It is easy to see that the running time of the algorithm is linear in both the number of leaf nodes of the input thesaurus tree and the input sample size. \u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "The model used byPereira, Tishby, and Lee (1993) is indeed along this direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Here and throughout, log denotes the logarithm to the base 2. For reasons why Equation 8 holds, see, for example,Quinlan and Rivest (1989).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The process of finding the MDL model tends to be computationally demanding and is often intractable. When the model class under consideration is restricted to tree structures, however, dynamic programming is often applicable and the MDL model can be efficiently found. For example,Rissanen (1995) has devised an algorithm for learning decision trees. 8 Consider, for example, the case when the co-occurrence data is given as f(swallow) = 2,f(crow) = 2,f(eagle) = 2,f(bird) = 2 for the problem in Section 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Recall that a node in WordNet represents a word sense and not a word; noun2 can belong to several classes in the thesaurus. We thus use maxciassignou,2(P(Classi [ verb, prep)) and maxclassi gno,m2( P( Classi [ nounl, prep) ) in place of P( Classl ] verb, prep) and P( Class2[ nounl, prep).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Interestingly, for the entire data set it is more favorable to attach (prep, noun2) to noun1, but for what remains after applying LA and MDL, it turns out to be more favorable to attach(prep, noun2) to verb.14 Note that in Experiment 1, there were more data available, and thus the data were more appropriately generalized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to K. Nakamura and T. Fujita of NEC C&C Res. Labs. for their constant encouragement. We thank K. Yaminishi and J. Takeuchi of C&C Res. Labs. for their suggestions and comments. We thank T. Futagami of NIS for his programming efforts. We also express our special appreciation to the two anonymous reviewers who have provided many valuable comments. 
We acknowledge the ACL for providing the ACL/DCI CD-ROM, LDC of the University of Pennsylvania for providing the Penn Treebank corpus data, and Princeton University for providing WordNet, and E. Brill and P. Resnik for providing their PP-attachment disambiguation program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning word association norms using tree cut pair models", "authors": [ { "first": "Naoki", "middle": [], "last": "Abe", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Thirteenth International Conference on Machine Learning", "volume": "", "issue": "", "pages": "3--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abe, Naoki and Hang Li. 1996. Learning word association norms using tree cut pair models. Proceedings of the Thirteenth International Conference on Machine Learning, pages 3-11.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Two methods for ALT-J/E translation rules from examples and a semantic hierarchy", "authors": [ { "first": "Hussein", "middle": [], "last": "Almuallim", "suffix": "" }, { "first": "Yasuhiro", "middle": [], "last": "Akiba", "suffix": "" }, { "first": "Takefumi", "middle": [], "last": "Yamazaki", "suffix": "" }, { "first": "Akio", "middle": [], "last": "Yokoo", "suffix": "" }, { "first": "Shigeo", "middle": [], "last": "Kaneda", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the Fifteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "57--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Almuallim, Hussein, Yasuhiro Akiba, Takefumi Yamazaki, Akio Yokoo, and Shigeo Kaneda. 1994. Two methods for ALT-J/E translation rules from examples and a semantic hierarchy. Proceedings of the Fifteenth International Conference on Computational Linguistics, pages 57-63.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Training and scaling preference functions for disambiguation", "authors": [ { "first": "Hiyan", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "David", "middle": [], "last": "Carter", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "4", "pages": "635--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alshawi, Hiyan and David Carter. 1994. Training and scaling preference functions for disambiguation. Computational Linguistics, 20(4):635-648.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Minimum complexity density estimation", "authors": [ { "first": "Andrew", "middle": [ "R" ], "last": "Barron", "suffix": "" }, { "first": "Thomas", "middle": [ "M" ], "last": "Cover", "suffix": "" } ], "year": 1991, "venue": "IEEE Transaction on Information Theory", "volume": "37", "issue": "4", "pages": "1034--1054", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barron, Andrew R. and Thomas M. Cover. 1991. Minimum complexity density estimation. IEEE Transaction on Information Theory, 37(4):1034--1054.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "From grammar to lexicon: Unsupervised learning of lexical syntax", "authors": [ { "first": "Michael", "middle": [ "R" ], "last": "Brent", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "243--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brent, Michael R. 1993. 
From grammar to lexicon: Unsupervised learning of lexical syntax. Computational Linguistics, 19(2):243-262.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Distributional regularity and phonotactic constraints are useful for segmentation", "authors": [ { "first": "Michael", "middle": [ "R" ], "last": "Brent", "suffix": "" }, { "first": "Timothy", "middle": [ "A" ], "last": "Cartwright", "suffix": "" } ], "year": 1996, "venue": "Cognition", "volume": "61", "issue": "", "pages": "93--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brent, Michael R. and Timothy A. Cartwright. 1996. Distributional regularity and phonotactic constraints are useful for segmentation. Cognition, 61:93-125.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Discovering morphemic suffixes: A case study in minimum description length induction", "authors": [ { "first": "Michael", "middle": [ "R" ], "last": "Brent", "suffix": "" }, { "first": "K", "middle": [], "last": "Sreerama", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Murthy", "suffix": "" }, { "first": "", "middle": [], "last": "Lundberg", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brent, Michael R., Sreerama K. Murthy, and Andrew Lundberg. 1995. Discovering morphemic suffixes: A case study in minimum description length induction. Proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "4", "pages": "543--565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brill, Eric. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543-565.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A rule-based approach to prepositional phrase attachment disambiguation", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brill, Eric and Philip Resnik. 1994. 
A rule-based approach to prepositional phrase attachment disambiguation.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Proceedings of the Fifteenth International Conference on Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1198--1204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Proceedings of the Fifteenth International Conference on Computational Linguistics, pages 1198-1204.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic extraction of subcategorization from corpora", "authors": [ { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Briscoe, Ted and John Carroll. 1997. Automatic extraction of subcategorization from corpora. Proceedings of the Fifth Conference on Applied Natural Language Processing.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Class-based n-gram models of natural language", "authors": [ { "first": "Peter", "middle": [ "E" ], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [ "Della" ], "last": "Vincent", "suffix": "" }, { "first": "Peter", "middle": [ "V" ], "last": "Pietra", "suffix": "" }, { "first": "Jenifer", "middle": [ "C" ], "last": "Desouza", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Lai", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "4", "pages": "283--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, Peter E, Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):283-298.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Segmenting speech without a lexicon: The roles of phonotactics and speech source", "authors": [ { "first": "Timothy", "middle": [ "A" ], "last": "Cartwright", "suffix": "" }, { "first": "Michael", "middle": [ "R" ], "last": "Brent", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the First Meeting of the ACL Special Interest Group in Computational Phonology", "volume": "", "issue": "", "pages": "83--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cartwright, Timothy A. and Michael R. Brent. 1994. Segmenting speech without a lexicon: The roles of phonotactics and speech source. Proceedings of the First Meeting of the ACL Special Interest Group in Computational Phonology, pages 83-90.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "GPSM: A generalized probabilistic semantic model for ambiguity resolution", "authors": [ { "first": "Jing-Shin", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yih-Fen", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 30th Annual Meeting", "volume": "", "issue": "", "pages": "177--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, Jing-Shin, Yih-Fen Luo, and Keh-Yih Su. 1992. GPSM: A generalized probabilistic semantic model for ambiguity resolution. Proceedings of the 30th Annual Meeting, pages 177-184. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Prepositional phrase attachment through a backed-off model", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "James", "middle": [], "last": "Brooks", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Third Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, Michael and James Brooks. 1995. Prepositional phrase attachment through a backed-off model. Proceedings of the Third Workshop on Very Large Corpora.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Elements of Information Theory", "authors": [ { "first": "Thomas", "middle": [ "M" ], "last": "Cover", "suffix": "" }, { "first": "Joy", "middle": [ "A" ], "last": "Thomas", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cover, Thomas M. and Joy A. Thomas. 1991. Elements of Information Theory. John Wiley & Sons Inc., New York.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Automatic selection of class labels from a thesaurus for an effective semantic tagging of corpora", "authors": [ { "first": "Alessandro", "middle": [], "last": "Cucchiareui", "suffix": "" }, { "first": "Paola", "middle": [], "last": "Velardi", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "380--387", "other_ids": {}, "num": null, "urls": [], "raw_text": "CucchiareUi, Alessandro and Paola Velardi. 1997. Automatic selection of class labels from a thesaurus for an effective semantic tagging of corpora. Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 380-387.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Contextual word similarity and estimation from sparse data", "authors": [ { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Makovitch", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 30th Annual Meeting", "volume": "", "issue": "", "pages": "164--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dagan, Ido, Shaul Marcus, and Shaul Makovitch. 1992. Contextual word similarity and estimation from sparse data. Proceedings of the 30th Annual Meeting, pages 164-171. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Similarity-based estimation of word cooccurrence probabilities", "authors": [ { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 32nd Annual Meeting", "volume": "", "issue": "", "pages": "272--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dagan, Ido, Fernando Pereira, and Lillian Lee. 1994. Similarity-based estimation of word cooccurrence probabilities. Proceedings of the 32nd Annual Meeting, pages 272-278. 
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Discovering planar segregations", "authors": [ { "first": "T", "middle": [], "last": "Ellison", "suffix": "" }, { "first": "", "middle": [], "last": "Mark", "suffix": "" } ], "year": 1991, "venue": "Proceedings of AAAI Spring Symposium on Machine Learning of Natural Language and Ontology", "volume": "", "issue": "", "pages": "42--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellison, T. Mark. 1991. Discovering planar segregations. Proceedings of AAAI Spring Symposium on Machine Learning of Natural Language and Ontology, pages 42-47.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Discovering vowel harmony", "authors": [ { "first": "T", "middle": [], "last": "Ellison", "suffix": "" }, { "first": "", "middle": [], "last": "Mark", "suffix": "" } ], "year": 1992, "venue": "Machine Learning of Natural Language", "volume": "", "issue": "", "pages": "205--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellison, T. Mark. 1992. Discovering vowel harmony. In Walter Daelmans and David Powers, editors, Background and Experiments in Machine Learning of Natural Language, pages 205-207.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "An experiment on learning appropriate selectional restrictions from a parsed corpus", "authors": [ { "first": "Francesc", "middle": [], "last": "Framis", "suffix": "" }, { "first": "", "middle": [], "last": "Ribas", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the Fifteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "769--774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Framis, Francesc Ribas. 1994. An experiment on learning appropriate selectional restrictions from a parsed corpus. Proceedings of the Fifteenth International Conference on Computational Linguistics, pages 769-774.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Explorations in Automatic Thesaurus Discovery", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grefenstette, Gregory. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, Boston.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Acquisition of selectional patterns", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "John", "middle": [], "last": "Sterling", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Fourteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "658--664", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, Ralph and John Sterling. 1992. Acquisition of selectional patterns. Proceedings of the Fourteenth International Conference on Computational Linguistics, pages 658-664.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Generalizing automatically generated selectional patterns", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "John", "middle": [], "last": "Sterling", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the \u2022 Fifteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "742--747", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, Ralph and John Sterling. 
1994. Generalizing automatically generated selectional patterns. Proceedings of the \u2022 Fifteenth International Conference on Computational Linguistics, pages 742-747.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A minimum description length approach to grammar inference", "authors": [ { "first": "Peter", "middle": [], "last": "Grunwald", "suffix": "" } ], "year": 1996, "venue": "Symbolic, Connectionist and Statistical Approaches to Learning for Natural Language Processing", "volume": "", "issue": "", "pages": "203--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grunwald, Peter. 1996. A minimum description length approach to grammar inference. In S. Wemter, E. Riloff, and G. Scheler, editors, Symbolic, Connectionist and Statistical Approaches to Learning for Natural Language Processing, Lecture Note in AI. Springer Verlag, pages 203-216.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Structural ambiguity and lexical relations", "authors": [ { "first": "Donald", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "Mats", "middle": [], "last": "Rooth", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th Annual Meeting", "volume": "", "issue": "", "pages": "229--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, Donald and Mats Rooth. 1991. Structural ambiguity and lexical relations. Proceedings of the 29th Annual Meeting, pages 229-236. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Structural ambiguity and lexical relations", "authors": [ { "first": "Donald", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "Mats", "middle": [], "last": "Rooth", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "103--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, Donald and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103-120.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Two principles of parse preference", "authors": [ { "first": "Jerry", "middle": [ "R" ], "last": "Hobbs", "suffix": "" }, { "first": "John", "middle": [], "last": "Bear", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the Thirteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "162--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hobbs, Jerry R. and John Bear. 1990. Two principles of parse preference. Proceedings of the Thirteenth International Conference on Computational Linguistics, pages 162-167.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Women, Fire, and Dangerous Things: What Categories Reveal about the Mind", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lakoff, George. 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. 
The University of Chicago Press.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Generalizing case frames using a thesaurus and the MDL principle", "authors": [ { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Abe", "suffix": "" } ], "year": 1995, "venue": "Proceedings of Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "239--248", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Hang and Naoki Abe. 1995. Generalizing case frames using a thesaurus and the MDL principle. Proceedings of Recent Advances in Natural Language Processing, pages 239-248.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning dependencies between case frame slots", "authors": [ { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Abe", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Sixteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "10--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Hang and Naoki Abe. 1996. Learning dependencies between case frame slots. Proceedings of the Sixteenth International Conference on Computational Linguistics, pages 10-15.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Automatic acquisition of a large subcategorization dictionary from corpora", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 30th Annual Meeting", "volume": "", "issue": "", "pages": "235--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, Christopher D. 1992. Automatic acquisition of a large subcategorization dictionary from corpora. Proceedings of the 30th Annual Meeting, pages 235-242. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(1):313-330.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "WordNet: A lexical database for English", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "", "issue": "", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, George A. 1995. WordNet: A lexical database for English. Communications of the ACM, pages 39-41.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Machine translation by case generalization", "authors": [ { "first": "Hiroshi", "middle": [], "last": "Nomiyama", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Fourteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "714--720", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nomiyama, Hiroshi. 1992. Machine translation by case generalization. 
Proceedings of the Fourteenth International Conference on Computational Linguistics, pages 714-720.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Distributional clustering of English words", "authors": [ { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Naftali", "middle": [], "last": "Tishby", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Annual Meeting", "volume": "", "issue": "", "pages": "183--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, Fernando, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. Proceedings of the 31st Annual Meeting, pages 183-190.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Inferring decision trees using the minimum description length principle", "authors": [ { "first": "J", "middle": [], "last": "Quinlan", "suffix": "" }, { "first": "Ronald", "middle": [ "L" ], "last": "Ross", "suffix": "" }, { "first": "", "middle": [], "last": "Rivest", "suffix": "" } ], "year": 1989, "venue": "Information and Computation", "volume": "80", "issue": "", "pages": "227--248", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinlan, J. Ross and Ronald L. Rivest. 1989. Inferring decision trees using the minimum description length principle. Information and Computation, 80:227-248.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A maximum entropy model for prepositional phrase attachment", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Reynar", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 1994, "venue": "Proceedings of ARPA Workshop on Human Language Technology", "volume": "", "issue": "", "pages": "250--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi, Adwait, Jeff Reynar, and Salim Roukos. 1994. A maximum entropy model for prepositional phrase attachment. Proceedings of ARPA Workshop on Human Language Technology, pages 250-255.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "WordNet and distributional analysis: A class-based approach to lexical discovery", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1992, "venue": "Proceedings of AAAI Workshop on Statistically-based NLP Techniques", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip. 1992. WordNet and distributional analysis: A class-based approach to lexical discovery. Proceedings of AAAI Workshop on Statistically-based NLP Techniques.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Selection and Information: A Class-based Approach to Lexical Relationships", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip. 1993a. Selection and Information: A Class-based Approach to Lexical Relationships. Ph.D. Thesis, Univ. 
of Pennsylvania.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Semantic classes and syntactic ambiguity", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1993, "venue": "Proceedings of ARPA Workshop on Human Language Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, Philip. 1993b. Semantic classes and syntactic ambiguity. Proceedings of ARPA Workshop on Human Language Technology.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Modeling by shortest data description. Automatic", "authors": [ { "first": "Jorma", "middle": [], "last": "Rissanen", "suffix": "" } ], "year": 1978, "venue": "", "volume": "14", "issue": "", "pages": "37--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rissanen, Jorma. 1978. Modeling by shortest data description. Automatic, 14:37-38.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "A universal prior for integers and estimation by minimum description length", "authors": [ { "first": "Jorma", "middle": [], "last": "Rissanen", "suffix": "" } ], "year": 1983, "venue": "The Annals of Statistics", "volume": "11", "issue": "2", "pages": "416--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rissanen, Jorma. 1983. A universal prior for integers and estimation by minimum description length. The Annals of Statistics, 11(2):416--431.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Universal coding, information, predication and estimation", "authors": [ { "first": "Jorma", "middle": [], "last": "Rissanen", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rissanen, Jorma. 1984. Universal coding, information, predication and estimation.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Stochastic complexity and modeling", "authors": [ { "first": "Jorma", "middle": [], "last": "Rissanen", "suffix": "" } ], "year": 1986, "venue": "The Annals of Statistics", "volume": "14", "issue": "3", "pages": "1080--1100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rissanen, Jorma. 1986. Stochastic complexity and modeling. The Annals of Statistics, 14(3):1080-1100.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Stochastic Complexity in Statistical Inquiry", "authors": [ { "first": "Jorma", "middle": [], "last": "Rissanen", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rissanen, Jorma. 1989. Stochastic Complexity in Statistical Inquiry. World Scientific Publishing Co., Singapore.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Stochastic complexity in learning", "authors": [ { "first": "Jorma", "middle": [], "last": "Rissanen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Second European Conference on Computational Learning Theory (Euro Colt'95)", "volume": "", "issue": "", "pages": "196--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rissanen, Jorma. 1995. Stochastic complexity in learning. 
Proceedings of the Second European Conference on Computational Learning Theory (Euro Colt'95), pages 196-210.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "New techniques for context modeling", "authors": [ { "first": "Eric", "middle": [], "last": "Ristad", "suffix": "" }, { "first": "Robert", "middle": [ "G" ], "last": "Sven", "suffix": "" }, { "first": "", "middle": [], "last": "Thomas", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ristad, Eric Sven and Robert G. Thomas. 1995. New techniques for context modeling. Proceedings of the 33rd Annual Meeting. Association for Computational Linguistics.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Estimation of the dimension of a model", "authors": [ { "first": "G", "middle": [], "last": "Schwarz", "suffix": "" } ], "year": 1978, "venue": "Annals of Statistics", "volume": "6", "issue": "", "pages": "416--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schwarz, G. 1978. Estimation of the dimension of a model. Annals of Statistics, 6:416-446.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Automatic learning for semantic collocation", "authors": [ { "first": "", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Jeremy", "middle": [ "J" ], "last": "Satoshi", "suffix": "" }, { "first": "Sofia", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Ananiadou", "suffix": "" }, { "first": "", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Third Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "104--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sekine, Satoshi, Jeremy J. Carroll, Sofia Ananiadou, and Jun'ichi Tsujii. 1992. Automatic learning for semantic collocation. Proceedings of the Third Conference on Applied Natural Language Processing, pages 104-110.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Retrieving collocations from text: Xtract", "authors": [ { "first": "Frank", "middle": [], "last": "Smadja", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "143--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smadja, Frank. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143-177.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "A formal theory of inductive inference 1 and 2. Information and Control", "authors": [ { "first": "R", "middle": [ "J" ], "last": "Solomonoff", "suffix": "" } ], "year": 1964, "venue": "", "volume": "7", "issue": "", "pages": "1--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Solomonoff, R.J. 1964. A formal theory of inductive inference 1 and 2. Information and Control, 7:1-22;224-254.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Inducing probabilistic grammars by bayesian model merging", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Omohundro", "suffix": "" } ], "year": 1994, "venue": "Grammatical Inference and Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, Andreas and Stephen Omohundro. 1994. Inducing probabilistic grammars by bayesian model merging. In Rafael C. 
Carrasco and Jose Oncina, editors, Grammatical Inference and Applications.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Verbal case frame acquisition from a bilingual corpus: Gradual knowledge acquisition", "authors": [ { "first": "Hideki", "middle": [], "last": "Tanaka", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the Fifteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "727--731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tanaka, Hideki. 1994. Verbal case frame acquisition from a bilingual corpus: Gradual knowledge acquisition. Proceedings of the Fifteenth International Conference on Computational Linguistics, pages 727-731.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Decision tree learning algorithm with structured attributes: Application to verbal case frame acquisition", "authors": [ { "first": "Hideki", "middle": [], "last": "Tanaka", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Sixteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "943--948", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tanaka, Hideki. 1996. Decision tree learning algorithm with structured attributes: Application to verbal case frame acquisition. Proceedings of the Sixteenth International Conference on Computational Linguistics, pages 943-948.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Learning probabilistic subcategorization preference by identifying case dependencies and optimal noun class generalization level", "authors": [ { "first": "Takehito", "middle": [], "last": "Utsuro", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "364--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Utsuro, Takehito and Yuji Matsumoto. 1997. Learning probabilistic subcategorization preference by identifying case dependencies and optimal noun class generalization level. Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 364-371.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Lexical knowledge acquisition from bilingual corpora", "authors": [ { "first": "Takehito", "middle": [], "last": "Utsuro", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Nagao", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Fourteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "581--587", "other_ids": {}, "num": null, "urls": [], "raw_text": "Utsuro, Takehito, Yuji Matsumoto, and Makoto Nagao. 1992. Lexical knowledge acquisition from bilingual corpora. 
Proceedings of the Fourteenth International Conference on Computational Linguistics, pages 581-587.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "How to encode semantic knowledge: A method for meaning representation and computer-aided acquisition", "authors": [ { "first": "Paola", "middle": [], "last": "Velardi", "suffix": "" }, { "first": "Maria", "middle": [ "Teresa" ], "last": "Pazienza", "suffix": "" }, { "first": "Michela", "middle": [], "last": "Fasolo", "suffix": "" } ], "year": 1991, "venue": "Computational Linguistics", "volume": "17", "issue": "2", "pages": "153--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Velardi, Paola, Maria Teresa Pazienza, and Michela Fasolo. 1991. How to encode semantic knowledge: A method for meaning representation and computer-aided acquisition. Computational Linguistics, 17(2):153-170.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "An information measure for classification", "authors": [ { "first": "C", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "D", "middle": [ "M" ], "last": "Boulton", "suffix": "" } ], "year": 1968, "venue": "Computer Journal", "volume": "11", "issue": "", "pages": "185--195", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wallace, C. and D. M. Boulton. 1968. An information measure for classification. Computer Journal, 11:185-195.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Single-factor analysis by minimum message length estimation", "authors": [ { "first": "C", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "P", "middle": [], "last": "Freeman", "suffix": "" } ], "year": 1992, "venue": "Journal of Royal Statistical Society, B", "volume": "54", "issue": "", "pages": "195--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wallace, C. and P. Freeman. 1992. Single-factor analysis by minimum message length estimation. Journal of Royal Statistical Society, B, 54:195-209.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Automatic acquisition of the lexical semantics of verbs from sentence frames", "authors": [ { "first": "Mort", "middle": [], "last": "Webster", "suffix": "" }, { "first": "Mitch", "middle": [], "last": "Marcus", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the 27th Annual Meeting", "volume": "", "issue": "", "pages": "177--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Webster, Mort and Mitch Marcus. 1989. Automatic acquisition of the lexical semantics of verbs from sentence frames. Proceedings of the 27th Annual Meeting, pages 177-184. Association for Computational Linguistics.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Empirical study of predictive powers of simple attachment schemes for post-modifier prepositional phrases", "authors": [ { "first": "Greg", "middle": [], "last": "Whittemore", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Ferrara", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Brunner", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 28th Annual Meeting", "volume": "", "issue": "", "pages": "23--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Whittemore, Greg, Kathleen Ferrara, and Hans Brunner. 1990. Empirical study of predictive powers of simple attachment schemes for post-modifier prepositional phrases. Proceedings of the 28th Annual Meeting, pages 23-30. 
Association for Computational Linguistics.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "A learning criterion for stochastic rules", "authors": [ { "first": "Kenji", "middle": [], "last": "Yamanishi", "suffix": "" } ], "year": 1992, "venue": "Machine Learning", "volume": "9", "issue": "", "pages": "165--203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yamanishi, Kenji. 1992. A learning criterion for stochastic rules. Machine Learning, 9:165-203.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Word-sense disambiguation using statistical models of Roger's categories trained on large corpora", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the fourteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "454--460", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarowsky, David. 1992. Word-sense disambiguation using statistical models of Roger's categories trained on large corpora. Proceedings of the fourteenth International Conference on Computational Linguistics, pages 454-460.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 32nd Annual Meeting", "volume": "", "issue": "", "pages": "88--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarowsky, David. 1994. Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French. Proceedings of the 32nd Annual Meeting, pages 88-95. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "text": "; Figures 4-6 show three of these. For example,", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "that M defines a conditional probability distribution PM(n I v,r) as follows: For any noun that is in the tree cut, such as bee, the probability is given as explicitly specified by the model, i.e., PM(bee I flY, argl) = 0.2. For any class in the tree cut, the probability is distributed uniformly to all nouns dominated by it. For example, since there are four nouns that fall under the class BIRD, and swallow is one of them, the probability of swallow is thus given by Pt~(swallow I flY, argl) = 0.8/4 = 0.2. Note that the probabilities assigned to the nouns under BIRD are smoothed, even if the nouns have different observed frequencies.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "Figure 6", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "Figure 8illustrates how the algorithm works (on the co-occurrence data shown at the bottom): In the recursive application of Find-MDL on the subtree rooted at AIRPLANE, the if-clause on line 9 evaluates to true since L'([AIRPLANE]) = 32.27, L'(~et, helicopter, airplane]) = 32.72, and hence [AIRPLANE] is returned. Then in the call to Find-MDL on the subtree rooted at ARTIFACT, the same if-clause evaluates to false since L'([VEHICLE, AIRPLANE]) = 40.97, L'([ARTIFACT]) = 41.09, and hence [VEHICLE, AIRPLANE] is returned.", "uris": null, "num": null }, "FIGREF5": { "type_str": "figure", "text": "argn~x(P(M). P(S I M)) (15) where P(M) denotes the prior probability of the model M and P(S [ M) the probability of observing the data S given M. 
Equivalently, M satisfies M̂ = argmin_M (-log P(M) - log P(S | M)). (16) This is equivalent to the MDL estimate, if we take -log P(M) to be the model description length. Interpreting -log P(M) as the model description length translates, in Bayesian estimation, to assigning larger prior probabilities to simpler models, since it is equivalent to assuming that P(M) = (1/2)^{l(M)}, where l(M) is the description length of M. (Note that if we assign uniform prior probability P(M) to all models M, then (15)", "uris": null, "num": null }, "FIGREF8": { "type_str": "figure", "text": "Accuracy-coverage curves for MDL, SA, and LA.", "uris": null, "num": null }, "TABREF0": { "content": "
verb  slot_name  slot_value
fly  arg1  bee
fly  arg1  bird
fly  arg1  bird
fly  arg1  crow
fly  arg1  bird
fly  arg1  eagle
fly  arg1  bee
fly  arg1  eagle
fly  arg1  bird
fly  arg1  crow
[Figure: frequency ("Freq.") histogram over the nouns swallow, crow, eagle, bird, bug, bee, insect]
", "type_str": "table", "num": null, "text": "Example (verb, slot_name, slot_value) triple data.", "html": null }, "TABREF2": { "content": "
Γ  L(θ̂ | Γ)  L(S | Γ, θ̂)  L'(Γ)
[ANIMAL]  0  28.07  28.07
[BIRD, INSECT]  1.66  26.39  28.05
[BIRD, bug, bee, insect]  4.98  23.22  28.20
[swallow, crow, eagle, bird, INSECT]  6.64  22.39  29.03
[swallow, crow, eagle, bird, bug, bee, insect]  9.97  19.22  29.19
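These figures can be reproduced by hand. For the cut [BIRD, INSECT], assuming base-2 logarithms, the ten observed triples listed above, and a parameter description length of (k/2) log |S| with k free class probabilities (a worked check consistent with the values in this table, not quoted from the paper's text):
$$ L(\hat{\theta} \mid \Gamma) = \frac{k}{2}\log_2 |S| = \frac{1}{2}\log_2 10 \approx 1.66 $$
$$ L(S \mid \Gamma, \hat{\theta}) = -\sum_{i=1}^{10}\log_2 \hat{P}(n_i) = -\Bigl(8\log_2\tfrac{0.8}{4} + 2\log_2\tfrac{0.2}{3}\Bigr) \approx 18.58 + 7.81 = 26.39 $$
$$ L'(\Gamma) \approx 1.66 + 26.39 = 28.05 $$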
Table 5
Generalization result.
verb  slot_name  slot_value  probability
fly  arg1  BIRD  0.8
fly  arg1  INSECT  0.2
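As noted in the model description above, each class probability is spread uniformly over the nouns the class dominates, so an individual (possibly unseen) noun such as swallow, one of the four nouns under BIRD, receives
$$ \hat{P}(\mathrm{swallow} \mid \mathit{fly}, \mathrm{arg1}) = \frac{0.8}{4} = 0.2 . $$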
Here we let t denote a thesaurus (sub)tree, root(t) the root of the tree t.
Initially t is set to the entire tree.
The co-occurrence data for the case slot are also given as input to the algorithm.
algorithm Find-MDL(t) := cut
1.if
2.t is a leaf node
3.then
4.return([t])
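The lines above give only the leaf case of Find-MDL. As described in the text accompanying Figure 8, the recursive case computes the optimal cut for each child subtree, concatenates these cuts, and returns [root(t)] instead whenever that single-class cut has the smaller total description length L'. The following is a minimal runnable sketch of that recursion (not the authors' implementation; the toy tree, the counts, and the L' computation are illustrative assumptions based on the fly/arg1 example):

import math

def make(name, *children):
    return {"name": name, "children": list(children)}

# toy thesaurus fragment and observed (fly, arg1) counts from the running example
tree = make("ANIMAL",
            make("BIRD", make("swallow"), make("crow"), make("eagle"), make("bird")),
            make("INSECT", make("bug"), make("bee"), make("insect")))
counts = {"bee": 2, "bird": 4, "crow": 2, "eagle": 2}
N = sum(counts.values())                                   # 10 observations in total

def leaves(t):
    if not t["children"]:
        return [t["name"]]
    return [w for c in t["children"] for w in leaves(c)]

def dl(cut):
    # L'(cut): parameter cost (k/2)*log2 N plus data cost -sum_i log2 P(n_i),
    # where a class's probability is spread uniformly over the nouns it dominates
    total = (len(cut) - 1) / 2 * math.log2(N)
    for t in cut:
        ws = leaves(t)
        freq = sum(counts.get(w, 0) for w in ws)
        if freq:
            total += -freq * math.log2(freq / N / len(ws))
    return total

def find_mdl(t):
    if not t["children"]:                                  # leaf: the only possible cut is [t]
        return [t]
    cut = [n for c in t["children"] for n in find_mdl(c)]  # concatenate children's optimal cuts
    return [t] if dl([t]) <= dl(cut) else cut              # keep whichever cut is cheaper

print([n["name"] for n in find_mdl(tree)])                 # -> ['BIRD', 'INSECT']

On this data the sketch returns [BIRD, INSECT], in agreement with the generalization result in Table 5.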
", "type_str": "table", "num": null, "text": "Description length of the five tree cut models.", "html": null }, "TABREF3": { "content": "
[Figure: thesaurus tree over the nouns swallow, crow, eagle, bird, bug, bee, insect, car, bike, jet, helicopter, airplane, annotated with description lengths L'([ARTIFACT]) = 41.09, L'([VEHICLE, AIRPLANE]) = 40.97, L'([AIRPLANE]) = 32.27, L'([jet, helicopter, airplane]) = 32.72]
", "type_str": "table", "num": null, "text": "5A' C\"I .........", "html": null }, "TABREF4": { "content": "", "type_str": "table", "num": null, "text": "", "html": null }, "TABREF7": { "content": "
Class  Probability  Example Words
Direct Object of eat
(food,nutrient)  0.39  pizza, egg
(life_form,organism,being,living_thing)  0.11  lobster, horse
(measure,quantity,amount,quantum)  0.10  amount
(artifact,article,artefact)  0.08  rope
Direct Object of buy
(object,inanimate-object,physical-object)  0.30  computer, painting
(asset)  0.10  stock, share
(group,grouping)  0.07  company, bank
(legal_document,legal_instrument,official_document, ...)  0.05  security, ticket
Direct Object of fly
(entity)  0.35  airplane, flag, executive
(linear_measure,long_measure)  0.28  mile
(group,grouping)  0.08  delegation
Direct Object of operate
(group,grouping)  0.13  company, fleet
(act,human_action,human_activity)  0.13  flight, operation
(structure,construction)  0.12  center
(abstraction)  0.11  service, unit
(possession)  0.06  profit, earnings
", "type_str": "table", "num": null, "text": "Examples of generalization results.", "html": null }, "TABREF8": { "content": "
Verb  CPU Time (seconds)  Average Number of Generalized Levels
eat  1.00  5.2
buy  0.66  4.6
fly  1.11  6.0
operate  0.90  5.0
Average  0.92  5.2
", "type_str": "table", "num": null, "text": "Required computation time and number of generalized levels.", "html": null }, "TABREF9": { "content": "", "type_str": "table", "num": null, "text": "", "html": null }, "TABREF10": { "content": "
Training Data
Average number of doubles per data set  91218.1
Average number of triples per data set  91218.1
Average number of quadruples per data set  21656.6
Test Data
Average number of quadruples per data set  820.4
", "type_str": "table", "num": null, "text": "Number of different types of data.", "html": null }, "TABREF11": { "content": "
Method  Coverage (%)  Accuracy (%)
Default  100  56.2
MDL + Default  100  82.2
SA + Default  100  76.7
LA + Default  100  80.7
LA.t + Default  100  78.1
TEL  100  82.4
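In the rows above, "+ Default" indicates that the method falls back to a default attachment on the cases it leaves undecided, which is why coverage is 100%. A schematic sketch of such a decision rule (an illustration only, not the paper's exact procedure; the probability values are hypothetical):

# Illustrative sketch: decide the attachment site of a (prep, noun2) pair in a
# (verb, noun1, prep, noun2) quadruple by comparing the two acquired case slot
# probabilities, and fall back to a default site when the comparison is uninformative.
def attach(p_prep_noun2_given_verb: float, p_prep_noun2_given_noun1: float,
           default: str = "noun") -> str:
    if p_prep_noun2_given_verb > p_prep_noun2_given_noun1:
        return "verb"
    if p_prep_noun2_given_noun1 > p_prep_noun2_given_verb:
        return "noun"
    return default  # e.g., both estimates zero or equal: use the default attachment site

print(attach(0.17, 0.05))   # -> "verb"  (hypothetical probabilities)
print(attach(0.0, 0.0))     # -> "noun"  (undecided: default)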
", "type_str": "table", "num": null, "text": "Results of PP-attachment disambiguation.", "html": null }, "TABREF12": { "content": "
Input
Verb  Preposition  Noun  Frequency
protect  against  accusation  1
protect  against  damage  1
protect  against  decline  1
protect  against  drop  1
protect  against  loss  1
protect  against  resistance  1
protect  against  squall  1
protect  against  vagary  1
Generalization Result of MDL
Verb  Preposition  Noun Class  Probability
protect  against  (act,human_action,human_activity)  0.212
protect  against  (phenomenon)  0.170
protect  against  (psychological_feature)  0.099
protect  against  (event)  0.097
protect  against  (abstraction)  0.093
Generalization Result of SA
Verb  Preposition  Noun Class  SA
protect  against  (caprice,impulse,vagary,whim)  1.528
protect  against  (phenomenon)  0.899
protect  against  (happening,occurrence,natural_event)  0.339
protect  against  (deterioration,worsening,decline,declination)  0.285
protect  against  (act,human_action,human_activity)  0.260
protect  against  (drop,bead,pearl)  0.202
protect  against  (drop)  0.202
protect  against  (descent,declivity,fall,decline,downslope)  0.188
protect  against  (resistor,resistance)  0.130
protect  against  (underground,resistance)  0.130
protect  against  (immunity,resistance)  0.124
protect  against  (resistance,opposition)  0.111
protect  against  (loss,deprivation)  0.105
protect  against  (loss)  0.096
protect  against  (cost,price,terms,damage)  0.052
", "type_str": "table", "num": null, "text": "Example generalization results for SA and MDL.", "html": null }, "TABREF13": { "content": "", "type_str": "table", "num": null, "text": "", "html": null } } } }