{ "paper_id": "P95-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:33:59.221887Z" }, "title": "A Quantitative Evaluation of Linguistic Tests for the Automatic Prediction of Semantic Markedness", "authors": [ { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University New York", "location": { "postCode": "10027", "region": "N.Y" } }, "email": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University New York", "location": { "postCode": "10027", "region": "N.Y" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a corpus-based study of methods that have been proposed in the linguistics literature for selecting the semantically unmarked term out of a pair of antonymous adjectives. Solutions to this problem are applicable to the more general task of selecting the positive term from the pair. Using automatically collected data, the accuracy and applicability of each method is quantified, and a statistical analysis of the significance of the results is performed. We show that some simple methods are indeed good indicators for the answer to the problem while other proposed methods fail to perform better than would be attributable to chance. In addition, one of the simplest methods, text frequency, dominates all others. We also apply two generic statistical learning methods for combining the indications of the individual methods, and compare their performance to the simple methods. The most sophisticated complex learning method offers a small, but statistically significant, improvement over the original tests.", "pdf_parse": { "paper_id": "P95-1027", "_pdf_hash": "", "abstract": [ { "text": "We present a corpus-based study of methods that have been proposed in the linguistics literature for selecting the semantically unmarked term out of a pair of antonymous adjectives. Solutions to this problem are applicable to the more general task of selecting the positive term from the pair. Using automatically collected data, the accuracy and applicability of each method is quantified, and a statistical analysis of the significance of the results is performed. We show that some simple methods are indeed good indicators for the answer to the problem while other proposed methods fail to perform better than would be attributable to chance. In addition, one of the simplest methods, text frequency, dominates all others. We also apply two generic statistical learning methods for combining the indications of the individual methods, and compare their performance to the simple methods. The most sophisticated complex learning method offers a small, but statistically significant, improvement over the original tests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The concept of markedness originated in the work of Prague School linguists (Jakobson, 1984a) and refers to relationships between two complementary or antonymous terms which can be distinguished by the presence or absence of a feature (+A versus --A). Such an opposition can occur at various linguistic levels. For example, a markedness contrast can arise at the morphology level, when one of the two words is derived from the other and therefore contains an explicit formal marker such as a prefix; e.g., profitable-unprofitable. 
Markedness contrasts also appear at the semantic level in many pairs of gradable antonymous adjectives, especially scalar ones (Levinson, 1983), such as tall-short. The marked and unmarked elements of such pairs function in different ways. The unmarked adjective (e.g., tall) can be used in how-questions to refer to the property described by both adjectives in the pair (e.g., height), but without any implication about the modified item relative to the norm for the property. For example, the question How tall is Jack? can be answered equally well by four or seven feet. In contrast, the marked element of the opposition cannot be used generically; when used in a how-question, it implies a presupposition of the speaker regarding the relative position of the modified item on the adjectival scale. Thus, the corresponding question using the marked term of the opposition (How short is Jack?)", "cite_spans": [ { "start": 76, "end": 93, "text": "(Jakobson, 1984a)", "ref_id": "BIBREF13" }, { "start": 658, "end": 674, "text": "(Levinson, 1983)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "conveys an implication on the part of the speaker that Jack is indeed short; the distinguishing feature A expresses this presupposition. While markedness has been described in terms of a distinguishing feature A, its definition does not specify the type of this feature. Consequently, several different types of features have been employed, which has led to some confusion about the meaning of the term markedness. Following Lyons (1977), we distinguish between formal markedness, where the opposition occurs at the morphology level (i.e., one of the two terms is derived from the other through inflection or affixation), and semantic markedness, where the opposition occurs at the semantic level as in the example above. When two antonymous terms are also morphologically related, the formally unmarked term is usually also the semantically unmarked one (for example, clear-unclear). However, this correlation is not universal; consider the examples unbiased-biased and independent-dependent.", "cite_spans": [ { "start": 417, "end": 439, "text": "Following Lyons (1977)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In any case, semantic markedness is the more interesting of the two and the harder to determine, both for humans and computers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Various tests for determining markedness in general have been proposed by linguists (see Section 3). However, although potentially automatic versions of some of these have been successfully applied to the problem at the phonology level (Greenberg, 1966), little work has been done on the empirical validation or the automatic application of those tests at higher levels (but see (Kučera, 1982) for an empirical analysis of a proposed markedness test at the syntactic level; some more narrowly focused empirical work has also been done on markedness in second language acquisition). In this paper we analyze the performance of several linguistic tests for the selection of the semantically unmarked term out of a pair of gradable antonymous adjectives. We describe a system that automatically extracts the relevant data for these tests from text corpora and corpora-based databases, and use this system to measure the applicability and accuracy of each method. 
We apply statistical tests to determine the significance of the results, and then discuss the performance of complex predictors that combine the answers of the linguistic tests according to two general statistical learning methods, decision trees and log-linear regression models.", "cite_spans": [ { "start": 236, "end": 252, "text": "(Greenberg, 1966)", "ref_id": "BIBREF8" }, { "start": 379, "end": 393, "text": "(Kučera, 1982)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of our work is twofold: First, we are interested in providing hard, quantitative evidence on the performance of markedness tests already proposed in the linguistics literature. Such tests are based on intuitive observations and/or particular theories of semantics, but their accuracy has not been measured on actual data. The results of our analysis can be used to substantiate theories which are compatible with the empirical evidence, and thus offer insight into the complex linguistic phenomenon of antonymy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "The second purpose of our work concerns practical applications. The semantically unmarked term is almost always the positive term of the opposition (Boucher and Osgood, 1969); e.g., high is positive, while low is negative. Therefore, an automatic method for determining markedness values can also be used to determine the polarity of antonyms. The work reported in this paper helps clarify which types of data and tests are useful for such a method and which are not.", "cite_spans": [ { "start": 142, "end": 168, "text": "(Boucher and Osgood, 1969)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "The need for an automatic corpus-based method for the identification of markedness becomes apparent when we consider the high number of adjectives in unrestricted text and the domain-dependence of markedness values. In the MRC Psycholinguistic Database (Coltheart, 1981), a large machine-readable annotated word list, 25,547 of the 150,837 entries (16.94%) are classified as adjectives, not including past participles; if we only consider regularly used grammatical categories for each word, the percentage of adjectives rises to 22.97%. For comparison, nouns (the largest class) account for 51.28% and 57.47% of the words under the two criteria. In addition, while adjectives tend to have prevalent markedness and polarity values in the language at large, frequently these values are negated in specific domains or contexts. For example, healthy is in most contexts the unmarked member of the opposition healthy:sick; but in a hospital setting, sickness rather than health is expected, so sick becomes the unmarked term. The methods we describe are based on the form of the words and their overall statistical properties, and thus cannot predict specific occurrences of markedness reversals. But they can predict the prevalent markedness value for each adjective in a given domain, something which is impractical to do by hand separately for each domain.", "cite_spans": [ { "start": 253, "end": 270, "text": "(Coltheart, 1981)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "We have built a large system for the automatic, domain-dependent classification of adjectives according to semantic criteria. 
The first phase of our system (Hatzivassiloglou and McKeown, 1993) separates adjectives into groups of semantically related ones. We extract markedness values according to the methods described in this paper and use them in subsequent phases of the system that further analyze these groups and determine their scalar structure.", "cite_spans": [ { "start": 156, "end": 192, "text": "(Hatzivassiloglou and McKeown, 1993)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "An automatic method for extracting polarity information would also be useful for the augmentation of lexico-semantic databases such as WordNet (Miller et al., 1990), particularly when the method accounts for the specificities of the domain sublanguage; an increasing number of NLP systems rely on such databases (e.g., (Resnik, 1993; Knight and Luk, 1994)). Finally, knowledge of polarity can be combined with corpus-based collocation extraction methods (Smadja, 1993), for example to produce entries for lexical functions such as MAGN (\"magnify\") in Meaning-Text Theory (Mel'čuk and Pertsov, 1987). Most of the markedness tests proposed in the linguistics literature are based on qualitative semantic criteria (Greenberg, 1966; Lyons, 1977; Waugh, 1982; Lehrer, 1985; Ross, 1987; Lakoff, 1987) and are not suitable for computer implementation. However, some proposed tests refer to comparisons between measurable properties of the words in question and are amenable to full automation. These tests are:", "cite_spans": [ { "start": 143, "end": 164, "text": "(Miller et al., 1990)", "ref_id": "BIBREF24" }, { "start": 320, "end": 334, "text": "(Resnik, 1993;", "ref_id": "BIBREF27" }, { "start": 335, "end": 356, "text": "Knight and Luk, 1994)", "ref_id": "BIBREF15" }, { "start": 456, "end": 470, "text": "(Smadja, 1993)", "ref_id": null }, { "start": 571, "end": 598, "text": "(Mel'čuk and Pertsov, 1987)", "ref_id": "BIBREF23" }, { "start": 710, "end": 727, "text": "(Greenberg, 1966;", "ref_id": "BIBREF8" }, { "start": 728, "end": 740, "text": "Lyons, 1977;", "ref_id": null }, { "start": 741, "end": 753, "text": "Waugh, 1982;", "ref_id": "BIBREF35" }, { "start": 754, "end": 767, "text": "Lehrer, 1985;", "ref_id": "BIBREF18" }, { "start": 768, "end": 779, "text": "Ross, 1987;", "ref_id": "BIBREF28" }, { "start": 780, "end": 793, "text": "Lakoff, 1987)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "1. Text frequency. Since the unmarked term can appear in more contexts than the marked one, and has both general and specific senses, it should appear more frequently in text than the marked term (Greenberg, 1966).", "cite_spans": [ { "start": 199, "end": 216, "text": "(Greenberg, 1966)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "A formal markedness relationship (i.e., a morphology relationship between the two words), whenever it exists, should be an excellent predictor for semantic markedness (Greenberg, 1966; Zwicky, 1978).", "cite_spans": [ { "start": 167, "end": 184, "text": "(Greenberg, 1966;", "ref_id": "BIBREF8" }, { "start": 185, "end": 198, "text": "Zwicky, 1978)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Formal markedness.", "sec_num": "2." }, { "text": "3. Formal complexity. Since the unmarked word is the more general one, it should also be morphologically the simpler (Jakobson, 1962; Battistella, 1990).", "cite_spans": [ { "start": 117, "end": 133, "text": "(Jakobson, 1962;", "ref_id": null }, { "start": 134, "end": 152, "text": "Battistella, 1990)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Formal complexity.", "sec_num": "3."
}, { "text": "ciple (Zipf, 1949) supports this claim. Note that this test subsumes test (2).", "cite_spans": [ { "start": 6, "end": 18, "text": "(Zipf, 1949)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Formal markedness.", "sec_num": "2." }, { "text": "being more general and frequently used to describe the whole scale, should be freer to combine with other linguistic elements (Winters, 1990; Battistella, 1990) .", "cite_spans": [ { "start": 126, "end": 141, "text": "(Winters, 1990;", "ref_id": "BIBREF36" }, { "start": 142, "end": 160, "text": "Battistella, 1990)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Morphological produclivity. Unmarked words,", "sec_num": "4." }, { "text": "Unmarked terms should exhibit higher differentiation with more subdistinetions (Jakobson, 1984b ) (e.g., the present tense (unmarked) appears in a greater variety of forms than the past), or, equivalently, the marked term should lack some subcategories (Greenberg, 1966) . The first of the above tests compares the text frequencies of the two words, which are clearly measurable and easily retrievable from a corpus. We use the one-million word Brown corpus of written American English (Ku~era and Francis, 1967) for this purpose. The mapping of the remaining tests to quantifiable variables is not as immediate. We use the length of a word in characters, which is a reasonable indirect index of morphological complexity, for tests 2and 3. This indicator is exact for the case of test (2), since the formally marked word is derived from the unmarked one through the addition of an affix (which for adjectives is always a prefix). The number of syllables in a word is another reasonable indicator of morphological complexity that we consider, although it is much harder to compute automatically than word length.", "cite_spans": [ { "start": 79, "end": 95, "text": "(Jakobson, 1984b", "ref_id": "BIBREF13" }, { "start": 253, "end": 270, "text": "(Greenberg, 1966)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Differentialion.", "sec_num": "5." }, { "text": "For morphological productivity (test (4)), we measure several variables related to the freedom of the word to receive affixes and to participate in compounds. Several distinctions exist for the definition of a variable that measures the number of words that are morphologically derived from a given word. These distinctions involve: Q Whether to consider the number of distinct words in this category (types) or the total frequency of these words (tokens).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differentialion.", "sec_num": "5." }, { "text": "\u2022 Whether to separate words derived through affixation from compounds or combine these types of morphological relationships.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differentialion.", "sec_num": "5." }, { "text": "\u2022 If word types (rather than word frequencies) are measured, we can select to count homographs (words identical in form but with different parts of speech, e.g., light as an adjective and light as a verb) as distinct types or map all homographs of the same word form to the same word type. Finally, the differentiation test (5) is the one general markedness test that cannot be easily mapped into observable properties of adjectives. 
Somewhat arbitrarily, we mapped this test to the number of grammatical categories (parts of speech) that each word can appear under, postulating that the unmarked term should have a higher such number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differentiation.", "sec_num": "5." }, { "text": "The various ways of measuring the quantities compared by the tests discussed above lead to the consideration of 32 variables. Since some of these variables are closely related and their number is so high that it impedes the task of modeling semantic markedness in terms of them, we combined several of them, keeping 14 variables for the statistical analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differentiation.", "sec_num": "5." }, { "text": "In order to measure the performance of the markedness tests discussed in the previous section, we collected a fairly large sample of pairs of antonymous gradable adjectives that can appear in how-questions. The Deese antonyms (Deese, 1964) are the prototypical collection of pairs of antonymous adjectives that have been used for similar analyses in the past (Deese, 1964; Justeson and Katz, 1991; Grefenstette, 1992). However, this collection contains only 75 adjectives in 40 pairs, some of which cannot be used in our study either because they are primarily adverbials (e.g., inside-outside) or not gradable (e.g., alive-dead). Unlike previous studies, the nature of the statistical analysis reported in this paper requires a higher number of pairs. Consequently, we augmented the Deese set with the set of pairs used in the largest previous manual study of markedness in adjective pairs (Lehrer, 1985). In addition, we included all gradable adjectives which appear 50 times or more in the Brown corpus and have at least one gradable antonym; the antonyms were not restricted to belong to this set of frequent adjectives. For each adjective collected according to this last criterion, we included all the antonyms (frequent or not) that were explicitly listed in the Collins COBUILD dictionary (Sinclair, 1987) for each of its senses. This process gave us a sample of 449 adjectives (both frequent and infrequent ones) in 344 pairs. (The collection method is similar to Deese's: he also started from frequent adjectives but used human subjects to elicit antonyms instead of a dictionary.)", "cite_spans": [ { "start": 225, "end": 238, "text": "(Deese, 1964)", "ref_id": "BIBREF5" }, { "start": 357, "end": 370, "text": "(Deese, 1964;", "ref_id": "BIBREF5" }, { "start": 371, "end": 395, "text": "Justeson and Katz, 1991;", "ref_id": "BIBREF14" }, { "start": 396, "end": 414, "text": "Grefenstette, 1992", "ref_id": "BIBREF9" }, { "start": 890, "end": 904, "text": "(Lehrer, 1985)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "4" }, { "text": "We separated the pairs on the basis of the how-test into those that contain one semantically unmarked and one marked term and those that contain two marked terms (e.g., fat-thin), removing the latter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "4" }, { "text": "For the remaining pairs, we identified the unmarked member, using existing designations (Lehrer, 1985) whenever that was possible; when in doubt, the pair was dropped from further consideration. We also separated the pairs into two groups according to whether the two adjectives in each pair were morphologically related or not. This allowed us to study the different behavior of the tests for the two groups separately. Table 1 shows the results of this cross-classification of the adjective pairs.", "cite_spans": [ { "start": 88, "end": 102, "text": "(Lehrer, 1985)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 421, "end": 428, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data Collection", "sec_num": "4" }, 
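A minimal sketch, not the authors' code, of how the measurable tests of Section 3 reduce to signed comparisons within a pair. The record fields (frequency, syllables, n_pos) and the example numbers are hypothetical stand-ins for the measurements extracted from the Brown corpus and the MRC database; the direction of each comparison follows the tests above (higher frequency and more parts of speech suggest the unmarked term, while shorter and morphologically simpler forms do).

```python
# Sketch of the simple pairwise markedness indicators; field names are illustrative.
from dataclasses import dataclass

@dataclass
class Adjective:
    word: str
    frequency: int   # text frequency (hypothetical count)
    syllables: int   # number of syllables
    n_pos: int       # number of parts of speech the form appears under

def compare(a: Adjective, b: Adjective) -> dict:
    """Per indicator, return the adjective judged unmarked (None = test abstains)."""
    def vote(x, y, higher_is_unmarked=True):
        if x == y:
            return None                       # zero difference: not applicable
        wins = (x > y) if higher_is_unmarked else (x < y)
        return a.word if wins else b.word

    return {
        "frequency": vote(a.frequency, b.frequency),           # test 1
        "length": vote(len(a.word), len(b.word), False),       # tests 2-3
        "syllables": vote(a.syllables, b.syllables, False),    # tests 2-3
        "pos_count": vote(a.n_pos, b.n_pos),                   # test 5
    }

# The indicators need not agree, which is why their accuracy is evaluated separately.
print(compare(Adjective("tall", 150, 1, 1), Adjective("short", 98, 1, 3)))
```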
{ "text": "Our next step was to measure the variables described in Section 3 which are used in the various tests for semantic markedness. For these measurements, we used the MRC Psycholinguistic Database (Coltheart, 1981), which contains a variety of measures for 150,837 entries counting different parts of speech or inflected forms as different words (115,331 distinct words). We implemented an extractor program to collect the relevant measurements for the adjectives in our sample, namely text frequency, number of syllables, word length, and number of parts of speech. All this information except the number of syllables can also be automatically extracted from the corpus. The extractor program also computes information that is not directly stored in the MRC database. Affixation rules from (Quirk et al., 1985) are recursively employed to check whether each word in the database can be derived from each adjective, and counts and frequencies of such derived words and compounds are collected. Overall, 32 measurements are computed for each adjective, and are subsequently combined into the 14 variables used in our study. Finally, the variables for the pairs are computed as the differences between the corresponding variables for the adjectives in each pair. The output of this stage is a table, with two strata corresponding to the two groups, and containing measurements on 14 variables for the 279 pairs with a semantically unmarked member.", "cite_spans": [ { "start": 194, "end": 211, "text": "(Coltheart, 1981)", "ref_id": "BIBREF4" }, { "start": 787, "end": 807, "text": "(Quirk et al., 1985)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "4" }, { "text": "For each of the variables, we measured how many pairs in each group it classified correctly. A positive (negative) value indicates that the first (second) adjective is the unmarked one, except for two variables (word length and number of syllables) where the opposite is true. When the difference is zero, the variable selects neither the first nor the second adjective as unmarked. The percentage of nonzero differences, which correspond to cases where the test actually suggests a choice, is reported as the applicability of the variable. For the purpose of evaluating the accuracy of the variable, we assign such cases randomly to one of the two possible outcomes in accordance with common practice in classification (Duda and Hart, 1973).", "cite_spans": [ { "start": 715, "end": 736, "text": "(Duda and Hart, 1973)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Linguistic Tests", "sec_num": null }, { "text": "For each variable and each of the two groups, we also performed a statistical test of the null hypothesis that its true accuracy is 50%, i.e., equal to the expected accuracy of a random binary classifier. Under the null hypothesis, the number of correct responses follows a binomial distribution with parameter p = 0.5. Since all obtained measurements of accuracy were higher than 50%, any rejection of the null hypothesis implies that the corresponding test is significantly better than chance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Linguistic Tests", "sec_num": null }, 
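To make the significance test concrete: under the null hypothesis the number of correctly classified pairs is Binomial(n, 0.5), so the reported P-value is an upper-tail binomial probability. The sketch below uses only the standard library; the counts are invented for illustration and are not figures from the paper.

```python
# Exact upper-tail binomial test of a variable's accuracy against chance (p = 0.5).
from math import comb

def binomial_upper_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_pairs, n_correct = 211, 170          # hypothetical counts for one variable
accuracy = n_correct / n_pairs
p_value = binomial_upper_tail(n_correct, n_pairs)
print(f"accuracy = {accuracy:.2%}, P-value = {p_value:.2e}")
```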
{ "text": "Table 2 summarizes the values obtained for some of the 14 variables in our data and reveals some surprising facts about their performance. The frequency of the adjectives is the best predictor in both groups, achieving an overall accuracy of 80.64% with high applicability (98.5-99%). This is all the more remarkable in the case of the morphologically related adjectives, where frequency outperforms length of the words; recall that the latter directly encodes the formal markedness relationship, so frequency is able to correctly classify some of the cases where formal and semantic markedness values disagree. On the other hand, tests based on the \"economy of language\" principle, such as word length and number of syllables, perform badly when formal markedness relationships do not exist, with lower applicability and very low accuracy scores. The same can be said about the test based on the differentiation properties of the words (number of different parts of speech). In fact, for these three variables, the hypothesis of random performance cannot be rejected even at the 5% level. Tests based on the productivity of the words, as measured through affixation and compounding, tend to fall in-between: their accuracy is generally significant, but their applicability is sometimes low, particularly for compounds.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of Linguistic Tests", "sec_num": null }, { "text": "Table 2: Evaluation of simple markedness tests. The probability of obtaining by chance performance equal to or better than the observed one is listed in the P-Value column for each test.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Linguistic Tests", "sec_num": null }, { "text": "While the frequency of the adjectives is the best single predictor, we would expect to gain accuracy by combining the answers of several simple tests. We consider the problem of determining semantic markedness as a classification problem with two possible outcomes (\"the first adjective is unmarked\" and \"the second adjective is unmarked\"). To design an appropriate classifier, we employed two general statistical supervised learning methods, which we briefly describe in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predictions Based on More than One Test", "sec_num": "6" }, { "text": "Decision trees (Quinlan, 1986) constitute the first statistical supervised learning paradigm that we explored. A popular method for the automatic construction of such trees is binary recursive partitioning, which constructs a binary tree in a top-down fashion. Starting from the root, the variable X which best discriminates among the possible outcomes is selected and a test of the form X < constant is associated with the root node of the tree. All training cases for which this test succeeds (fails) belong to the left (right) subtree of the decision tree. The method proceeds recursively, by selecting a new variable (possibly the same as in the parent node) and a new cutting point for each subtree, until all the cases in one subtree belong to the same category or the data becomes too sparse. When a node cannot be split further, it is labeled with the locally most probable category.", "cite_spans": [ { "start": 15, "end": 30, "text": "(Quinlan, 1986)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Predictions Based on More than One Test", "sec_num": "6" }, 
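A toy rendition of binary recursive partitioning may make the procedure concrete. It is not the authors' implementation: their classifier selects splits with a binomial maximum-likelihood criterion and later applies shrinking, whereas this sketch simply minimizes misclassification counts and stops when a node is pure or too small.

```python
# Toy binary recursive partitioning over pair-difference variables (labels 0/1).
def grow(rows, labels, min_size=2):
    """rows: list of feature vectors; labels: parallel list of 0/1 outcomes."""
    if len(set(labels)) == 1 or len(rows) < min_size:
        return {"leaf": max(set(labels), key=labels.count)}  # most probable category
    best = None
    for j in range(len(rows[0])):                  # candidate variable X_j
        for cut in sorted({r[j] for r in rows}):   # candidate cutting point
            left = [l for r, l in zip(rows, labels) if r[j] < cut]
            right = [l for r, l in zip(rows, labels) if r[j] >= cut]
            if not left or not right:
                continue
            errors = (len(left) - max(left.count(0), left.count(1)) +
                      len(right) - max(right.count(0), right.count(1)))
            if best is None or errors < best[0]:
                best = (errors, j, cut)
    if best is None:                               # no usable split remains
        return {"leaf": max(set(labels), key=labels.count)}
    _, j, cut = best
    lpairs = [(r, l) for r, l in zip(rows, labels) if r[j] < cut]
    rpairs = [(r, l) for r, l in zip(rows, labels) if r[j] >= cut]
    return {"var": j, "cut": cut,
            "left": grow([r for r, _ in lpairs], [l for _, l in lpairs], min_size),
            "right": grow([r for r, _ in rpairs], [l for _, l in rpairs], min_size)}

def predict(tree, row):
    """Trace a path from the root to a leaf and report the leaf's category."""
    while "leaf" not in tree:
        tree = tree["left"] if row[tree["var"]] < tree["cut"] else tree["right"]
    return tree["leaf"]

X = [(3.1, 0), (2.0, 1), (-1.0, 2), (-2.5, 1)]   # invented difference vectors
y = [1, 1, 0, 0]                                  # 1 = first adjective unmarked
print(predict(grow(X, y), (0.7, 1)))
```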
{ "text": "During prediction, a path is traced from the root of the tree to a leaf, and the category of the leaf is the category reported. If the tree is left to grow uncontrolled, it will exactly represent the training set (including its peculiarities and random variations), and will not be very useful for prediction on new cases. Consequently, the growing phase is terminated before the training samples assigned to the leaf nodes are entirely homogeneous. A technique that improves the quality of the induced tree is to grow a larger than optimal tree and then shrink it by pruning subtrees (Breiman et al., 1984). In order to select the nodes to shrink, we normally need to use new data that has not been used for the construction of the original tree.", "cite_spans": [ { "start": 585, "end": 607, "text": "(Breiman et al., 1984)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Predictions Based on More than One Test", "sec_num": "6" }, { "text": "In our classifier, we employ a maximum likelihood estimator based on the binomial distribution to select the optimal split at each node. During the shrinking phase, we optimally regress the probabilities of children nodes to their parent according to a shrinking parameter α (Hastie and Pregibon, 1990), instead of pruning entire subtrees. To select the optimal value for α, we initially held out a part of the training data. In a later version of the classifier, we employed cross-validation, separating our training data into 10 equally sized subsets and repeatedly training on 9 of them and validating on the other.", "cite_spans": [ { "start": 275, "end": 302, "text": "(Hastie and Pregibon, 1990)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Predictions Based on More than One Test", "sec_num": "6" }, { "text": "Log-linear regression (Santner and Duffy, 1989) is the second general supervised learning method that we explored. In classical linear modeling, the response variable y is modeled as y = b^T x + e, where b is a vector of weights, x is the vector of the values of the predictor variables, and e is an error term which is assumed to be normally distributed with zero mean and constant variance, independent of the mean of y. The log-linear regression model generalizes this setting to binomial sampling where the response variable follows a Bernoulli distribution (corresponding to a two-category outcome); note that the variance of the error term is not independent of the mean of y any more. The resulting generalized linear model (McCullagh and Nelder, 1989) employs a linear predictor η = b^T x + e as before, but the response variable y is non-linearly related to η through the inverse logit function, y = e^η / (1 + e^η). Note that y ∈ (0, 1); each of the two ends of that interval is associated with one of the possible choices.", "cite_spans": [ { "start": 22, "end": 47, "text": "(Santner and Duffy, 1989)", "ref_id": "BIBREF30" }, { "start": 725, "end": 753, "text": "(McCullagh and Nelder, 1989)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Predictions Based on More than One Test", "sec_num": "6" }, { "text": "We employ the iterative reweighted least squares algorithm (Baker and Nelder, 1989) to approximate the maximum likelihood estimate of the vector b, but first we explicitly drop the constant term (intercept) and most of the variables. The intercept is dropped because the prior probabilities of the two outcomes are known to be equal. (The order of the adjectives in the pairs is randomized before training the model, to ensure that both outcomes are equiprobable.)", "cite_spans": [ { "start": 59, "end": 83, "text": "(Baker and Nelder, 1989)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Predictions Based on More than One Test", "sec_num": "6" }, 
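A sketch of the generalized linear model just described. The paper fits b by iterative reweighted least squares; as a stand-in, this toy uses plain gradient ascent on the Bernoulli log-likelihood, which climbs toward the same maximum-likelihood estimate on well-behaved data. No intercept is used, mirroring the modeling choice above; the data below is invented for illustration.

```python
# Inverse-logit link and a toy maximum-likelihood fit (gradient ascent, not IRLS).
from math import exp

def inv_logit(eta: float) -> float:
    return 1.0 / (1.0 + exp(-eta))        # identical to e^eta / (1 + e^eta)

def fit(xs, ys, n_vars, lr=0.1, steps=2000):
    b = [0.0] * n_vars
    for _ in range(steps):
        for x, y in zip(xs, ys):
            eta = sum(bi * xi for bi, xi in zip(b, x))
            err = y - inv_logit(eta)       # gradient of the log-likelihood
            b = [bi + lr * err * xi for bi, xi in zip(b, x)]
    return b

xs = [(2.0, 0.5), (1.0, -0.2), (-1.5, 0.1), (-0.5, -1.0)]  # pair-difference variables
ys = [1, 1, 0, 0]                          # 1 = "the first adjective is unmarked"
b = fit(xs, ys, n_vars=2)
# fitted probabilities; classify by comparing to 0.5
print([round(inv_logit(sum(bi * xi for bi, xi in zip(b, x))), 2) for x in xs])
```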
{ "text": "Several of the variables are dropped to avoid overfitting (Duda and Hart, 1973); otherwise the regression model will use all available variables, unless some of them are linearly dependent. To identify which variables we should keep in the model, we use the analysis of deviance method with stepwise refinement of the model, iteratively adding or dropping one term if the reduction (increase) in the deviance compares favorably with the resulting loss (gain) in residual degrees of freedom. Using a fixed training set, six of the fourteen variables were selected for modeling the morphologically unrelated adjectives. Frequency was selected as the only component of the model for the morphologically related ones. We also examined the possibility of replacing some variables in these models by smoothing cubic B-splines (Wahba, 1990). The analysis of deviance for this model indicated that for the morphologically unrelated adjectives, one of the six selected variables should be removed altogether and another should be replaced by a smoothing spline.", "cite_spans": [ { "start": 59, "end": 80, "text": "(Duda and Hart, 1973)", "ref_id": "BIBREF6" }, { "start": 821, "end": 834, "text": "(Wahba, 1990)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Predictions Based on More than One Test", "sec_num": "6" }, { "text": "For both decision trees and log-linear regression, we repeatedly partitioned the data in each of the two groups into equally sized training and testing sets, constructed the predictors using the training sets, and evaluated them on the testing sets. This process was repeated 200 times, giving vectors of estimates for the performance of the various methods. The simple frequency test was also evaluated in each testing set for comparison purposes. From these vectors, we estimate the density of the distribution of the scores for each method; Figure 1 gives these densities for the frequency test and the log-linear model with smoothing splines on the most difficult case, the morphologically unrelated adjectives. Table 3 summarizes the performance of the methods on the two groups of adjective pairs. (The applicability of all complex methods was 100% in both groups.) In order to assess the significance of the differences between the scores, we performed a nonparametric sign test (Gibbons and Chakraborti, 1992) for each complex predictor against the simple frequency variable. The test statistic is the number of runs where the score of one predictor is higher than the other's; as is common in statistical practice, ties are broken by assigning half of them to each category. Under the null hypothesis of equal performance of the two methods that are contrasted, this test statistic follows the binomial distribution with p = 0.5.", "cite_spans": [ { "start": 987, "end": 1018, "text": "(Gibbons and Chakraborti, 1992)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 544, "end": 552, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 716, "end": 723, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of the Complex Predictors", "sec_num": null }, 
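The sign test just described reduces to another exact binomial computation: count the runs where the complex predictor beats frequency, split the ties evenly, and take a binomial tail probability. A minimal sketch follows; the win and tie counts are hypothetical, not results from the paper.

```python
# Nonparametric sign test over the 200 repeated train/test runs.
from math import comb

def sign_test_p(wins: int, ties: int, runs: int) -> float:
    """Upper-tail probability of >= k successes in `runs` fair coin flips."""
    k = wins + ties // 2          # ties split evenly between the two methods
    return sum(comb(runs, i) for i in range(k, runs + 1)) / 2**runs

# hypothetical outcome: 120 wins and 10 ties for the complex predictor
print(f"sign-test P-value = {sign_test_p(wins=120, ties=10, runs=200):.5f}")
```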
{ "text": "Table 3 includes the exact probabilities for obtaining the observed (or more extreme) values of the test statistic.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of the Complex Predictors", "sec_num": null }, { "text": "From the table, we observe that the tree-based methods perform considerably worse than frequency (significant at any conceivable level), even when cross-validation is employed. Both the standard and smoothed log-linear models outperform the frequency test on the morphologically unrelated adjectives (significant at the 5% and 0.1% levels respectively), while the log-linear model's performance is comparable to the frequency test's on the morphologically related adjectives. The best predictor overall is the smoothed log-linear model. (It should be noted that the independence assumption of the sign test is mildly violated in these repeated runs, since the scores depend on collections of independent samples from a finite population. This mild dependence will increase somewhat the probabilities under the true null distribution, but we can be confident that probabilities such as 0.08% will remain significant.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Complex Predictors", "sec_num": null }, { "text": "The above results indicate that the frequency test essentially contains almost all the information that can be extracted collectively from all linguistic tests. Consequently, even very sophisticated methods for combining the tests can offer only small improvement. Furthermore, the prominence of one variable can easily lead to overfitting the training data in the remaining variables. This causes the decision tree models to perform badly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Complex Predictors", "sec_num": null }, { "text": "We have presented a quantitative analysis of the performance of measurable linguistic tests for the selection of the semantically unmarked term out of a pair of antonymous adjectives. The analysis shows that a simple test, word frequency, outperforms more complicated tests, and also dominates them in terms of information content. Some of the tests that have been proposed in the linguistics literature, notably tests that are based on the formal complexity and differentiation properties of the words, fail to give any useful information at all, at least with the approximations we used for them (Section 3). On the other hand, tests based on morphological productivity are valid, although not as accurate as frequency. Naturally, the validity of our results depends on the quality of our measurements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": null }, { "text": "Table 3: Evaluation of the complex predictors. 
The probability of obtaining by chance a difference in performance relative to the simple frequency test equal to or larger than the observed one is listed in the P-Value column for each complex predictor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Complex Predictors", "sec_num": null }, { "text": "While for most of the variables our measurements are necessarily approximate, we believe that they are nevertheless of acceptable accuracy since (1) we used a representative corpus;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": null }, { "text": "(2) we selected both a large sample of adjective pairs and a large number of frequent adjectives to avoid sparse data problems;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": null }, { "text": "(3) the procedure of identifying secondary words for indirect measurements based on morphological productivity operates with high recall and precision; and (4) the mapping of the linguistic tests to comparisons of quantitative variables was in most cases straightforward, and always at least plausible. The analysis of the linguistic tests and their combinations has also led to a computational method for the determination of semantic markedness. The method is completely automatic and produces accurate results in 82% of the cases. We consider this performance reasonably good, especially since no previous automatic method for the task has been proposed. While we used a fixed set of 449 adjectives for our analysis, the number of adjectives in unrestricted text is much higher, as we noted in Section 2. This multitude of adjectives, combined with the dependence of semantic markedness on the domain, makes the manual identification of markedness values impractical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": null }, { "text": "In the future, we plan to expand our analysis to other classes of antonymous words, particularly verbs, which are notoriously difficult to analyze semantically (Levin, 1993). A similar methodology can be applied to identify unmarked (positive) versus marked (negative) terms in pairs such as agree:dissent.", "cite_spans": [ { "start": 159, "end": 172, "text": "(Levin, 1993)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": null } ], "back_matter": [ { "text": "This work was supported jointly by the Advanced Research Projects Agency and the Office of Naval Research under contract N00014-89-J-1782, and by the National Science Foundation under contract GER-90-24069. It was conducted under the auspices of the Columbia University CAT in High Performance Computing and Communications in Healthcare, a New York State Center for Advanced Technology supported by the New York State Science and Technology Foundation. 
We wish to thank Judith Klavans, Rebecca Passonneau, and the anonymous reviewers for providing us with useful comments on earlier versions of the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The GLIM System, Release 3: Generalized Linear Interactive Modeling", "authors": [ { "first": "R", "middle": [ "J" ], "last": "Baker", "suffix": "" }, { "first": "J", "middle": [ "A" ], "last": "Nelder", "suffix": "" } ], "year": 1989, "venue": "Numerical Algorithms Group", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. J. Baker and J. A. Nelder. 1989. The GLIM System, Release 3: Generalized Linear Interactive Modeling. Numerical Algorithms Group, Oxford.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Markedness: The Evaluative Superstructure of Language. State University of", "authors": [ { "first": "Edwin", "middle": [ "L" ], "last": "Battistella", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edwin L. Battistella. 1990. Markedness: The Eval- uative Superstructure of Language. State Univer- sity of New York Press, Albany, NY.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Polyanna hypothesis", "authors": [ { "first": "T", "middle": [], "last": "Boucher", "suffix": "" }, { "first": "C", "middle": [ "E" ], "last": "Osgood", "suffix": "" } ], "year": 1969, "venue": "Journal of Verbal Learning and Verbal Behavior", "volume": "8", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Boucher and C. E. Osgood. 1969. The Polyanna hypothesis. Journal of Verbal Learning and Verbal Behavior, 8:1-8.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Classification and Regression Trees", "authors": [ { "first": "J", "middle": [ "H" ], "last": "Leo Breiman", "suffix": "" }, { "first": "R", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "C", "middle": [ "J" ], "last": "Olshen", "suffix": "" }, { "first": "", "middle": [], "last": "Stone", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leo Breiman, J. H. Friedman, R. Olshen, and C. J. Stone. 1984. Classification and Regression Trees. Wadsworth International Group, Belmont, CA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The MRC Psycholinguistic Database", "authors": [ { "first": "M", "middle": [], "last": "Coltheart", "suffix": "" } ], "year": 1981, "venue": "Quarterly Journal of Experimental Psychology", "volume": "33", "issue": "", "pages": "497--505", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Coltheart. 1981. The MRC Psycholinguis- tic Database. Quarterly Journal of Experimental Psychology, 33A:497-505.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The associative structure of some common English adjectives", "authors": [ { "first": "James", "middle": [], "last": "Deese", "suffix": "" } ], "year": 1964, "venue": "Journal of Verbal Learning and Verbal Behavior", "volume": "3", "issue": "5", "pages": "347--357", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Deese. 1964. The associative structure of some common English adjectives. 
Journal of Verbal Learning and Verbal Behavior, 3(5):347-357.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Pattern Classification and Scene Analysis", "authors": [ { "first": "Richard", "middle": [ "O" ], "last": "Duda", "suffix": "" }, { "first": "Peter", "middle": [ "E" ], "last": "Hart", "suffix": "" } ], "year": 1973, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard O. Duda and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. Wiley, New York.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Nonparametric Statistical Inference", "authors": [ { "first": "Jean", "middle": [], "last": "Dickinson Gibbons", "suffix": "" }, { "first": "Subhabrata", "middle": [], "last": "Chakraborti", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean Dickinson Gibbons and Subhabrata Chakraborti. 1992. Nonparametric Statistical Inference. Marcel Dekker, New York, 3rd edition.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Language Universals", "authors": [ { "first": "Joseph", "middle": [ "H" ], "last": "Greenberg", "suffix": "" } ], "year": 1966, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph H. Greenberg. 1966. Language Universals. Mouton, The Hague.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Finding semantic similarity in raw text: The Deese antonyms", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 1992, "venue": "Probabilistic Approaches to Natural Language: Papers from the 1992 Fall Symposium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Grefenstette. 1992. Finding semantic similarity in raw text: The Deese antonyms. In Probabilistic Approaches to Natural Language: Papers from the 1992 Fall Symposium. AAAI.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Shrinking trees", "authors": [ { "first": "T", "middle": [], "last": "Hastie", "suffix": "" }, { "first": "D", "middle": [], "last": "Pregibon", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Hastie and D. Pregibon. 1990. Shrinking trees. Technical report, AT&T Bell Laboratories.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Towards the automatic identification of adjectival scales: Clustering adjectives according to meaning", "authors": [ { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "172--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasileios Hatzivassiloglou and Kathleen McKeown. 1993. Towards the automatic identification of adjectival scales: Clustering adjectives according to meaning. In Proceedings of the 31st Annual Meeting of the ACL, pages 172-182, Columbus, Ohio.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Zero sign (1939)
", "authors": [ { "first": "Roman", "middle": [], "last": "Jakobson", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "151--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roman Jakobson. 1984a. The structure of the Russian verb (1932). In Russian and Slavic Grammar Studies 1931-1981, pages 1-14. Mouton, Berlin. Roman Jakobson. 1984b. Zero sign (1939). In Russian and Slavic Grammar Studies 1931-1981, pages 151-160. Mouton, Berlin.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Cooccurrences of antonymous adjectives and their contexts", "authors": [ { "first": "John", "middle": [ "S" ], "last": "Justeson", "suffix": "" }, { "first": "Slava", "middle": [ "M" ], "last": "Katz", "suffix": "" } ], "year": 1991, "venue": "Computational Linguistics", "volume": "17", "issue": "1", "pages": "1--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "John S. Justeson and Slava M. Katz. 1991. Co-occurrences of antonymous adjectives and their contexts. Computational Linguistics, 17(1):1-19.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Building a large-scale knowledge base for machine translation", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Steve", "middle": [ "K" ], "last": "Luk", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 12th National Conference on Artificial Intelligence (AAAI-94)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Steve K. Luk. 1994. Building a large-scale knowledge base for machine translation. In Proceedings of the 12th National Conference on Artificial Intelligence (AAAI-94). AAAI.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Computational Analysis of Present-Day American English", "authors": [ { "first": "Henry", "middle": [], "last": "Kučera", "suffix": "" }, { "first": "Winthrop", "middle": [ "N" ], "last": "Francis", "suffix": "" } ], "year": 1967, "venue": "Proceedings of the Ninth International Conference on Computational Linguistics (COLING-82)", "volume": "", "issue": "", "pages": "167--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henry Kučera and Winthrop N. Francis. 1967. Computational Analysis of Present-Day American English. Brown University Press, Providence, RI. Henry Kučera. 1982. Markedness and frequency: A computational analysis. In Jan Horecky, editor, Proceedings of the Ninth International Conference on Computational Linguistics (COLING-82), pages 167-173, Prague. North-Holland.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Women, Fire, and Dangerous Things", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Lakoff. 1987. Women, Fire, and Dangerous Things. University of Chicago Press, Chicago.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Markedness and antonymy", "authors": [ { "first": "Adrienne", "middle": [], "last": "Lehrer", "suffix": "" } ], "year": 1985, "venue": "Journal of Linguistics", "volume": "31", "issue": "3", "pages": "397--429", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adrienne Lehrer. 1985. Markedness and antonymy. 
Journal of Linguistics, 31(3):397-429, September.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "English Verb Classes and Alternations: A Preliminary Investigation", "authors": [ { "first": "Beth", "middle": [], "last": "Levin", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beth Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press, Chicago.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Pragmatics", "authors": [ { "first": "Stephen", "middle": [ "C" ], "last": "Levinson", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen C. Levinson. 1983. Pragmatics. Cambridge University Press, Cambridge, England.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Generalized Linear Models", "authors": [ { "first": "Peter", "middle": [], "last": "Mccullagh", "suffix": "" }, { "first": "John", "middle": [ "A" ], "last": "Nelder", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter McCullagh and John A. Nelder. 1989. Generalized Linear Models. Chapman and Hall, London, 2nd edition.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Surface Syntax of English: a Formal Model within the Meaning-Text Framework", "authors": [ { "first": "Igor", "middle": [ "A" ], "last": "Mel'čuk", "suffix": "" }, { "first": "Nikolaj", "middle": [ "V" ], "last": "Pertsov", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Igor A. Mel'čuk and Nikolaj V. Pertsov. 1987. Surface Syntax of English: a Formal Model within the Meaning-Text Framework. Benjamins, Amsterdam and Philadelphia.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "WordNet: An on-line lexical database", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "R", "middle": [], "last": "Beckwith", "suffix": "" }, { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "D", "middle": [], "last": "Gross", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography (special issue)", "volume": "3", "issue": "4", "pages": "235--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. J. Miller. 1990. WordNet: An on-line lexical database. International Journal of Lexicography (special issue), 3(4):235-312.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Induction of decision trees", "authors": [ { "first": "John", "middle": [ "R" ], "last": "Quinlan", "suffix": "" } ], "year": 1986, "venue": "Machine Learning", "volume": "1", "issue": "1", "pages": "81--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "John R. Quinlan. 1986. Induction of decision trees. 
Machine Learning, 1(1):81-106.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A Comprehensive Grammar of the English Language", "authors": [ { "first": "Randolph", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Sidney", "middle": [], "last": "Greenbaum", "suffix": "" } ], "year": 1985, "venue": "Geoffrey Leech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Randolph Quirk, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman, London and New York.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semantic classes and syntactic ambiguity", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the ARPA Workshop on Human Language Technology. ARPA Information Science and Technology Office", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1993. Semantic classes and syntactic ambiguity. In Proceedings of the ARPA Workshop on Human Language Technology. ARPA Information Science and Technology Office.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Islands and syntactic prototypes", "authors": [ { "first": "John", "middle": [ "R" ], "last": "Ross", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John R. Ross. 1987. Islands and syntactic prototypes.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Papers from the 23rd Annual Regional Meeting of the Chicago Linguistic Society (Part I: The General Session)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "309--320", "other_ids": {}, "num": null, "urls": [], "raw_text": "In B. Need et al., editors, Papers from the 23rd Annual Regional Meeting of the Chicago Linguistic Society (Part I: The General Session), pages 309-320. Chicago Linguistic Society, Chicago.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The Statistical Analysis of Discrete Data", "authors": [ { "first": "Thomas", "middle": [ "J" ], "last": "Santner", "suffix": "" }, { "first": "Diane", "middle": [ "E" ], "last": "Duffy", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas J. Santner and Diane E. Duffy. 1989. The Statistical Analysis of Discrete Data. Springer-Verlag, New York.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Collins COBUILD English Language Dictionary", "authors": [], "year": 1987, "venue": "Retrieving collocations from text: XTRACT. Computational Linguistics", "volume": "19", "issue": "", "pages": "143--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Sinclair (editor in chief). 1987. Collins COBUILD English Language Dictionary. Collins, London. Frank Smadja. 1993. Retrieving collocations from text: XTRACT. Computational Linguistics, 19(1):143-177, March.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Grundzüge der Phonologie", "authors": [ { "first": "Nikolai", "middle": [ "S" ], "last": "Trubetzkoy", "suffix": "" } ], "year": 1939, "venue": "Prague. English translation in", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikolai S. Trubetzkoy. 1939. Grundzüge der Phonologie. 
Travaux du Cercle Linguistique de Prague 7, Prague. English translation in (Trubetzkoy, 1969).", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Principles of Phonology", "authors": [ { "first": "Nikolai", "middle": [ "S" ], "last": "Trubetzkoy", "suffix": "" } ], "year": 1969, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikolai S. Trubetzkoy. 1969. Principles of Phonology. University of California Press, Berkeley and Los Angeles, California. Translated into English from (Trubetzkoy, 1939).", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Spline Models for Observational Data", "authors": [ { "first": "Grace", "middle": [], "last": "Wahba", "suffix": "" } ], "year": 1990, "venue": "CBMS-NSF Regional Conference series in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grace Wahba. 1990. Spline Models for Observational Data. CBMS-NSF Regional Conference series in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Marked and unmarked: A choice between unequals", "authors": [ { "first": "Linda", "middle": [ "R" ], "last": "Waugh", "suffix": "" } ], "year": 1982, "venue": "Semiotica", "volume": "38", "issue": "", "pages": "299--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda R. Waugh. 1982. Marked and unmarked: A choice between unequals. Semiotica, 38:299-318.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Toward a theory of syntactic prototypes", "authors": [ { "first": "Margaret", "middle": [], "last": "Winters", "suffix": "" } ], "year": 1990, "venue": "Meanings and Prototypes: Studies in Linguistic Categorization", "volume": "", "issue": "", "pages": "285--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret Winters. 1990. Toward a theory of syntactic prototypes. In Savas L. Tsohatzidis, editor, Meanings and Prototypes: Studies in Linguistic Categorization, pages 285-307. Routledge, London.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology", "authors": [ { "first": "George", "middle": [ "K" ], "last": "Zipf", "suffix": "" }, { "first": "A", "middle": [], "last": "Zwicky", "suffix": "" } ], "year": 1949, "venue": "Die Sprache", "volume": "24", "issue": "", "pages": "129--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "George K. Zipf. 1949. Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology. Addison-Wesley, Reading, MA. A. Zwicky. 1978. On markedness in morphology. Die Sprache, 24:129-142.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Probability densities for the accuracy of the frequency method (dotted line) and the smoothed log-linear model (solid line) on the morphologically unrelated adjectives." } } } }