{ "paper_id": "D13-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:41:28.174057Z" }, "title": "Studying the recursive behaviour of adjectival modification with compositional distributional semantics", "authors": [ { "first": "Eva", "middle": [ "Maria" ], "last": "Vecchi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Trento", "location": { "country": "Italy" } }, "email": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Trento", "location": { "country": "Italy" } }, "email": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Trento", "location": { "country": "Italy" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this study, we use compositional distributional semantic methods to investigate restrictions in adjective ordering. Specifically, we focus on properties distinguishing Adjective-Adjective-Noun phrases in which there is flexibility in the adjective ordering from those bound to a rigid order. We explore a number of measures extracted from the distributional representation of AAN phrases which may indicate a word order restriction. We find that we are able to distinguish the relevant classes and the correct order based primarily on the degree of modification of the adjectives. Our results offer fresh insight into the semantic properties that determine adjective ordering, building a bridge between syntax and distributional semantics.", "pdf_parse": { "paper_id": "D13-1015", "_pdf_hash": "", "abstract": [ { "text": "In this study, we use compositional distributional semantic methods to investigate restrictions in adjective ordering. Specifically, we focus on properties distinguishing Adjective-Adjective-Noun phrases in which there is flexibility in the adjective ordering from those bound to a rigid order. We explore a number of measures extracted from the distributional representation of AAN phrases which may indicate a word order restriction. We find that we are able to distinguish the relevant classes and the correct order based primarily on the degree of modification of the adjectives. Our results offer fresh insight into the semantic properties that determine adjective ordering, building a bridge between syntax and distributional semantics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A prominent approach for representing the meaning of a word in Natural Language Processing (NLP) is to treat it as a numerical vector that codes the pattern of co-occurrence of that word with other expressions in a large corpus of language (Sahlgren, 2006; Turney and Pantel, 2010) . This approach to semantics (sometimes called distributional semantics) scales well to large lexicons and does not require words to be manually disambiguated (Sch\u00fctze, 1997) . 
Until recently, however, this method had been almost exclusively limited to the level of single content words (nouns, adjectives, verbs) , and had not directly addressed the problem of compositionality (Frege, 1892; Montague, 1970; Partee, 2004) , the crucial property of natural language which allows speakers to derive the meaning of a complex linguistic constituent from the meaning of its immediate syntactic subconstituents.", "cite_spans": [ { "start": 240, "end": 256, "text": "(Sahlgren, 2006;", "ref_id": "BIBREF23" }, { "start": 257, "end": 281, "text": "Turney and Pantel, 2010)", "ref_id": "BIBREF31" }, { "start": 441, "end": 456, "text": "(Sch\u00fctze, 1997)", "ref_id": null }, { "start": 569, "end": 595, "text": "(nouns, adjectives, verbs)", "ref_id": null }, { "start": 661, "end": 674, "text": "(Frege, 1892;", "ref_id": null }, { "start": 675, "end": 690, "text": "Montague, 1970;", "ref_id": "BIBREF20" }, { "start": 691, "end": 704, "text": "Partee, 2004)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several recent proposals have strived to extend distributional semantics with a component that also generates vectors for complex linguistic constituents, using compositional operations in the vector space (Baroni and Zamparelli, 2010; Guevara, 2010; Mitchell and Lapata, 2010; Grefenstette and Sadrzadeh, 2011; Socher et al., 2012) . All of these approaches construct distributional representations for novel phrases starting from the corpus-derived vectors for their lexical constituents and exploiting the geometric properties of the representation. Such methods are able to capture complex semantic information of adjective-noun (AN) phrases, such as characterizing modification (Boleda et al., 2012; Boleda et al., 2013) , and can detect semantic deviance in novel phrases (Vecchi et al., 2011) . Furthermore, these methods are naturally recursive: they can derive a representation not only for, e.g., red car, but also for new red car, fast new red car, etc. This aspect is appealing since trying to extract meaningful representations for all recursive phrases directly from a corpus runs into a problem of sparsity: most large phrases will never occur in any finite sample.", "cite_spans": [ { "start": 206, "end": 235, "text": "(Baroni and Zamparelli, 2010;", "ref_id": "BIBREF0" }, { "start": 236, "end": 250, "text": "Guevara, 2010;", "ref_id": "BIBREF16" }, { "start": 251, "end": 277, "text": "Mitchell and Lapata, 2010;", "ref_id": "BIBREF19" }, { "start": 278, "end": 311, "text": "Grefenstette and Sadrzadeh, 2011;", "ref_id": "BIBREF15" }, { "start": 312, "end": 332, "text": "Socher et al., 2012)", "ref_id": "BIBREF27" }, { "start": 679, "end": 700, "text": "(Boleda et al., 2012;", "ref_id": "BIBREF3" }, { "start": 701, "end": 721, "text": "Boleda et al., 2013)", "ref_id": "BIBREF4" }, { "start": 774, "end": 795, "text": "(Vecchi et al., 2011)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Once we start seriously looking into recursive modification, however, the issue of modifier ordering restrictions naturally arises. Such restrictions have often been discussed in the theoretical linguistic literature (Sproat and Shih, 1990; Crisma, 1991; Scott, 2002) , and have become one of the key ingredients of the 'cartographic' approach to syntax (Cinque, 2002) . 
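To make the compositional step concrete, here is a schematic sketch (toy dimensionality, random values; not the trained models of the works cited above) of two families of composition functions from this literature: simple vector mixtures (addition, elementwise multiplication) and the adjective-as-matrix, or lexical function, approach, whose recursive application naturally yields AAN representations such as new red car.

```python
# Schematic composition sketch with toy, randomly initialized representations.
# In the cited models, noun vectors come from the corpus and adjective matrices
# are learned; here everything is random and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim = 4                                        # toy dimensionality

car = rng.random(dim)                          # noun vector
red_v, new_v = rng.random(dim), rng.random(dim)                 # adjective vectors
red_M, new_M = rng.random((dim, dim)), rng.random((dim, dim))   # adjectives as matrices

an_add = red_v + car                           # "red car" by vector addition
an_mult = red_v * car                          # "red car" by elementwise multiplication
an_lexfunc = red_M @ car                       # "red car" as a matrix-vector product
aan_lexfunc = new_M @ (red_M @ car)            # recursion: "new red car"

print(aan_lexfunc)
```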
In this paradigm, the ordering is derived by assigning semantically different classes of modifiers to the specifiers of distinct functional projections, whose sequence is hard-wired. While it is accepted that in different languages movement can lead to a principled rearrangement of the linear order of the modifiers (Cinque, 2010; Steddy and Samek-Lodovici, 2011) , one key assumption of the cartographic literature is that exactly one intonationally unmarked order for stacked adjectives should be possible in languages like English. The possibility of alternative orders, when discussed at all, is attributed to the presence of idioms (high American building, but American high officer), to asyndetic conjunctive meanings (e.g. new creative idea parsed as [new & creative] idea, rather than [new [creative idea]]), or to semantic category ambiguity for any adjective which appears in different orders (see Cinque (2004) for discussion).", "cite_spans": [ { "start": 217, "end": 240, "text": "(Sproat and Shih, 1990;", "ref_id": "BIBREF28" }, { "start": 241, "end": 254, "text": "Crisma, 1991;", "ref_id": "BIBREF11" }, { "start": 255, "end": 267, "text": "Scott, 2002)", "ref_id": "BIBREF25" }, { "start": 355, "end": 369, "text": "(Cinque, 2002)", "ref_id": null }, { "start": 689, "end": 703, "text": "(Cinque, 2010;", "ref_id": "BIBREF9" }, { "start": 704, "end": 736, "text": "Steddy and Samek-Lodovici, 2011)", "ref_id": "BIBREF29" }, { "start": 1131, "end": 1147, "text": "[new & creative]", "ref_id": null }, { "start": 1281, "end": 1294, "text": "Cinque (2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study, we show that the existence of both rigid and flexible order cases is robustly attested at least for adjectival modification, and that flexible ordering is unlikely to reduce to idioms, coordination or ambiguity. Moreover, we show that at least for some recursively constructed adjective-adjective-noun phrases (AANs) we can extract meaningful representations from the corpus, approximating them reasonably well by means of compositional distributional semantic models, and that the semantic information contained in these models characterizes which AANs will have rigid order (as with rapid social change vs. *social rapid change) and which flexible order (e.g. total estimated population vs. estimated total population). In the former case, we find that the same distributional semantic cues discriminate between correct and wrong orders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To achieve these goals, we consider various properties of the distributional representation of AANs (both corpus-extracted and compositionally derived), and explore their correlation with restrictions in adjective ordering. We conclude that measures that quantify the degree to which the modifiers have an impact on the distributional meaning of the AAN can be good predictors of ordering restrictions in AANs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our initial step was to construct a semantic space for our experiments, consisting of a matrix where each row represents the meaning of an adjective, noun, AN or AAN as a distributional vector, and each column a semantic dimension of meaning. 
We first introduce the source corpus, then the vocabulary of words and phrases that we represent in the space, and finally the procedure adopted to build the vectors representing the vocabulary items from corpus statistics, yielding the semantic space matrix. We work here with a traditional, window-based semantic space, since our focus is on the effect of different composition methods given a common semantic space. In addition, Blacoe and Lapata (2012) found that a vanilla space of this sort performed best in their composition experiments, when compared to a syntax-aware space and to neural language model vectors such as those used for composition by Socher et al. (2011) .", "cite_spans": [ { "start": 673, "end": 697, "text": "Blacoe and Lapata (2012)", "ref_id": "BIBREF2" }, { "start": 900, "end": 920, "text": "Socher et al. (2011)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic space", "sec_num": "2.1" }, { "text": "We use as our source corpus the concatenation of the Web-derived ukWaC corpus, a mid-2009 dump of the English Wikipedia, and the British National Corpus. The corpus has been tokenized, POS-tagged and lemmatized with the TreeTagger (Schmid, 1995) , and it contains about 2.8 billion tokens. We extract all statistics at the lemma level, meaning that we consider only the canonical form of each word, ignoring inflectional information such as pluralization and verb inflection.", "cite_spans": [ { "start": 234, "end": 248, "text": "(Schmid, 1995)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Source corpus", "sec_num": null }, { "text": "Semantic space vocabulary. The words/phrases in the semantic space must of course include the items that we need for our experiments (adjectives, nouns, ANs and AANs used for model training, as input to composition and for evaluation). Therefore, we first populate our semantic space with a core vocabulary containing the 8K most frequent nouns and the 4K most frequent adjectives from the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source corpus", "sec_num": null }, { "text": "The ANs included in the semantic space are composed of adjectives with very high frequency in the corpus, so that they are generally able to combine with many classes of nouns. They are built from the 700 most frequent adjectives and the 4K most frequent nouns in the corpus, which were manually checked for problematic cases, excluding adjectives such as above, less, or very, and nouns such as cant, mph, or yours, often due to tagging errors. We generated the set of ANs by crossing the filtered 663 adjectives and 3,910 nouns. We include in our vocabulary those ANs that occur at least 100 times in the corpus, which amounted to a total of 128K ANs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source corpus", "sec_num": null }, { "text": "Finally, we created a set of AAN phrases composed of the adjectives and nouns used to generate the ANs. 
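A minimal sketch of the vocabulary selection just described, assuming a hypothetical pre-lemmatized, POS-tagged corpus represented as (lemma, pos) pairs. The thresholds mirror the text (8K nouns, 4K adjectives, ANs occurring at least 100 times); the manual filtering of problematic items and the smaller adjective list used to generate ANs are simplified away, so this is an approximation rather than the authors' exact procedure.

```python
# Sketch of frequency-based vocabulary selection over a hypothetical
# (lemma, pos) corpus; thresholds follow the description in the text.
from collections import Counter

def build_vocabulary(tagged_lemmas, n_nouns=8000, n_adjs=4000, an_min_freq=100):
    noun_freq = Counter(l for l, p in tagged_lemmas if p == "NN")
    adj_freq = Counter(l for l, p in tagged_lemmas if p == "JJ")
    nouns = [n for n, _ in noun_freq.most_common(n_nouns)]
    adjs = [a for a, _ in adj_freq.most_common(n_adjs)]

    # Count adjacent adjective-noun bigrams and keep the sufficiently frequent ANs.
    an_freq = Counter()
    for (l1, p1), (l2, p2) in zip(tagged_lemmas, tagged_lemmas[1:]):
        if p1 == "JJ" and p2 == "NN":
            an_freq[(l1, l2)] += 1
    ans = [an for an, f in an_freq.items() if f >= an_min_freq]
    return nouns, adjs, ans
```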
Additional preprocessing of the generated A_x A_y Ns includes: (i) checking that both A_x N and A_y N are attested in the corpus; (ii) discarding any A_x A_y N in which A_x N or A_y N is among the top 200 most frequent ANs in the source corpus (since in this case the order is likely to be affected by the fact that such phrases are almost certainly highly lexicalized); and (iii) discarding AANs seen as part of a conjunction in the source corpus (i.e., where the two adjectives appear separated by a comma, by and, or by or; this addresses the objection that a flexible order AAN might be a hidden A(&)A conjunction: we would expect such a conjunction to also appear overtly elsewhere). The set of AANs thus generated is then divided into two types of adjective ordering (see the sketch after the table captions below):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source corpus", "sec_num": null }, { "text": "1. Flexible Order (FO): phrases where both orders, A_x A_y N and A_y A_x N, are attested (f > 10 in both orders).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source corpus", "sec_num": null }, { "text": "2. Rigid Order (RO): phrases with one order, A_x A_y N, attested (20", "type_str": "table", "html": null }, "TABREF5": { "num": null, "text": "Flexible vs. Rigid Order AANs.", "content": "
t-normalized differences between flexible order (FO) and rigid order (RO) mean cosines (or mean \u2206PMI values) for corpus-extracted and model-generated vectors. For significant differences (p<0.05 after Bonferroni correction), the last column reports whether the mean cosine (or \u2206PMI) is larger for the flexible order (FO) or the rigid order (RO) class.
", "type_str": "table", "html": null }, "TABREF7": { "num": null, "text": "Attested-vs. unattested-order rigid order AANs. t-normalized mean paired cosine (or \u2206PMI) dif-", "content": "
ferences between attested (A) and unattested (U) AANs
with their components. For significant differences (paired
t-test p<0.05 after Bonferroni correction), last column
reports whether cosines (or \u2206PMI) are on average larger
for A or U.
", "type_str": "table", "html": null } } } }