{
"paper_id": "S12-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:24:12.074611Z"
},
"title": "Learning Semantics and Selectional Preference of Adjective-Noun Pairs",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford",
"location": {
"postCode": "OX1 3QD",
"settlement": "Oxford",
"country": "UK"
}
},
"email": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford",
"location": {
"postCode": "OX1 3QD",
"settlement": "Oxford",
"country": "UK"
}
},
"email": "cdyer@cs.cmu.edu"
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "phil.blunsom@cs.ox.ac.uk"
},
{
"first": "Stephen",
"middle": [],
"last": "Pulman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford",
"location": {
"postCode": "OX1 3QD",
"settlement": "Oxford",
"country": "UK"
}
},
"email": "stephen.pulman@cs.ox.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We investigate the semantic relationship between a noun and its adjectival modifiers. We introduce a class of probabilistic models that enable us to to simultaneously capture both the semantic similarity of nouns and modifiers, and adjective-noun selectional preference. Through a combination of novel and existing evaluations we test the degree to which adjective-noun relationships can be categorised. We analyse the effect of lexical context on these relationships, and the efficacy of the latent semantic representation for disambiguating word meaning.",
"pdf_parse": {
"paper_id": "S12-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "We investigate the semantic relationship between a noun and its adjectival modifiers. We introduce a class of probabilistic models that enable us to to simultaneously capture both the semantic similarity of nouns and modifiers, and adjective-noun selectional preference. Through a combination of novel and existing evaluations we test the degree to which adjective-noun relationships can be categorised. We analyse the effect of lexical context on these relationships, and the efficacy of the latent semantic representation for disambiguating word meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Developing models of the meanings of words and phrases is a key challenge for computational linguistics. Distributed representations are useful in capturing such meaning for individual words (Sato et al., 2008; Maas and Ng, 2010; Curran, 2005) . However, finding a compelling account of semantic compositionality that utilises such representations has proven more difficult and is an active research topic (Mitchell and Lapata, 2008 ; Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011) . It is in this area that our paper makes its contribution.",
"cite_spans": [
{
"start": 191,
"end": 210,
"text": "(Sato et al., 2008;",
"ref_id": "BIBREF16"
},
{
"start": 211,
"end": 229,
"text": "Maas and Ng, 2010;",
"ref_id": "BIBREF10"
},
{
"start": 230,
"end": 243,
"text": "Curran, 2005)",
"ref_id": "BIBREF3"
},
{
"start": 406,
"end": 432,
"text": "(Mitchell and Lapata, 2008",
"ref_id": "BIBREF11"
},
{
"start": 435,
"end": 463,
"text": "Baroni and Zamparelli, 2010;",
"ref_id": "BIBREF0"
},
{
"start": 464,
"end": 497,
"text": "Grefenstette and Sadrzadeh, 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The dominant approaches to distributional semantics have relied on relatively simple frequency counting techniques. However, such approaches fail to generalise to the much sparser distributions encountered when modeling compositional processes and provide no account of selectional preference. We propose a probabilistic model of the semantic representations for nouns and modifiers. The foundation of this model is a latent variable representa-tion of noun and adjective semantics together with their compositional probabilities. We employ this formulation to give a dual view of noun-modifier semantics: the induced latent variables provide an explicit account of selectional preference while the marginal distributions of the latent variables for each word implicitly produce a distributed representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most related work on selectional preference uses class-based probabilities to approximate (sparse) individual probabilities. Relevant papers includ\u00e9 O S\u00e9aghdha (2010), who evaluates several topic models adapted to learning selectional preference using co-occurence and Baroni and Zamparelli (2010) , who represent nouns as vectors and adjectives as matrices, thus treating them as functions over noun meaning. Again, inference is achieved using co-occurrence and dimensionality reduction.",
"cite_spans": [
{
"start": 269,
"end": 297,
"text": "Baroni and Zamparelli (2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We hypothesize that semantic classes determine the semantic characteristics of nouns and adjectives, and that the distribution of either with respect to other components of the sentences they occur in is also mediated by these classes (i.e., not by the words themselves). We assume that in general nouns select for adjectives, 1 and that this selection is dependent on both their latent semantic classes. In the next section, we describe a model encoding our hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjective-Noun Model",
"sec_num": "2"
},
{
"text": "We model a corpus D of tuples of the form (n, m, c 1 . . . c k ) consisting of a noun n, an adjective m (modifier), and k words of context. The context variables (c 1 . . . c k ) are treated as a bag of words and include the words to the left and right of the noun, its siblings and governing verbs. We designate the vocabulary V n for nouns, V m for modifiers and V c for context. We use z i to refer to the i th tuple in D and refer to variables within that tuple by subscripting them with i, e.g., n i and c 3,i are the noun and the third context variable of z i . The latent noun and adjective class variables are designated N i and M i . The corpus D is generated according to the plate diagram in figure 1. First, a set of parameters is drawn. A multinomial \u03a8 N representing the distribution of noun semantic classes in the corpus is drawn from a Dirichlet distribution with parameter \u03b1 N . For each noun class i we have distributions \u03a8 M i over adjective classes, \u03a8 n i over V n and \u03a8 c i over V c , also drawn from Dirichlet distributions. Finally, for each adjective class j, we have distributions \u03a8 m j over V m . Next, the contents of the corpus are generated by first drawing the length of the corpus (we do not parametrise this since we never generate from this model). Then, for each i, we generate noun class N i , adjective class M i , and the tuple z i as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process",
"sec_num": "2.1"
},
{
"text": "|N| |M| |N| N M n m c k |D| \u03b1 N \u03a8 N \u03b1 M \u03a8 M \u03b1 c \u03a8 c |N| \u03a8 n \u03a8 m \u03b1 n \u03b1 m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process",
"sec_num": "2.1"
},
{
"text": "N i | \u03a8 N \u223c Multi(\u03a8 N ) M i | \u03a8 M N i \u223c Multi(\u03a8 M N i ) n i | \u03a8 n N i \u223c Multi(\u03a8 n N i ) m i | \u03a8 m M i \u223c Multi(\u03a8 m M i ) \u2200k: c k,i | \u03a8 c N i \u223c Multi(\u03a8 c N i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process",
"sec_num": "2.1"
},
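{
"text": "To make the generative story concrete, the following minimal Python sketch (our illustration, not the authors' implementation; the class counts follow Section 2.2, while vocabulary sizes, hyperparameter values and the seed are hypothetical) draws one tuple under the process above:

import numpy as np

rng = np.random.default_rng(0)

# |N| = |M| = 50 classes as in Section 2.2; vocabulary sizes are placeholders.
K_N, K_M, V_n, V_m, V_c, k_ctx = 50, 50, 1000, 500, 2000, 4
a_N, a_M, a_n, a_m, a_c = 1.0, 1.0, 1.0, 1.0, 1.0  # Dirichlet hyperparameters

# Draw the multinomial parameters, mirroring the plate diagram in Figure 1.
Psi_N = rng.dirichlet(np.full(K_N, a_N))            # noun class proportions
Psi_M = rng.dirichlet(np.full(K_M, a_M), size=K_N)  # adjective classes per noun class
Psi_n = rng.dirichlet(np.full(V_n, a_n), size=K_N)  # nouns per noun class
Psi_m = rng.dirichlet(np.full(V_m, a_m), size=K_M)  # adjectives per adjective class
Psi_c = rng.dirichlet(np.full(V_c, a_c), size=K_N)  # context words per noun class

def draw_tuple():
    # Generate one (n, m, c_1 ... c_k) tuple as in Section 2.1.
    N = rng.choice(K_N, p=Psi_N)                 # noun class
    M = rng.choice(K_M, p=Psi_M[N])              # adjective class, selected by the noun class
    n = rng.choice(V_n, p=Psi_n[N])              # noun token
    m = rng.choice(V_m, p=Psi_m[M])              # adjective token
    c = rng.choice(V_c, p=Psi_c[N], size=k_ctx)  # bag of context words
    return n, m, c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process",
"sec_num": "2.1"
},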
{
"text": "We use Gibbs sampling to estimate the distributions of N and M , integrating out the multinomial parameters \u03a8 x (Griffiths and Steyvers, 2004). The Dirichlet parameters \u03b1 are drawn independently from a \u0393(1, 1) distribution, and are resampled using slice sampling at frequent intervals throughout the sampling process (Johnson and Goldwater, 2009) . This \"vague\" prior encourages sparse draws from the Dirichlet distribution. The number of noun and adjective classes N and M was set to 50 each; other sizes (100,150) did not significantly alter results.",
"cite_spans": [
{
"start": 317,
"end": 346,
"text": "(Johnson and Goldwater, 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization and Inference",
"sec_num": "2.2"
},
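{
"text": "As a rough illustration of one collapsed Gibbs step (a sketch under standard Dirichlet-multinomial count ratios; the count-table layout and names are our assumptions, not the authors' code), the conditional for a single noun-class assignment can be computed as follows:

import numpy as np

def sample_noun_class(M_i, n_i, ctx_i, cN, cNM, cNn, cNc, a, rng):
    # One collapsed Gibbs draw of N_i for a single tuple, assuming the
    # tuple's own counts were subtracted from the count tables beforehand.
    # Shapes: cN (K_N,), cNM (K_N, K_M), cNn (K_N, V_n), cNc (K_N, V_c);
    # a is a dict of Dirichlet hyperparameters. Repeated context words in
    # one tuple are treated independently here, a common simplification.
    K_N, K_M = cNM.shape
    V_n, V_c = cNn.shape[1], cNc.shape[1]

    p = cN + a['N']                                       # P(N)
    p = p * (cNM[:, M_i] + a['M']) / (cN + K_M * a['M'])  # P(M_i | N)
    p = p * (cNn[:, n_i] + a['n']) / (cN + V_n * a['n'])  # P(n_i | N)
    ctx_tot = cNc.sum(axis=1)
    for c in ctx_i:                                       # P(c_k | N) per context word
        p = p * (cNc[:, c] + a['c']) / (ctx_tot + V_c * a['c'])

    return rng.choice(K_N, p=p / p.sum())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization and Inference",
"sec_num": "2.2"
},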
{
"text": "As our model was developed on the basis of several hypotheses, we design the experiments and evaluation so that these hypotheses can be examined on their individual merit. We test the first hypothesis, that nouns and adjectives can be represented by semantic classes, recoverable using co-occurence, using a sense clustering evaluation by Ciaramita and Johnson (2003) . The second hypothesis, that the distribution with respect to context and to each other is governed by these semantic classes is evaluated using pseudo-disambiguation (Clark and Weir, 2002; Pereira et al., 1993; Rooth et al., 1999) and bigram plausibility (Keller and Lapata, 2003) tests. To test whether noun classes indeed select for adjective classes, we also evaluate an inverse model (M od i ), where the adjective class is drawn first, in turn generating both context and the noun class. In addition, we evaluate copies of both models ignoring context (M od nc and M od inc ).",
"cite_spans": [
{
"start": 339,
"end": 367,
"text": "Ciaramita and Johnson (2003)",
"ref_id": "BIBREF1"
},
{
"start": 536,
"end": 558,
"text": "(Clark and Weir, 2002;",
"ref_id": "BIBREF2"
},
{
"start": 559,
"end": 580,
"text": "Pereira et al., 1993;",
"ref_id": "BIBREF14"
},
{
"start": 581,
"end": 600,
"text": "Rooth et al., 1999)",
"ref_id": "BIBREF15"
},
{
"start": 625,
"end": 650,
"text": "(Keller and Lapata, 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We use the British National Corpus (BNC), training on 90 percent and testing on 10 percent of the corpus. Results are reported after 2,000 iterations including a burn-in period of 200 iterations. Classes are marginalised over every 10th iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Supersense tagging (Ciaramita and Johnson, 2003; Curran, 2005) evaluates a model's ability to cluster words by their semantics. The task of this evaluation is to determine the WORDNET supersenses of a given list of nouns. We report results on the WN1.6 test set as defined by Ciaramita and Johnson (2003) , who used 755 randomly selected nouns with a unique supersense from the WORDNET 1.6 corpus. As their test set was random, results weren't exactly replicable. For a fair comparison, we select all suitable nouns from the corpus that also appeared in the training corpus. We report results on type and token level (52314 tokens with 1119 types). The baseline 2 chooses the most common supersense. We use cosine-similarity on the marginal noun class vectors to measure distance between nouns. Each noun in the test set is then assigned a supersense by performing a distance-weighted voting among its k nearest neighbours. Results of this evaluation are shown in Table 1 , with Figure 2 showing scores for model M od across different values for k. The results demonstrate that nouns can semantically be represented as members of latent classes, while the superiority of M od over M od nc supports our hypothesis that context co-occurence is a key feature for learning these classes.",
"cite_spans": [
{
"start": 19,
"end": 48,
"text": "(Ciaramita and Johnson, 2003;",
"ref_id": "BIBREF1"
},
{
"start": 49,
"end": 62,
"text": "Curran, 2005)",
"ref_id": "BIBREF3"
},
{
"start": 276,
"end": 304,
"text": "Ciaramita and Johnson (2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 964,
"end": 971,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 979,
"end": 987,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Supersense Tagging",
"sec_num": "4.1"
},
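{
"text": "A minimal sketch of the assignment procedure described above (our reconstruction; the function and variable names are ours): each test noun is compared to the training nouns by cosine similarity over the marginal class vectors, and its k nearest neighbours cast similarity-weighted votes.

import numpy as np
from collections import defaultdict

def assign_supersense(query_vec, train_vecs, train_senses, k=10):
    # Cosine similarity between the query noun and all training nouns.
    sims = train_vecs @ query_vec / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    top = np.argsort(sims)[-k:]            # indices of the k nearest neighbours
    votes = defaultdict(float)
    for i in top:
        votes[train_senses[i]] += sims[i]  # distance-weighted vote
    return max(votes, key=votes.get)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supersense Tagging",
"sec_num": "4.1"
},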
{
"text": "Pseudo-disambiguation was introduced by Clark and Weir (2002) to evaluate models of selectional preference. The task is to select the more probable of two candidate arguments to associate with a given 2 The baseline results are from Ciaramita and Johnson (2003) . Using the majority baseline on the full test set, we only get .176 and .160 for token and type respectively. predicate. For us, this is to decide which adjective, a 1 or a 2 , is more likely to modify a noun n.",
"cite_spans": [
{
"start": 40,
"end": 61,
"text": "Clark and Weir (2002)",
"ref_id": "BIBREF2"
},
{
"start": 201,
"end": 202,
"text": "2",
"ref_id": null
},
{
"start": 233,
"end": 261,
"text": "Ciaramita and Johnson (2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-Disambiguation",
"sec_num": "4.2"
},
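{
"text": "Under our model this decision can be made by comparing the joint probabilities of the two candidate pairs, marginalising over both latent classes; since P(n) is shared between the candidates, this is equivalent to comparing P(a|n). A sketch using point estimates of the multinomials (e.g., averaged Gibbs samples; the names are ours):

import numpy as np

def pair_score(m, n, Psi_N, Psi_M, Psi_n, Psi_m):
    # P(m, n) = sum_N P(N) P(n|N) sum_M P(M|N) P(m|M).
    # Shapes: Psi_N (K_N,), Psi_M (K_N, K_M), Psi_n (K_N, V_n), Psi_m (K_M, V_m).
    inner = Psi_M @ Psi_m[:, m]          # per noun class: sum_M P(M|N) P(m|M)
    return np.sum(Psi_N * Psi_n[:, n] * inner)

def pseudo_disambiguate(a1, a2, n, params):
    # Pick the adjective more likely to modify noun n.
    return a1 if pair_score(a1, n, *params) >= pair_score(a2, n, *params) else a2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-Disambiguation",
"sec_num": "4.2"
},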
{
"text": "We follow the approach by Clark and Weir (2002) to create the test data. To improve the quality of the data, we filtered using bigram counts from the Web1T corpus, setting a lower bound on the probable bigram (a 1 , n) and chosing a 2 from five candidates, picking the lowest count for bigram (a 2 , n).",
"cite_spans": [
{
"start": 26,
"end": 47,
"text": "Clark and Weir (2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-Disambiguation",
"sec_num": "4.2"
},
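{
"text": "A sketch of the filtering step (our reconstruction; the threshold value is hypothetical, as the paper sets a lower bound without reporting it, and bigram_count stands for a lookup into Web1T counts):

def build_test_pair(a1, n, candidates, bigram_count, min_count=100):
    # Keep the attested pair only if it is sufficiently frequent, then pick
    # as confounder the candidate whose bigram with n is least frequent.
    if bigram_count(a1, n) < min_count:
        return None
    a2 = min(candidates, key=lambda a: bigram_count(a, n))  # five candidates
    return (a1, a2, n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-Disambiguation",
"sec_num": "4.2"
},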
{
"text": "We report results for all variants of our model in Table 2 . As baseline we use unigram counts in our training data, chosing the more frequent adjective. While all models decisively beat the baseline, the models using context strongly outperform those that do not. This supports our hypothesis regarding the importance of context in semantic clustering.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Pseudo-Disambiguation",
"sec_num": "4.2"
},
{
"text": "The similarity between the normal and inverse models implies that the direction of the nounadjective relationship has negligible impact for this evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-Disambiguation",
"sec_num": "4.2"
},
{
"text": "Bigram plausibility (Keller and Lapata, 2003 ) is a second evaluation for selectional preference. Unlike the frequency-based pseudo-disambiguation task, it evaluates how well a model matches human judgement of the plausibility of adjective-noun pairs. Keller and Lapata (2003) demonstrated a correlation between frequencies and plausibility, but this does not sufficiently explain human judgement. An example taken from their unseen data set illustrates the dissociation between frequency and plausibility:",
"cite_spans": [
{
"start": 20,
"end": 44,
"text": "(Keller and Lapata, 2003",
"ref_id": "BIBREF9"
},
{
"start": 252,
"end": 276,
"text": "Keller and Lapata (2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Plausibility",
"sec_num": "4.3"
},
{
"text": "\u2022 Frequent, implausible: \"educational water\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Plausibility",
"sec_num": "4.3"
},
{
"text": "\u2022 Infrequent, plausible: \"difficult foreigner\" 3 The plausibility evaluation has two data sets of 90 adjective-noun pairs each. The first set (seen) contains random bigrams from the BNC. The second set (unseen) are bigrams not contained in the BNC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Plausibility",
"sec_num": "4.3"
},
{
"text": "Recent work (\u00d3 S\u00e9aghdha, 2010; Erk et al., 2010) approximated plausibility with joint probability (JP). We believe that for semantic plausibility (not probability!) mutual information (MI), which factors out acutal frequencies, is a better metric. 4 We report results using JP, MI and MI\u02c62. Table 3 shows the performance of our models compared to results reported in\u00d3 S\u00e9aghdha (2010) . As before, results between the normal and the inverse model (omitted due to space) are very similar. Surprisingly, the no-context models consistently outperform the models using context on the seen data set. This suggests that the seen data set can quite precisely be ranked using frequency estimates, which the no-context models might be better at capturing without the 'noise' introduced by context. Table 4 : Results on the unseen plausibility dataset.",
"cite_spans": [
{
"start": 12,
"end": 30,
"text": "(\u00d3 S\u00e9aghdha, 2010;",
"ref_id": null
},
{
"start": 31,
"end": 48,
"text": "Erk et al., 2010)",
"ref_id": "BIBREF4"
},
{
"start": 248,
"end": 249,
"text": "4",
"ref_id": null
},
{
"start": 368,
"end": 383,
"text": "S\u00e9aghdha (2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 291,
"end": 298,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 788,
"end": 795,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bigram Plausibility",
"sec_num": "4.3"
},
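{
"text": "For clarity, the three ranking metrics we compare, given model estimates of the joint and marginal probabilities (a sketch; MI^2 follows the common heuristic of squaring the joint probability to reduce MI's bias towards rare pairs):

import numpy as np

def plausibility_scores(p_nm, p_n, p_m):
    # JP ranks by the joint probability itself; MI factors out the
    # marginal frequencies of the two words.
    jp = p_nm
    mi = np.log(p_nm / (p_n * p_m))
    mi2 = np.log(p_nm ** 2 / (p_n * p_m))
    return jp, mi, mi2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Plausibility",
"sec_num": "4.3"
},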
{
"text": "The results on the unseen data set (Table 4 ) prove interesting as well. The inverse no-context model is performing significantly poorer than any of the other models. To understand this result we must investigate the differences between the unseen data set and the seen data set and to the pseudodisambiguation evaluation. The key difference to pseudo-disambiguation is that we measure a human 4 See (Evert, 2005) for a discussion of these metrics. plausibility judgement, which -as we have demonstrated -only partially correlates with bigram frequencies. Our models were trained on the BNC, hence they could only learn frequency estimates for the seen data set, but not for the unseen data.",
"cite_spans": [
{
"start": 400,
"end": 413,
"text": "(Evert, 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "(Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bigram Plausibility",
"sec_num": "4.3"
},
{
"text": "Based on our hypothesis about the role of context, we expect M od and M od i to learn semantic classes based on the distribution of context. Without the access to that context, we argued that M od nc and M od inc would instead learn frequency estimates. 5 The hypothesis that nouns generally select for adjectives rather than vice versa further suggests that M od and M od nc would learn semantic properties that M od i and M od inc could not learn so well.",
"cite_spans": [
{
"start": 254,
"end": 255,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Plausibility",
"sec_num": "4.3"
},
{
"text": "In summary, we hence expected M od to perform best on the unseen data, learning semantics from both context and noun-adjective selection. Also, as supported by the results, we expected M od inc to performs poorly, as it is the model least capable of learning semantics according to our hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Plausibility",
"sec_num": "4.3"
},
{
"text": "We have presented a class of probabilistic models which successfully learn semantic clusterings of nouns and a representation of adjective-noun selectional preference. These models encoded our beliefs about how adjective-noun pairs relate to each other and to the other words in the sentence. The performance of our models on estimating selectional preference strongly supported these initial hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We discussed plausibility judgements from a theoretical perspective and argued that frequency estimates and JP are imperfect approximations for plausibility. While models can perform well on some evaluations by using either frequency estimates or semantic knowledge, we explained why this does not apply to the unseen plausibility test. The performance on that task demonstrates both the success of our model and the shortcomings of frequency-based approaches to human plausibility judgements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Finally, this paper demonstrated that it is feasible to learn semantic representations of words while concurrently learning how they relate to one another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Future work will explore learning words from broader classes of semantic relations and the role of context in greater detail. Also, we will evaluate the system applied to higher level tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We evaluate this hypothesis as well as its inverse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "At the time of writing, Google estimates 56,900 hits for \"educational water\" and 575 hits for \"difficult foreigner\". \"Educational water\" ranks bottom in the gold standard of the unseen set, \"difficult foreigner\" ranks in the top ten.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This could also explain their weaker performance on pseudo-disambiguation in the previous section, where the negative examples had zero frequency in the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Nouns are vectors, adjectives are matrices: representing adjective-noun constructions in semantic space",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10",
"volume": "",
"issue": "",
"pages": "1183--1193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 1183-1193, Stroudsburg, PA, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Supersense tagging of unknown nouns in wordnet",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 conference on Empirical methods in natural language processing, EMNLP '03",
"volume": "",
"issue": "",
"pages": "168--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimiliano Ciaramita and Mark Johnson. 2003. Su- persense tagging of unknown nouns in wordnet. In Proceedings of the 2003 conference on Empirical methods in natural language processing, EMNLP '03, pages 168-175, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Class-based probability estimation using a semantic hierarchy",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2002,
"venue": "Comput. Linguist",
"volume": "28",
"issue": "",
"pages": "187--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and David Weir. 2002. Class-based prob- ability estimation using a semantic hierarchy. Comput. Linguist., 28:187-206, June.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Supersense tagging of unknown nouns using semantic similarity",
"authors": [
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05",
"volume": "",
"issue": "",
"pages": "26--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James R. Curran. 2005. Supersense tagging of unknown nouns using semantic similarity. In Proceedings of the 43rd Annual Meeting on Association for Compu- tational Linguistics, ACL '05, pages 26-33, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A flexible, corpus-driven model of regular and inverse selectional preferences",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Ulrike",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "",
"pages": "723--763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk, Sebastian Pad\u00f3, and Ulrike Pad\u00f3. 2010. A flexible, corpus-driven model of regular and inverse selectional preferences. Computational Linguistics, 36:723-763.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The statistics of word cooccurrences: word pairs and collocations",
"authors": [
{
"first": "Stefan",
"middle": [
"Evert"
],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "70174",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Evert. 2005. The statistics of word cooccur- rences: word pairs and collocations. Ph.D. the- sis, Universit\u00e4t Stuttgart, Holzgartenstr. 16, 70174 Stuttgart.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Experimental support for a categorical compositional distributional model of meaning",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1394--1404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP '11, pages 1394-1404, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Finding scientific topics",
"authors": [
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "101",
"issue": "",
"pages": "5228--5235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths and Mark Steyvers. 2004. Find- ing scientific topics. Proceedings of the National Academy of Sciences, 101:5228-5235.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor grammars",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '09",
"volume": "",
"issue": "",
"pages": "317--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor gram- mars. In Proceedings of Human Language Technolo- gies: The 2009 Annual Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics, NAACL '09, pages 317-325, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using the web to obtain frequencies for unseen bigrams",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "459--484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Keller and Mirella Lapata. 2003. Using the web to obtain frequencies for unseen bigrams. Computational Linguistics, pages 459-484.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A probabilistic model for semantic word vectors",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Workshop on Deep Learning and Unsupervised Feature Learning, NIPS '10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L. Maas and Andrew Y. Ng. 2010. A probabilis- tic model for semantic word vectors. In Workshop on Deep Learning and Unsupervised Feature Learning, NIPS '10.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL-HLT'08",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In ACL-HLT'08, pages 236 -244.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Latent variable models of selectional preference",
"authors": [
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diarmuid\u00d3 S\u00e9aghdha. 2010. Latent variable models of selectional preference. In Proceedings of the 48th",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Annual Meeting of the Association for Computational Linguistics, ACL '10",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "435--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, ACL '10, pages 435-444, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributional clustering of English words",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Tishby",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st annual meeting on Association for Computational Linguistics, ACL '93",
"volume": "",
"issue": "",
"pages": "183--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Pro- ceedings of the 31st annual meeting on Association for Computational Linguistics, ACL '93, pages 183-190, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Inducing a semantically annotated lexicon via EM-based clustering",
"authors": [
{
"first": "Mats",
"middle": [],
"last": "Rooth",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Detlef",
"middle": [],
"last": "Prescher",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, ACL '99",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Car- roll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via EM-based clustering. In Pro- ceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Lin- guistics, ACL '99, pages 104-111, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Knowledge discovery of semantic relationships between words using nonparametric bayesian graph model",
"authors": [
{
"first": "Issei",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Minoru",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceeding of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '08",
"volume": "",
"issue": "",
"pages": "587--595",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Issei Sato, Minoru Yoshida, and Hiroshi Nakagawa. 2008. Knowledge discovery of semantic relationships between words using nonparametric bayesian graph model. In Proceeding of the 14th ACM SIGKDD in- ternational conference on Knowledge discovery and data mining, KDD '08, pages 587-595, New York, NY, USA. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Plate diagram illustrating our model of noun and modifier semantic classes (designated N and M , respectively), a modifier-noun pair (m,n), and its context.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Scores of M od on the supersense task. The upper line denotes token-, the lower type-level scores. The y-axis is the percentage of correct assignments, the x-axis denotes the number of neighbours included in the vote.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"num": null,
"content": "
| Model | k | Token | Type |
| Baseline | - | .241 | .210 |
| Ciaramita & Johnson | - | .523 | .534 |
| Curran | - | - | .680 |
| Mod | 10 | .592 | .517 |
| Mod_nc | 10 | .473 | .410 |
",
"html": null,
"text": "Supersense evaluation results. Values are the percentage of correctly assigned supersenses. k indicates the number of nearest neighbours considered.",
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "",
"html": null,
"text": "",
"type_str": "table"
},
"TABREF4": {
"num": null,
"content": ": Results (Pearson r and Spearman \u03c1 correlations) |
on the Keller and Lapata (2003) plausibility data. Bold |
indicates best scores, underlining our best scores. High |
values indicate high correlation with the gold standard. |
",
"html": null,
"text": "",
"type_str": "table"
}
}
}
}