|
{ |
|
"paper_id": "N15-1037", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:33:12.292682Z" |
|
}, |
|
"title": "Interpreting Compound Noun Phrases Using Web Search Queries", |
|
"authors": [ |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "Pa\u015fca", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Google Inc", |
|
"location": { |
|
"addrLine": "1600 Amphitheatre Parkway Mountain View", |
|
"postCode": "94043", |
|
"region": "California" |
|
} |
|
}, |
|
"email": "mars@google.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "A weakly-supervised method is applied to anonymized queries to extract lexical interpretations of compound noun phrases (e.g., \"fortune 500 companies\"). The interpretations explain the subsuming role (\"listed in\") that modifiers (fortune 500) play relative to heads (companies) within the noun phrases. Experimental results over evaluation sets of noun phrases from multiple sources demonstrate that interpretations extracted from queries have encouraging coverage and precision. The top interpretation extracted is deemed relevant for more than 70% of the noun phrases.", |
|
"pdf_parse": { |
|
"paper_id": "N15-1037", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "A weakly-supervised method is applied to anonymized queries to extract lexical interpretations of compound noun phrases (e.g., \"fortune 500 companies\"). The interpretations explain the subsuming role (\"listed in\") that modifiers (fortune 500) play relative to heads (companies) within the noun phrases. Experimental results over evaluation sets of noun phrases from multiple sources demonstrate that interpretations extracted from queries have encouraging coverage and precision. The top interpretation extracted is deemed relevant for more than 70% of the noun phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Motivation: Semantic classes of interest to Web users are often expressed as lexical class labels (e.g., \"fortune 500 companies\", \"italian composers\", \"victorinox knives\"). Each class label hints at the implicit properties shared among its instances (e.g., general electric, gaetano donizetti, swiss army jetsetter respectively). Class labels allow for the organization of instances into hierarchies, which in turn allows for the systematic development of knowledge repositories. This motivates research efforts to acquire as many relevant class labels of instances as possible, which have received particular emphasis (Wang and Cohen, 2009; Dalvi et al., 2012; Flati et al., 2014) . The efforts are part of the larger area of extracting open-domain facts and relations (Banko et al., 2007; Hoffart et al., 2013; Yao and Van Durme, 2014) , ultimately delivering richer results in Web search.", |
|
"cite_spans": [ |
|
{ |
|
"start": 619, |
|
"end": 641, |
|
"text": "(Wang and Cohen, 2009;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 661, |
|
"text": "Dalvi et al., 2012;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 662, |
|
"end": 681, |
|
"text": "Flati et al., 2014)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 770, |
|
"end": 790, |
|
"text": "(Banko et al., 2007;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 791, |
|
"end": 812, |
|
"text": "Hoffart et al., 2013;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 813, |
|
"end": 837, |
|
"text": "Yao and Van Durme, 2014)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Different methods can associate instances (general electric) with both class labels (\"fortune 500 companies\") and facts (<general electric, founded in, 1892>) extracted from text. But the class labels tend to be extracted, maintained and used separately from facts. Beyond organizing the class labels hierarchically (Kozareva and Hovy, 2010) , the meaning of a class label is rarely explored (Nastase and Strube, 2008) , nor is it made available downstream to applications using the extracted data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 158, |
|
"text": "(<general electric, founded in, 1892>)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 341, |
|
"text": "(Kozareva and Hovy, 2010)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 418, |
|
"text": "(Nastase and Strube, 2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The method introduced in this paper is the first to exploit Web search queries to uncover the semantics of open-domain class labels in particular; and of compound noun phrases in general. The method extracts candidate, lexical interpretations of compound noun phrases from queries. The interpretations turn implicit properties or subsuming roles (\"listed in\", \"from\", \"made by\") that modifiers (fortune 500, italian, victorinox) play within longer noun phrases (\"fortune 500 companies\", \"italian composers\", \"victorinox knives\") into explicit strings. The roles of modifiers relative to heads of noun phrase compounds cannot be characterized in terms of a finite list of possible compounding relationships (Downing, 1977) . Hence, the interpretations are not restricted to a closed, pre-defined set. Experimental results over evaluation sets of noun phrases from multiple sources demonstrate that interpretations can be extracted from queries for a significant fraction of the input noun phrases. Without relying on syntactic analysis, extracted interpretations induce implicit bracketings over the interpreted noun phrases. The bracketings reveal the multiple senses, some of which are more rare but still plausible, in which the same noun phrase can be sometimes explained. The quality of interpretations is encouraging, with at least one interpretation deemed relevant among the top 3 retrieved for 77% of the noun phrases with extracted interpretations. The top interpretation is deemed relevant for more than 70% of the noun phrases. Applications: The extracted interpretations can serve as a bridge connecting class labels and facts. Relevant interpretations allow one to potentially derive missing facts (<general electric, listed in, fortune 500>) from existing class labels (<general electric, fortune 500 companies>) and vice versa. In addition, relevant interpretations of class labels are themselves class labels inferred for the same instances. Examples are <general electric, companies listed in fortune 500>, or <general electric, companies in fortune 500>, based on <general electric, fortune 500 companies>. If the input class labels are organized hierarchically (<fortune 500 companies, companies>), interpretations explain why more specific class labels (\"fortune 500 companies\", \"german companies\", \"dow jones industrial average companies\", \"french companies\") do not merely belong under more general ones (\"companies\"), but do so along shared interpretations (companies\u2192listed in\u2192{fortune 500, dow jones industrial average companies}; vs. {companies\u2192from\u2192{germany, france}); and, more generally, aid in the better understanding of noun phrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 706, |
|
"end": 721, |
|
"text": "(Downing, 1977)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contributions:", |
|
"sec_num": null |
|
}, |
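
To make the fact-derivation idea above concrete, here is a minimal Python sketch; the triple format and the helper name derive_fact are illustrative assumptions, not the paper's implementation:

```python
def derive_fact(instance, modifier, head, interpretation):
    """Turn <instance, class label> plus an interpretation of the class
    label into an <instance, relation, argument> fact. Assumes the
    interpretation has the shape '<head> <relation> <modifier>', e.g.
    'companies listed in fortune 500'."""
    assert interpretation.startswith(head) and interpretation.endswith(modifier)
    relation = interpretation[len(head):len(interpretation) - len(modifier)].strip()
    return (instance, relation, modifier)

# <general electric, fortune 500 companies> plus the interpretation
# "companies listed in fortune 500" yields the missing fact:
print(derive_fact("general electric", "fortune 500", "companies",
                  "companies listed in fortune 500"))
# -> ('general electric', 'listed in', 'fortune 500')
```
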
|
{ |
|
"text": "Hypothesis: Let N be a compound noun phrase, containing a head H preceded by modifiers M . Each of H and M may contain one or multiple tokens. Being a compound, the sequence of modifiers and head in N act as a single noun (Downing, 1977; Hendrickx et al., 2013) . If N is relevant and of interest to Web users, then in a sufficiently large corpus it will eventually be referred to in relatively more verbose search queries, which explain the implicit role that modifiers M play relative to the head H. Acquisition from Queries: To illustrate the intuition above, consider the noun phrases \"water animals\" and \"zone 7 plants\". If enough Web users are interested in the concepts represented by these noun phrases, then the phrases are likely to be submitted as search queries. In addition, some Web users seeking similar information are likely to submit queries that make the role of the modifiers water and zone 7 explicit, such as \"animals living in water\" or \"plants that grow in zone 7\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 237, |
|
"text": "(Downing, 1977;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 261, |
|
"text": "Hendrickx et al., 2013)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpreting Noun Phrases", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As illustrated in Figure 1 , the extraction method animals that grow in water animals who live in water plants that grow in zone 11 plants that grow in zone 7 plants that grow well in zone 7 plants for zone 10 animals living in coral reef animals living in freshwater animals living in water plants for zone 7 plants for planting zone 10 plants for planting zone 7", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 26, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Interpreting Noun Phrases", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "justices of the washington state supreme court supreme court justices in ohio supreme court justices in oregon supreme court justices in washington state proposed in this paper takes as input a vocabulary of noun phrases, as well as a set of anonymized queries from which possible interpretations for the noun phrases must be extracted. The extraction consists of several steps: (1) the selection of a subset of queries that may be candidate interpretations of some yet-tobe-specified noun phrases; (2) the matching of the selected queries to the noun phrases to interpret; and (3) the aggregation of matched queries into candidate interpretations extracted for a noun phrase. Queries as Candidate Interpretations: The input queries are matched against the extraction patterns from Table 1 . The use of targeted patterns in information extraction has been suggested before (Hearst, 1992; Fader et al., 2011) . In our case, the patterns match queries that start with an arbitrary ngram H, followed by what is likely a ", |
|
"cite_spans": [ |
|
{ |
|
"start": 873, |
|
"end": 887, |
|
"text": "(Hearst, 1992;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 888, |
|
"end": 907, |
|
"text": "Fader et al., 2011)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 782, |
|
"end": 789, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Interpreting Noun Phrases", |
|
"sec_num": "2" |
|
}, |
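
A sketch of this query-selection step in Python. The connector patterns below are illustrative stand-ins: the actual pattern set is the paper's Table 1, whose rows are not recoverable from this parse.

```python
import re

# Illustrative connectors standing in for the patterns of Table 1:
# relative-pronoun, prepositional and passive constructs (assumed).
CONNECTOR_RE = re.compile(
    r"^(?P<head>.+?) "                                   # Q1: hypothetical head H
    r"(?P<conn>(?:that|who|which) \w+ (?:in|on|by|for|of|from)"
    r"|(?:in|on|by|for|of|from|with)(?: the)?"
    r"|\w+ed (?:in|on|by|for|from)) "                    # Q2: connector
    r"(?P<rest>.+)$"                                     # Q3 (+ optional Q4)
)

def select_query(query):
    """Split a query into (Q1, Q2, rest): Q1 is the hypothetical head H,
    Q2 the connector, and rest holds the hypothetical modifier M (Q3)
    plus any optional trailing tokens (Q4). The M/Q4 boundary is fixed
    later, when the query is matched against a noun phrase."""
    m = CONNECTOR_RE.match(query)
    return (m.group("head"), m.group("conn"), m.group("rest")) if m else None

print(select_query("plants that grow in zone 7"))
# -> ('plants', 'that grow in', 'zone 7')
print(select_query("justices of the california supreme court"))
# -> ('justices', 'of the', 'california supreme court')
```
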
|
{

"text": "N = [N 1 N 2 ], where the two sequences correspond to a hypothetical modifier and a hypothetical head of the noun phrase. For example, the noun phrase \"zone 7 plants\" is split into [\"zone\", \"7 plants\"] and separately into [\"zone 7\", \"plants\"]. If N 1 matches Q 3 and N 2 matches Q 1 , then the matching query Q (e.g., \"(plants) H that grow in (zone 7) M \") is retained as a candidate interpretation of the noun phrase N (\"(zone 7) M (plants) H \"), as shown in the middle portion of Figure 1 . Mapping via Modifier Variants: At its simplest, the matching of the hypothetical modifier relies on strict string matching. Alternatively, original modifiers in the noun phrases to interpret may be matched to queries via expansion variants. Variants are phrases that likely play the same role, and therefore share interpretations, as modifiers relative to the head in a noun phrase. Variants allow for the extraction of candidate interpretations that may otherwise not be available in the input data. For example, in Figure 1 , the variant new jersey available for california allows for the matching of california in the noun phrase \"(california) M (supreme court justices) H \", with new jersey in the query \"(supreme court justices) H born in (new jersey) M \". The candidate interpretation \"(supreme court justices) H born in (california) M \" is extracted for the noun phrase \"(california) M (supreme court justices) H \", even though the query \"supreme court justices born in california\" is not present among the input queries. Possible sources of variants include distributionally similar phrases (Lin and Wu, 2009) , where the phrases most similar to a modifier would act as its variants. Mappings from adjectival modifiers in noun phrases (e.g., aquatic in \"aquatic animals\" in Figure 1 ) into the nominal counterparts (e.g., water) that are likely to occur in interpretations (e.g., \"(animals) H who live in (water) M \") are also useful. Concretely, as described later in Section 3, variants are generated using WordNet (Fellbaum, 1998), distributional similarities and Wikipedia. Aggregation of Candidate Interpretations: Candidate interpretations of a noun phrase are aggregated from source queries that matched the noun phrase. The frequency score of a candidate interpretation is the weighted sum of the frequencies of source queries from which the candidate interpretation is collected, possibly via variants of modifiers. In the weighted sum, the weights are similarity scores between the original modifier from the noun phrase, on one hand, and the variant from the source query into which the modifier was mapped, on the other hand. For example, in Figure 1 , the frequency score of the candidate interpretation \"(plants) H that grow in (zone 7) M \" for the noun phrase \"(zone 7) M (plants) H \" is the weighted sum of the frequencies of the source queries \"plants that grow in zone 7\" and \"plants that grow in zone 11\". The weights for the variants zone 7 and zone 11 relative to the original modifier zone 7 may be 1.0 (identity) and 0.8 (distributional similarity), whereas the weights of adjectival modifiers such as water for aquatic may be 1.0. Separately from the frequency score, a penalty score is computed that penalizes interpretations containing extraneous tokens. Specifically, the penalty counts the number of nouns or adjectives located outside the modifier and head. Candidate interpretations extracted for a noun phrase are ranked in increasing order of their penalty scores or, in case of ties, in decreasing order of their frequency scores.",
|
"cite_spans": [ |
|
{ |
|
"start": 1591, |
|
"end": 1609, |
|
"text": "(Lin and Wu, 2009)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 481, |
|
"end": 489, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1009, |
|
"end": 1017, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1774, |
|
"end": 1782, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 2654, |
|
"end": 2662, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Interpreting Noun Phrases", |
|
"sec_num": "2" |
|
}, |
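
The decomposition, matching, and aggregation steps just described, as a Python sketch; the data structures (parsed_queries, variant_weight, penalty) are assumed interfaces rather than the paper's code:

```python
from collections import defaultdict

def decompositions(noun_phrase):
    """All splits of a noun phrase into (N1=modifier, N2=head)."""
    tokens = noun_phrase.split()
    return [(" ".join(tokens[:i]), " ".join(tokens[i:]))
            for i in range(1, len(tokens))]

def interpret(noun_phrase, parsed_queries, query_freq, variant_weight, penalty):
    """Rank candidate interpretations for one noun phrase.

    parsed_queries: iterable of (q1_head, connector, q3_modifier, q4_rest)
    query_freq:     source query string -> frequency in the logs
    variant_weight: (orig_modifier, variant) -> similarity in [0.0, 1.0],
                    1.0 when variant == orig_modifier (identity)
    penalty:        interpretation string -> count of extraneous
                    nouns/adjectives outside the modifier and head
    """
    scores = defaultdict(float)
    for n1, n2 in decompositions(noun_phrase):
        for q1, conn, q3, q4 in parsed_queries:
            weight = variant_weight.get((n1, q3), 0.0)
            if q1 == n2 and weight > 0.0:
                # Rewrite the source query with the original modifier.
                interp = " ".join(t for t in (q1, conn, n1, q4) if t)
                source = " ".join(t for t in (q1, conn, q3, q4) if t)
                scores[interp] += weight * query_freq.get(source, 0)
    # Rank by increasing penalty; break ties by decreasing frequency score.
    return sorted(scores, key=lambda i: (penalty(i), -scores[i]))
```

With the queries from Figure 1, scoring "plants that grow in zone 7" for "zone 7 plants" would combine 1.0 times the frequency of "plants that grow in zone 7" (identity) with 0.8 times the frequency of "plants that grow in zone 11" (dist-sim variant), matching the worked example above.
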
|
{ |
|
"text": "Sources of Textual Data: The experiments rely on a random sample of around 1 billion fullyanonymized Web search queries in English. The sample is drawn from queries submitted to a generalpurpose Web search engine. Each query is available independently from other queries, and is accompanied by its frequency of occurrence in the query logs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The original form of the modifiers is denoted as orig-phrase. Three types of variant phrases are collected for the purpose of matching modifiers within noun phrases to interpret, with phrases from queries. Relations encoded as Value-Of, Related-Noun and Derivationally-Related relations in WordNet (Fellbaum, 1998) are the source of adj-noun variants. They map around 6,000 adjectives into one or more nouns (e.g., (french\u2192france), (electric\u2192electricity), (aquatic\u2192water)). A repository of distributionally similar phrases, collected in advance (Lin and Wu, 2009) from a sample of around 200 million Web documents in English, is the source of dist-sim variants. For each of around 1 million phrases, the variants consist of their 50 most similar phrases (e.g., art garfunkel\u2192{carly simon, melissa manchester, aaron neville, ..}).", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 314, |
|
"text": "(Fellbaum, 1998)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sources of Variants:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A snapshot of all Wikipedia articles in English, as available in June 2014, is the source of wiki-templ variants. For each of around 50,000 phrases, their wiki-templ variants are collected from Wikipedia categories sharing a common parent Wikipedia category (e.g., \"albums by artist\") and having a common head (\"art garfunkel albums\", \"black sabbath albums\", \"metallica albums\"). The different modifiers (art garfunkel, black sabbath, metallica) that accompany the shared head are collected as variants of one another. Among the four types of variants, wiki-templ variants are applied only when the noun phrase to interpret, and the source Wikipedia category names from which the variants were collected, have the same head. For example, X=art garfunkel\u2192{black sabbath, metallica, 50 cent, ..} is applied only in the context of the noun phrase \"X albums\". method acquires interpretations from queries, for noun phrases from three vocabularies. ListQ is a set of phrases X (e.g., \"aramaic words\") from queries in the form [list of X] , where the frequency of the query [X] is at most 100 times higher than the frequency of the query [list of X], and the frequency of the latter is at least 5. IsA is a set of class labels (e.g., \"academy award nominees\"), originally extracted from Web documents via Hearst patterns (Hearst, 1992) , and associated with at least 25 instances each (e.g., zero dark thirty). WikiC is a set of Wikipedia categories that contain some tokens in lowercase beyond prepositions and determiners, and whose heads are pluralform nouns (e.g., \"french fiction writers\"). Only phrases that are one of the full-length queries from the input set of Web search queries are retained in the respective sets, as vocabularies of noun phrases to interpret; other phrases are discarded. Parameter Settings: The noun phrases to interpret and queries are both part-of-speech tagged (Brants, 2000) . From among candidate interpretations extracted for a noun phrase, interpretations whose penalty score is higher than 1 are discarded. When computing the frequency score of a candidate interpretation as the weighted sum of the frequencies of source queries, the weights assigned to various variants are 1.0, for orig-phrase, adj-noun and wikitempl variants; and the available distributional similarity scores within [0.0, 1.0], for dist-sim variants.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1021, |
|
"end": 1032, |
|
"text": "[list of X]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1315, |
|
"end": 1329, |
|
"text": "(Hearst, 1992)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1889, |
|
"end": 1903, |
|
"text": "(Brants, 2000)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sources of Variants:", |
|
"sec_num": null |
|
}, |
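
The ListQ construction rule and the variant weights above reduce to a few checks; a sketch, with query_freq as an assumed input table mapping query strings to frequencies:

```python
def build_listq(query_freq):
    """Collect phrases X for the ListQ vocabulary: the query [list of X]
    must have frequency at least 5, the query [X] must itself occur in
    the input queries, and its frequency may be at most 100 times that
    of [list of X]."""
    vocabulary = set()
    for query, freq in query_freq.items():
        if query.startswith("list of ") and freq >= 5:
            x = query[len("list of "):]
            if x in query_freq and query_freq[x] <= 100 * freq:
                vocabulary.add(x)
    return vocabulary

# Weights used in the frequency score (Parameter Settings above):
# 1.0 for orig-phrase, adj-noun and wiki-templ variants; the
# distributional similarity score in [0.0, 1.0] for dist-sim variants.
def weight(variant_type, dist_sim_score=None):
    return dist_sim_score if variant_type == "dist-sim" else 1.0
```
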
|
{ |
|
"text": "Relative Coverage: Because it is not feasible to manually compile the exhaustive sets of all string forms of valid interpretations of all (or many) noun phrases, we compute relative instead of absolute coverage. As illustrated in Table 2 , some interpretations are extracted from queries for more than 500,000 of the noun phrases from all input vocabu- A noun phrase is not interpretable if it is in fact an instance (\"new york\", \"alicia keys\") rather than a class; or it is not a properly formed noun phrase (\"watch movies\"); or does not refer to a meaningful class (\"3 significant figures\"). The manual inspection ends, once a sample of 100 noun phrases has been retained. The procedure gives weighted random samples of 100 noun phrases, drawn from each of the ListQ, IsA and WikiC vocabularies. The samples, shown in Table 3 , constitute the gold sets of phrases ListQ, IsA and WikiC, over which precision of interpretations is computed. Note that, since the samples are random, Wikipedia categories that contribute to the automatic construction of wiki-templ variants may be selected as gold phrases in WikiC. This is the case for three of the gold phrases in Wi-kiC.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 237, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 820, |
|
"end": 827, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The top 20 interpretations extracted for each gold phrase are manually annotated with correctness labels. As shown in Table 4 , an interpretation is annotated as: correct and generic, or correct and specific, if relevant; okay, if useful but containing nonessential information; or wrong. To compute the precision score over a gold set of phrases, the correctness labels are converted to numeric values. Precision of a ranked list of extracted interpretations is the average of the correctness values of the interpretations in the list. Table 5 provides a comparison of precision scores at various ranks in the extracted lists of interpretations, as an average over all phrases from a gold set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 125, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 544, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4" |
|
}, |
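
A sketch of this precision computation; the label-to-value mapping {cg: 1.0, cs: 1.0, qq: 0.5, xw: 0.0} follows the scores quoted with Table 4:

```python
# Numeric values of correctness labels (cg=correct generic,
# cs=correct specific, qq=okay, xw=wrong), per Table 4.
VALUE = {"cg": 1.0, "cs": 1.0, "qq": 0.5, "xw": 0.0}

def precision_at(labels, k):
    """Precision at rank k of one ranked list of labeled interpretations:
    the average correctness value of the top-k interpretations."""
    top = labels[:k]
    return sum(VALUE[label] for label in top) / len(top) if top else 0.0

def average_precision_at(gold_label_lists, k):
    """Average of precision@k over all phrases of a gold set, as
    reported in Table 5. gold_label_lists holds one ranked label
    list per gold phrase."""
    return sum(precision_at(ls, k) for ls in gold_label_lists) / len(gold_label_lists)
```
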
|
{ |
|
"text": "Label (Score) \u2192 Examples of (Noun Phrase: Interpretation) cg (1.0) \u2192 (good short stories: short stories that are good), (bay area counties: counties in the bay area), (fourth grade sight words: sight words in fourth grade), (army ranks: ranks from the army), (who wants to be a millionaire winners: winners of who wants to be a millionaire), (us visa: visa for us) cs (1.0) \u2192 (brazilian dances: dances of the brazilian culture), (tsunami charities: charities that gave to the tsunami), (stephen king books: books published by stephen king), (florida insurance companies: insurance companies headquartered in florida), (florida insurance companies: insurance companies operating in florida), (us visa: visa required to enter us) qq (0.5) \u2192 (super smash bros brawl characters: characters meant to be in super smash bros brawl), (carribean islands: islands by the carribean), (pain assessment tool: tool for recording pain assessment) xw (0.0) \u2192 (periodic functions: functions of periodic distributions), (simpsons episodes: episodes left of simpsons), (atm card: card left in wachovia atm) Table 5 : Average precision, at various ranks in the ranked lists of interpretations extracted for noun phrases from various sets of gold phrases At ranks 1, 5 and 20, precision scores vary between 0.770, 0.655 and 0.465 respectively, for the ListQ gold set; and between 0.730, 0.530 and 0.329 respectively, for the IsA gold set. Presence of Relevant Interpretations: Sometimes it is difficult to even manually enumerate as many as 20 distinct, relevant string forms of interpretations for a given noun phrase. Measuring precision at a particular rank (e.g., 20) in a ranked list of interpretations may be too conservative. Table 6 summarizes a different type of scoring metric, namely the presence of any relevant interpretation, among the interpretations extracted up to a particular rank. Relevance is flexibly defined, by requiring the interpretations to have been assigned a certain correctness label, then computing the average number of gold phrases for which such interpretations are present up to a particular rank. When considering only interpretations annotated as correct and generic or correct and specific, in the second row of each vertical noun phrase, multiple bracketings may be possible, each corresponding to a different interpretation. Interpretations extracted from queries do capture such multiple bracketings, even for phrases from the gold sets, as illustrated in Table 7 . Over all noun phrases from the input vocabularies that have some extracted interpretations and contain at least 3 tokens, about 10% (ListQ and IsA) and 5% (WikiC) of the noun phrases have multiple bracketings induced by their top 10 interpretations, as shown in Table 8 . Table 9 shows examples of noun phrases with multiple extracted interpretations that induce identical bracketings, but capture distinct interpretations. Impact of Variants: Variants of modifiers provide alternatives in extracting candidate interpretations, even when the modifiers from the noun phrases are not present in their original form in the interpretations. For example, the adj-noun variant ethiopia of the modifier ethiopian leads to the extraction of the interpretation \"runners from ethiopia\" for the noun phrase \"ethiopian runners\". Similarly, wiki- Table 10 : Impact of various types of variants of modifiers, on the coverage of noun phrase interpretations. 
Computed as the fraction of the top 10 extracted interpretations produced by a particular variant type, and possibly by other variant types (upper portion); or produced only by a particular variant type, and by no other variant types (lower portion) (Vocab=vocabulary of noun phrases) templ variants metallica and 50 cent of the modifier art garfunkel, in the context \"X albums\", allow for the extraction of the interpretation \"albums sold by art garfunkel\" for the noun phrase \"art garfunkel albums\", via the interpretations \"albums sold by metallica\" and \"albums sold by 50 cent\". Table 10 quantifies the impact of various types of variants, on the coverage of noun phrase interpretations. The scores provided for each variant type correspond to either non-exclusive (upper portion of the table) or exclusive (lower portion) contribution of that variant type towards some extracted interpretations. In other words, in the lower portion, the scores capture the fraction of the top 10 interpretations that are produced only by that particular variant type. Three conclusions can be drawn from the results. First, all variant types contribute to increasing coverage, relative to using only orig-phrase variants. Second, dist-sim variants have a particularly strong impact. Third, wiki-templ variants have a strong impact, but only when the contexts from which they were collected match the context of the noun phrase being interpreted. On the WikiC vocabulary in the lower portion of Table 10 , the scores for wiki-templ illustrate the potential that contextual variants have in extracting additional interpretations. Table 11 again quantifies the impact of variant types, but this time on the coverage and, more importantly, accuracy of interpretations extracted for phrases from the gold sets. The scores are computed over the ranked lists of interpretations from the ListQ gold set, as certain types of variants are temporarily disabled in ablation experiments. The upper portion of the Table 11 : Impact of various types of variants of modifiers, on the precision of noun phrase interpretations. Computed over the ListQ gold set, at rank 5 in the ranked lists of extracted interpretations, when various variant types are allowed ( \u221a ) or temporarily not allowed (-) to produce interpretations (O=orig-phrase variant type; A=adj-noun variant type; D=dist-sim variant type; W=wiki-templ variant type; Cvg=number of noun phrases from the gold set with some interpretation(s) produced by the allowed variant types; P@5=precision at rank 5; C@5=average presence of any interpretations annotated as correct and generic (cg) or correct and specific (cs), among interpretations up to rank 5) one of the variant types is enabled. It shows that none of the variant types, taken in isolation, can match what they achieve when combined together, in terms of both coverage and accuracy. The middle portion of the table shows results when all but one of the variant types are enabled. Each of the variant types incrementally contributes to higher coverage and accuracy over the combination of the other variant types. The incremental contribution of wikitempl variants is the smallest. The lower portion of Table 11 gives the incremental contribution of the variant types, relative to using only the orig-phrase variant type. The last row of Table 11 corresponds to all variant types being enabled. 
Discussion: Independently of the choice of the textual data source (e.g., documents, queries) from which interpretations are extracted, a noun phrase is intuitively more difficult to interpret if it is relatively more rare or more complex (i.e., longer). Additional experiments quantify the effect, by measuring the correlation between the presence of some extracted interpretations for an input noun phrase, on one hand; and the frequency of the noun phrase as a query (in Table 12 ), on the other hand. In Figure 2 : Ability to extract interpretations for noun phrases, as a function of the length of noun phrases. Computed as the fraction of noun phrases from an input vocabulary with a particular number of tokens, for which there are some extracted interpretation(s) the effect is visible in that query frequency is higher for noun phrases with some extracted interpretations vs. noun phrases with none. For example, the average query frequency is almost three times higher for the former than for the latter, for the ListQ vocabulary. Similarly, in Figure 2 , a larger fraction of the input noun phrases with a particular number of tokens have some extracted interpretations, when the number of tokens is lower rather than higher. The effect is somewhat less pronounced for, but still applicable to, the WikiC vocabulary, with some extracted interpretations being present for 75%, 71%, 63%, and 37% of the noun phrases containing 2, 3, 4 and 8 tokens respectively. That a larger fraction of the longer noun phrases can be interpreted in the Wi-kiC vocabulary is attributed to the role of wiki-templ variants in extracting interpretations that would otherwise not be available.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1088, |
|
"end": 1095, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1712, |
|
"end": 1719, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 2477, |
|
"end": 2484, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 2749, |
|
"end": 2756, |
|
"text": "Table 8", |
|
"ref_id": "TABREF10" |
|
}, |
|
{ |
|
"start": 2759, |
|
"end": 2766, |
|
"text": "Table 9", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 3321, |
|
"end": 3329, |
|
"text": "Table 10", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 4013, |
|
"end": 4021, |
|
"text": "Table 10", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 4913, |
|
"end": 4921, |
|
"text": "Table 10", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 5047, |
|
"end": 5055, |
|
"text": "Table 11", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 5419, |
|
"end": 5427, |
|
"text": "Table 11", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 6626, |
|
"end": 6634, |
|
"text": "Table 11", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 6761, |
|
"end": 6769, |
|
"text": "Table 11", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 7292, |
|
"end": 7300, |
|
"text": "Table 12", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 7326, |
|
"end": 7334, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 7873, |
|
"end": 7881, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4" |
|
}, |
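
The presence metric of Table 6, sketched in Python: the average, over gold phrases, of whether any interpretation carrying an accepted label appears up to rank k:

```python
def presence_at(gold_label_lists, k, accepted=("cg", "cs")):
    """Fraction of gold phrases with at least one interpretation whose
    correctness label is in `accepted` among the top-k extracted ones."""
    hits = sum(any(label in accepted for label in ls[:k])
               for ls in gold_label_lists)
    return hits / len(gold_label_lists)

# With the annotated ListQ lists, presence_at(lists, 5) would yield the
# 0.830 reported above; accepted=("cs",) would yield 0.360.
```
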
|
{ |
|
"text": "Interpretations from Queries vs. Documents: For completeness, additional experiments evaluate the interpretations extracted from queries, relative to a gold standard introduced in (Hendrickx et al., 2013) . The gold standard consists of a gold set of 181 compound noun phrases (e.g., \"accounting principle\" and \"application software\"), their manually-assembled gold paraphrases (e.g., \"principle of accounting\", \"software to make applications\"), and associated scoring metrics referred to as non-isomorphic and isomorphic. Note that, in comparison to the ListQ, IsA and WikiC evaluation sets, the gold standard in (Hendrickx et al., 2013) may contain relatively less popular gold phrases. As many as 45 gold paraphrases are available per gold phrase on average. They illustrate the difficulty of any attempt to manually assemble exhaustive sets of all strings that are valid interpretations of a noun phrase. For example, the gold paraphrases of the gold phrase blood cell include \"cell that is found in the blood\", but not the arguably equally-relevant \"cell found in the blood\". In addition, more than one human annotators independently provide the same gold paraphrase for only a tenth of all gold paraphrases. See (Hendrickx et al., 2013) for details on the gold standard and scoring metrics. The gold set is added as another input vocabulary to the method proposed here. After inspection of a training set of compound noun phrases also introduced in (Hendrickx et al., 2013), the parameter settings are modified to only retain interpretations whose penalty score is 0.", |
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 204, |
|
"text": "(Hendrickx et al., 2013)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 614, |
|
"end": 638, |
|
"text": "(Hendrickx et al., 2013)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1218, |
|
"end": 1242, |
|
"text": "(Hendrickx et al., 2013)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The isomorphic and non-isomorphic scores reward coverage and accuracy respectively. For the ranked candidate interpretations extracted from queries for the gold set, they are 0.037 and 0.556 respectively. In comparison to previous methods that operate over documents instead of queries, the isomorphic score is much lower for our method (e.g., 0.037 vs. 0.130 (Van de Cruys et al., 2013)). It suggests that queries cannot reliably provide an exhaustive list of all possible strings available in the gold standard for each gold phrase. However, the non-isomorphic score is higher for our method than for the best method operating over documents (i.e., 0.556 vs. 0.548 (Hendrickx et al., 2013) ). In fact, the non-isomorphic score using queries would be 0.745 instead of 0.556, if it were computed over only the 135 gold noun phrases with some extracted interpretations. The results suggests that the method proposed here extracts more accurate interpretations from queries, than previous methods extract from documents. Higher accuracy is preferable in scenarios like Web search, where it is important to accurately trigger structured results. Error Analysis: The relative looseness of the extraction patterns applied to queries causes interpretations containing undesirable tokens to be extracted. In addition, part-of-speech tagging errors lead to interpretations receiving artificially low penalty scores, and therefore being considered to be of higher quality than they should be. For example, phd in the interpretation \"job for phd in chemistry\" is incorrectly tagged as a past participle verb. As a result, the computed penalty score is too low.", |
|
"cite_spans": [ |
|
{ |
|
"start": 644, |
|
"end": 691, |
|
"text": "(i.e., 0.556 vs. 0.548 (Hendrickx et al., 2013)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4" |
|
}, |
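
A sketch of the penalty computation implicated in these tagging errors: nouns and adjectives outside the head and modifier spans are counted from part-of-speech tags; Penn-Treebank-style tags are an assumption here.

```python
def penalty(tagged_interp, head_span, modifier_span):
    """Count nouns/adjectives outside the modifier and head of an
    interpretation. tagged_interp is a list of (token, pos) pairs;
    head_span and modifier_span are (start, end) token index ranges."""
    inside = set(range(*head_span)) | set(range(*modifier_span))
    return sum(1 for i, (_, pos) in enumerate(tagged_interp)
               if i not in inside and pos[:2] in ("NN", "JJ"))

# "job for phd in chemistry" as an interpretation of "chemistry job":
tagged = [("job", "NN"), ("for", "IN"), ("phd", "VBN"),  # mistagged
          ("in", "IN"), ("chemistry", "NN")]
print(penalty(tagged, head_span=(0, 1), modifier_span=(4, 5)))
# -> 0; with the correct tag ("phd", "NN") the penalty would be 1.
```
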
|
{ |
|
"text": "Occasionally, the presence of additional tokens within an interpretation is harmless (e.g., \"issues of controversy in society\" for \"controversial issues\", \"foods allowed on a high protein low carb diet\" for \"high protein low carb foods\"), if not necessary (e.g., \"dances with brazilian origin\" for \"brazilian dances\", \"artists of the surrealist movement\" for \"surrealist artists\", \"options with weekly expirations\" for \"weekly options\"). But often it leads to incorrect interpretations (e.g., \"towns of alaska map\" for \"alaska towns\", \"processes in chemical vision\" for \"chemical processes\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Variants of modifiers occasionally lead to incorrect interpretations for a noun phrase, even if the interpretations may be correct for the individual variants. The phenomenon is an instance of semantic drift, wherein variants do share many properties but still diverge in others. Examples are \"words that are bleeped similarly\" extracted for \"bleeped words\" via the variant bleeped\u2192spelled. Separately, linguistic constructs that negate or at least alter the desired meaning affect the understanding of text in general and also affect the extraction of interpretations in particular. Examples are \"heaters with no electricity\" for \"electric heaters\", and \"animal that used to be endangered\" for \"endangered animal\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Relevant interpretations extracted from queries act as a potential bridge between facts, on one hand, and class labels, on the other hand, available for instances. The former might be inferred from the latter and vice versa. There are two previous studies that are relevant to the task of extracting facts from existing noun phrases. First, (Yahya et al., 2014) extract facts for attributes of instances, without requiring the presence of the verbal predicates usu-ally employed (Fader et al., 2011) in open-domain information extraction. Second, in (Nastase and Strube, 2008) , relations encoded implicitly within Wikipedia categories are converted into explicit relations. As an example, the relation <deconstructing harry, directed, woody allen> is obtained from the fact that deconstructing harry is listed under \"movies directed by woody allen\" in Wikipedia. The method in (Nastase and Strube, 2008) relies on manually-compiled knowledge, and does not attempt to interpret compound noun phrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 361, |
|
"text": "(Yahya et al., 2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 499, |
|
"text": "(Fader et al., 2011)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 539, |
|
"end": 576, |
|
"text": "Second, in (Nastase and Strube, 2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Since relevant interpretations paraphrase the noun phrases which they interpret, a related area of research is paraphrase acquisition (Madnani and Dorr, 2010; Ganitkevitch et al., 2013) . Previous methods for the acquisition of paraphrases of compound noun phrases (Kim and Nakov, 2011; Van de Cruys et al., 2013) operate over documents, and may rely on text analysis tools including syntactic parsing (Nakov and Hearst, 2013) . In contrast, the method proposed here extracts interpretations from queries, and applies part of speech tagging. Queries were used as a textual data source in other tasks in open-domain information extraction (Jain and Pennacchiotti, 2010; Pantel et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 158, |
|
"text": "(Madnani and Dorr, 2010;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 185, |
|
"text": "Ganitkevitch et al., 2013)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 286, |
|
"text": "(Kim and Nakov, 2011;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 313, |
|
"text": "Van de Cruys et al., 2013)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 426, |
|
"text": "(Nakov and Hearst, 2013)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 638, |
|
"end": 668, |
|
"text": "(Jain and Pennacchiotti, 2010;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 689, |
|
"text": "Pantel et al., 2012)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Interpretations extracted from queries explain the roles that modifiers play within longer noun phrases. Current work explores the interpretation of noun phrases containing multiple modifiers (e.g., \"(french) M 1 ( healthcare) M 2 (companies) H \" by separately interpreting \"(french) M 1 (companies) H \" and \"(healthcare) M 2 (companies) H \"); the grouping of lexically different but semantically equivalent interpretations (e.g., \"dances of brazilian origin\", \"dances from brazil\"); the collection of more variants from Wikipedia and other resources; the incorporation of variants of heads (physicists\u2192scientists for interpreting the phrase \"belgian physicists\"), which likely need to be more conservatively applied than for modifiers; and the use of query sessions, as an alternative to sets of disjoint queries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The paper benefits from comments from Jutta Degener, Mihai Surdeanu and Susanne Riehemann. Data extracted by Haixun Wang and Jian Li is the source of the IsA vocabulary of noun phrases used in the evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Open information extraction from the Web", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Banko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Cafarella", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Broadhead", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "224--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Banko, Michael J Cafarella, S. Soderland, M. Broad- head, and O. Etzioni. 2007. Open information ex- traction from the Web. In Proceedings of the 20th In- ternational Joint Conference on Artificial Intelligence (IJCAI-07), pages 2670-2676, Hyderabad, India. T. Brants. 2000. TnT -a statistical part of speech tagger. In Proceedings of the 6th Conference on Applied Natu- ral Language Processing (ANLP-00), pages 224-231, Seattle, Washington.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Websets: Extracting sets of entities from the Web using unsupervised information extraction", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Callan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 5th ACM Conference on Web Search and Data Mining (WSDM-12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "243--252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Dalvi, W. Cohen, and J. Callan. 2012. Websets: Ex- tracting sets of entities from the Web using unsuper- vised information extraction. In Proceedings of the 5th ACM Conference on Web Search and Data Mining (WSDM-12), pages 243-252, Seattle, Washington.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "On the creation and use of English compound nouns. Language", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Downing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "", |
|
"volume": "53", |
|
"issue": "", |
|
"pages": "810--842", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Downing. 1977. On the creation and use of English compound nouns. Language, 53:810-842.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Identifying relations for open information extraction", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Fader", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP-11)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1535--1545", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Fader, S. Soderland, and O. Etzioni. 2011. Identifying relations for open information extraction. In Proceed- ings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP-11), pages 1535-1545, Edinburgh, Scotland.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "WordNet: An Electronic Lexical Database and Some of its Applications", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Fellbaum, editor. 1998. WordNet: An Electronic Lexi- cal Database and Some of its Applications. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Two is bigger (and better) than one: the Wikipedia Bitaxonomy project", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Flati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Vannella", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Pasini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL-14)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "945--955", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Flati, D. Vannella, T. Pasini, and R. Navigli. 2014. Two is bigger (and better) than one: the Wikipedia Bitaxonomy project. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (ACL-14), pages 945-955, Balti- more, Maryland.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "PPDB: The paraphrase database", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Association for Computational Linguistics (NAACL-HLT-13)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "758--764", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Ganitkevitch, B. Van Durme, and C. Callison-Burch. 2013. PPDB: The paraphrase database. In Proceed- ings of the 2013 Conference of the North American Association for Computational Linguistics (NAACL- HLT-13), pages 758-764, Atlanta, Georgia.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automatic acquisition of hyponyms from large text corpora", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hearst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING-92)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "539--545", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th In- ternational Conference on Computational Linguistics (COLING-92), pages 539-545, Nantes, France.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "SemEval-2013 task 4: Free paraphrases of noun compounds", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Hendrickx", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Kozareva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "S\u00e9aghdha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Szpakowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Veale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval-14)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "138--143", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Hendrickx, Z. Kozareva, P. Nakov, D.\u00d3 S\u00e9aghdha, S. Szpakowicz, and T. Veale. 2013. SemEval-2013 task 4: Free paraphrases of noun compounds. In Pro- ceedings of the 7th International Workshop on Seman- tic Evaluation (SemEval-14), pages 138-143, Atlanta, Georgia.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "YAGO2: a spatially and temporally enhanced knowledge base from Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hoffart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Suchanek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Berberich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Artificial Intelligence Journal. Special Issue on Artificial Intelligence, Wikipedia and Semi-Structured Resources", |
|
"volume": "194", |
|
"issue": "", |
|
"pages": "28--61", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Hoffart, F. Suchanek, K. Berberich, and G. Weikum. 2013. YAGO2: a spatially and temporally enhanced knowledge base from Wikipedia. Artificial Intelli- gence Journal. Special Issue on Artificial Intelligence, Wikipedia and Semi-Structured Resources, 194:28-61.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Open entity extraction from Web search query logs", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Pennacchiotti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Com", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Jain and M. Pennacchiotti. 2010. Open entity ex- traction from Web search query logs. In Proceed- ings of the 23rd International Conference on Com-", |
|
"links": null |
|
},

"BIBREF11": {

"ref_id": "b11",

"title": "TnT -a statistical part of speech tagger",

"authors": [

{

"first": "T",

"middle": [],

"last": "Brants",

"suffix": ""

}

],

"year": 2000,

"venue": "Proceedings of the 6th Conference on Applied Natural Language Processing (ANLP-00)",

"volume": "",

"issue": "",

"pages": "224--231",

"other_ids": {},

"num": null,

"urls": [],

"raw_text": "T. Brants. 2000. TnT -a statistical part of speech tagger. In Proceedings of the 6th Conference on Applied Natural Language Processing (ANLP-00), pages 224-231, Seattle, Washington.",

"links": null

}
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "supreme court justices)H born in (california)M (plants)H that grow in (zone 7)M (animals)H living in (water)M (animals)H who live in (water)M (animals)H living in (water)M (justices)H of the (california supreme court)M (animals)H who live in (water)Overview of extraction of interpretations of noun phrases from Web search queries" |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Noun phrase</td><td>Source queries</td></tr><tr><td>aquatic animals Noun phrase</td><td>animals living in coral reef animals living in freshwater animals living in water</td></tr><tr><td>water animals</td><td>Source queries</td></tr><tr><td/><td>animals who live in water</td></tr><tr><td>Noun phrase</td><td>Source queries</td></tr><tr><td>zone 7 plants</td><td>plants that grow in zone 7 plants that grow in zone 11</td></tr><tr><td/><td>Source queries</td></tr><tr><td/><td>justices of the california supreme court</td></tr><tr><td>Noun phrase california supreme court justices</td><td>justices of the australian high court justices of the washington state supreme court justices of the warren court Source queries</td></tr><tr><td colspan=\"2\">Candidate interpretations for noun phrases</td></tr><tr><td>aquatic animals</td><td/></tr><tr><td>water animals</td><td/></tr><tr><td>zone 7 plants</td><td/></tr><tr><td>california supreme court justices</td><td/></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "supreme court justices born in new jersey justices of the vermont supreme court justices of the warren court supreme court justices from new hampshire justices of the australian high court justices of the california supreme court" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Extraction patterns matched against queries to</td></tr><tr><td>identify candidate interpretations (H, M =head and mod-</td></tr><tr><td>ifier of a hypothetical noun phrase)</td></tr><tr><td>passive, prepositional or relative-pronoun construct,</td></tr><tr><td>followed by another ngram M , and optionally fol-</td></tr><tr><td>lowed by other tokens. The ngrams H and M</td></tr><tr><td>contain one or more tokens. The patterns effec-</td></tr><tr><td>tively split matching queries into four consecutive</td></tr><tr><td>sequences of tokens Q=[Q 1 Q 2 Q 3 Q 4 ], where H</td></tr><tr><td>and M correspond to Q 1 and Q 3 , and Q 4 may be</td></tr><tr><td>empty. For example, the pattern in the lower portion</td></tr><tr><td>of Table 1 matches the query \"(plants) H that grow</td></tr><tr><td>in (zone 7) M \", which is one of the queries shown in</td></tr><tr><td>the upper portion of Figure 1.</td></tr><tr><td>Mapping Noun Phrases to Interpretations: Each</td></tr><tr><td>noun phrase to interpret is split into all possible de-</td></tr><tr><td>compositions of two consecutive sequences of to-</td></tr><tr><td>kens</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>over noun phrases from various vocabularies (R=number</td></tr><tr><td>of raw noun phrases; Q=subset of noun phrases from R</td></tr><tr><td>that are queries; I=subset of noun phrases from Q with</td></tr><tr><td>some extracted interpretation(s); I/Q=fraction of noun</td></tr><tr><td>phrases from Q that are present in I)</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "Relative coverage of noun phrase interpretation," |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Gold Set: Sample of Noun Phrases ListQ: 1911 pistols, 2009 movies, alabama sororities, alaskan towns, american holidays, aramaic words, argumentative essays, arm loans, army ranks, .., yugioh movies IsA: academy award nominees, addicting games, advanced weapons systems, android tablet, application layer protocols, astrological signs, automotive parts, .., zip code WikiC: 2k sports games, aaliyah songs, advertising slogans, airline tickets, alan jackson songs, ancient romans, andrea bocelli albums, athletic shoes, .., wii accessories" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Gold sets of 100 noun phrases per vocabulary laries, or around 70% of all input noun phrases." |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"5\">: Examples of interpretations manually anno-</td></tr><tr><td colspan=\"5\">tated with each correctness label (cg=correct generic;</td></tr><tr><td colspan=\"4\">cs=correct specific; qq=okay; xw=incorrect)</td><td/></tr><tr><td>Gold Set</td><td colspan=\"2\">Precision@N</td><td/><td/></tr><tr><td>@1</td><td>@3</td><td>@5</td><td>@10</td><td>@20</td></tr><tr><td colspan=\"5\">ListQ 0.770 0.708 0.655 0.568 0.465</td></tr><tr><td colspan=\"5\">IsA 0.730 0.598 0.530 0.423 0.329</td></tr><tr><td colspan=\"5\">WikiC 0.780 0.647 0.561 0.455 0.357</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "" |
|
}, |
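The Precision@N figures in TABREF7 above read as macro-averaged precision over ranked interpretation lists. The following is a minimal sketch under an assumption not stated in the paper, namely that each noun phrase comes with a rank-ordered list of correct/incorrect judgments; the function name is invented for illustration.

```python
def precision_at_n(ranked_judgments, n):
    """Macro-averaged Precision@N.

    `ranked_judgments` maps each noun phrase to a list of booleans, one per
    extracted interpretation in rank order (True = judged correct). The
    paper may normalize short lists differently; min(n, len) is one choice.
    """
    per_phrase = [
        sum(judgments[:n]) / min(n, len(judgments))
        for judgments in ranked_judgments.values()
        if judgments  # skip phrases with no extracted interpretations
    ]
    return sum(per_phrase) / len(per_phrase)
```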
|
"TABREF8": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Average of scores indicating the presence or ab-</td></tr><tr><td>sence of any interpretations annotated with a correctness</td></tr><tr><td>label from a particular subset of correctness labels. Com-</td></tr><tr><td>puted over interpretations extracted up to various ranks</td></tr><tr><td>in the ranked lists of extracted interpretations (cg=correct</td></tr><tr><td>generic; cs=correct specific; qq=okay)</td></tr><tr><td>Noun Phrase \u2192 Multiple-Bracketing Interpretations</td></tr><tr><td>african american women writers \u2192 (writers)H who wrote</td></tr><tr><td>about (african american)M women, (women writers)H who</td></tr><tr><td>are (african american)M , (writers)H who cover (african</td></tr><tr><td>american)M women struggles</td></tr><tr><td>chinese traditional instruments \u2192 (traditional instruments)H</td></tr><tr><td>of (china)M , (instruments)H used in (chinese traditional)M</td></tr><tr><td>music</td></tr><tr><td>elementary math manipulatives \u2192 (manipulatives)H</td></tr><tr><td>for (elementary math)M , (math manipulatives)H in the</td></tr><tr><td>(elementary)M classroom, (manipulatives)H used in (ele-</td></tr><tr><td>mentary math)M , (math manipulatives)H for (elementary)M</td></tr><tr><td>level</td></tr><tr><td>global corporate tax rates \u2192 (corporate tax rates)H around</td></tr><tr><td>the (world)M , (tax rates)H on (global corporate)M profits</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "" |
|
}, |
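The averaged 0/1 scores described in the TABREF8 caption amount to a hit-at-rank measure. A minimal sketch, assuming interpretations arrive as ranked lists of correctness labels; label_hit_at_rank is a hypothetical helper, not code from the paper.

```python
def label_hit_at_rank(ranked_labels, label_subset, n):
    """Average over noun phrases of a binary score: 1 if any of the top-n
    interpretations of a phrase carries a label in `label_subset`
    (e.g., {"cg", "cs"} for correct generic or correct specific), else 0.

    `ranked_labels` maps each noun phrase to its interpretations'
    correctness labels in rank order.
    """
    scores = [
        1 if any(label in label_subset for label in labels[:n]) else 0
        for labels in ranked_labels.values()
    ]
    return sum(scores) / len(scores)
```

Computed for various label subsets and ranks, this quantity would populate the table the caption describes, one row per subset and one column per rank.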
|
"TABREF9": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Sample of noun phrases from the ListQ gold</td></tr><tr><td>set, whose top 10 extracted interpretations induce mul-</td></tr><tr><td>tiple pairs of a head and a modifier of the noun phrases</td></tr><tr><td>(H=head; M=modifier)</td></tr><tr><td>portion in Table 6, the scores at rank 5 are 0.830 for</td></tr><tr><td>ListQ, 0.790 for IsA and 0.860 for WikiC. Alterna-</td></tr><tr><td>tively, in the fourth rows of each vertical portion, the</td></tr><tr><td>scores at rank 5 are 0.360, 0.350 and 0.370 respec-</td></tr><tr><td>tively. The scores indicate that at least one of the top</td></tr><tr><td>5 interpretations is correct and specific for about a</td></tr><tr><td>third of the noun phrases in the gold sets.</td></tr><tr><td>Induced Modifiers, Heads and Interpretations:</td></tr><tr><td>When a candidate interpretation is extracted for a</td></tr><tr><td>noun phrase, the interpretation effectively induces</td></tr><tr><td>a particular bracketing over the noun phrase, as it</td></tr><tr><td>splits it into a modifier and a head. For an ambiguous</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "" |
|
}, |
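The induced-bracketing observation in the recovered text above (each interpretation splits the noun phrase into a modifier and a head) can be made concrete with a short sketch; induced_bracketings is a hypothetical helper, not code from the paper.

```python
from collections import defaultdict

def induced_bracketings(noun_phrase, interpretations):
    """Group extracted interpretations by the (modifier, head) pair they
    induce over the noun phrase. More than one key in the result means the
    interpretations induce multiple bracketings, i.e., the phrase is
    structurally ambiguous.

    `interpretations` is a list of (head, construct, modifier) triples,
    e.g., ("plants", "that grow in", "zone 7").
    """
    groups = defaultdict(list)
    tokens = noun_phrase.split()
    for head, construct, modifier in interpretations:
        # Keep interpretations whose modifier and head exactly tile the
        # noun phrase, modifier first: "M H" == noun phrase.
        if (modifier + " " + head).split() == tokens:
            groups[(modifier, head)].append(
                "(%s)H %s (%s)M" % (head, construct, modifier))
    return dict(groups)
```

For "elementary math manipulatives", interpretations with head "manipulatives" (modifier "elementary math") and head "math manipulatives" (modifier "elementary") land under two different keys, mirroring the multiple-bracketing examples tabulated in TABREF8 above.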
|
"TABREF10": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Noun Phrase \u2192 Extracted Interpretations</td></tr><tr><td>beatles songs \u2192 (songs)H sung by the (beatles)M , (songs)H</td></tr><tr><td>about the (beatles)M</td></tr><tr><td>company accounts \u2192 (accounts)H maintained by the</td></tr><tr><td>(company)M , (accounts)H owed to a (company)M</td></tr><tr><td>florida insurance companies \u2192 (insurance companies)H</td></tr><tr><td>headquartered in (florida)M , (insurance companies)H insur-</td></tr><tr><td>ing in (florida)M</td></tr><tr><td>german food \u2192 (food)H eaten in (germany)M , (food)H</td></tr><tr><td>produced in (germany)M , (food)H that originated in</td></tr><tr><td>(germany)M</td></tr><tr><td>math skills \u2192 (skills)H needed for (math)M , (skills)H</td></tr><tr><td>learned in (math)M , (skills)H gained from studying (math)M</td></tr><tr><td>michael jackson song \u2192 (song)H written by (michael</td></tr><tr><td>jackson)M , (song)H sung by (michael jackson)M , (song)H</td></tr><tr><td>about (michael jackson)M</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "Fraction of noun phrases that have some extracted interpretation(s) and contain at least 3 tokens, whose interpretations induce multiple (rather than single) bracketings over interpreted noun phrases. The presence of multiple bracketings for a noun phrase is equivalent to the presence of multiple pairs of a head and a modifier, as induced by the top 10 interpretations extracted for the noun phrase" |
|
}, |
|
"TABREF11": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Sample of alternative relevant interpretations extracted among the top 20 interpretations for noun phrases from the ListQ gold set (H=head; M=modifier)" |
|
}, |
|
"TABREF13": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Variant Types Impact on Precision</td></tr><tr><td>O A D W Cvg P@5 C@5 \u221a ---74 0.433 0.581 -\u221a --16 0.474 0.562 --\u221a -66 0.478 0.651 ---\u221a 2 0.166 0.500 -\u221a \u221a \u221a 73 0.484 0.657 \u221a -\u221a \u221a 97 0.641 0.835 \u221a \u221a -\u221a 83 0.448 0.590 \u221a \u221a \u221a -99 0.649 0.828 \u221a ---74 0.433 0.581 \u221a \u221a --81 0.453 0.592 \u221a -\u221a -96 0.635 0.833 \u221a --\u221a 76 0.429 0.578 \u221a \u221a \u221a \u221a 100 0.655 0.830</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "table shows results when only" |
|
}, |
|
"TABREF14": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Vocabulary</td><td colspan=\"2\">Noun Phrases</td></tr><tr><td/><td colspan=\"2\">With : Without Interpretation(s)</td></tr><tr><td/><td colspan=\"2\">I : \u00acI AI : A\u00acI MI : M\u00acI</td></tr><tr><td colspan=\"2\">ListQ 2.14 : 1 2.93 : 1</td><td>2.65 : 1</td></tr><tr><td colspan=\"2\">IsA 2.31 : 1 5.76 : 1</td><td>3.26 : 1</td></tr><tr><td colspan=\"2\">WikiC 2.60 : 1 3.72 : 1</td><td>3.63 : 1</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF15": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"9\">: Correlation between coverage, measured as the</td></tr><tr><td colspan=\"10\">presence of some extracted interpretation(s) for a noun</td></tr><tr><td colspan=\"10\">phrase, on one hand; and frequency of the noun phrase</td></tr><tr><td colspan=\"10\">as a query, on the other hand (I=number of noun phrases</td></tr><tr><td colspan=\"10\">that are queries and have some extracted interpretation(s);</td></tr><tr><td colspan=\"10\">\u00acI=number of noun phrases that are queries and do not</td></tr><tr><td colspan=\"10\">have any extracted interpretation(s); A=average query</td></tr><tr><td colspan=\"10\">frequency of noun phrases as queries; M=median query</td></tr><tr><td colspan=\"8\">frequency of noun phrases as queries)</td><td/><td/></tr><tr><td>Fraction of noun phrases</td><td>0.000 0.001 0.004 0.010 0.020 0.050 0.100 0.200 0.400 0.800</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9 10 ListQ IsA WikiC</td></tr><tr><td/><td/><td colspan=\"8\">Number of tokens in noun phrase</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "" |
|
} |
|
} |
|
} |
|
} |