{ "paper_id": "D14-1038", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:54:13.566185Z" }, "title": "ReNoun: Fact Extraction for Nominal Attributes", "authors": [ { "first": "Mohamed", "middle": [], "last": "Yahya", "suffix": "", "affiliation": {}, "email": "myahya@mpi-inf.mpg.de" }, { "first": "Steven", "middle": [ "Euijong" ], "last": "Whang", "suffix": "", "affiliation": {}, "email": "swhang@google.com" }, { "first": "Rahul", "middle": [], "last": "Gupta", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Alon", "middle": [], "last": "Halevy", "suffix": "", "affiliation": {}, "email": "halevy@google.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Search engines are increasingly relying on large knowledge bases of facts to provide direct answers to users' queries. However, the construction of these knowledge bases is largely manual and does not scale to the long and heavy tail of facts. Open information extraction tries to address this challenge, but typically assumes that facts are expressed with verb phrases, and therefore has had difficulty extracting facts for noun-based relations. We describe ReNoun, an open information extraction system that complements previous efforts by focusing on nominal attributes and on the long tail. ReNoun's approach is based on leveraging a large ontology of noun attributes mined from a text corpus and from user queries. ReNoun creates a seed set of training data by using specialized patterns and requiring that the facts mention an attribute in the ontology. ReNoun then generalizes from this seed set to produce a much larger set of extractions that are then scored. We describe experiments that show that we extract facts with high precision and for attributes that cannot be extracted with verb-based techniques.", "pdf_parse": { "paper_id": "D14-1038", "_pdf_hash": "", "abstract": [ { "text": "Search engines are increasingly relying on large knowledge bases of facts to provide direct answers to users' queries. However, the construction of these knowledge bases is largely manual and does not scale to the long and heavy tail of facts. Open information extraction tries to address this challenge, but typically assumes that facts are expressed with verb phrases, and therefore has had difficulty extracting facts for noun-based relations. We describe ReNoun, an open information extraction system that complements previous efforts by focusing on nominal attributes and on the long tail. ReNoun's approach is based on leveraging a large ontology of noun attributes mined from a text corpus and from user queries. ReNoun creates a seed set of training data by using specialized patterns and requiring that the facts mention an attribute in the ontology. ReNoun then generalizes from this seed set to produce a much larger set of extractions that are then scored. We describe experiments that show that we extract facts with high precision and for attributes that cannot be extracted with verb-based techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "One of the major themes driving the current evolution of search engines is to make the search experience more efficient and mobile friendly for users by providing them concrete answers to queries. 
These answers, which apply to queries about entities that the search engine knows about (e.g., famous individuals, organizations, or locations), complement the links that the search engine typically returns (Sawant and Chakrabarti, 2013; Singhal, 2012; Yahya et al., 2012) . To support such answers, the search engine maintains a knowledge base that describes various attributes of an entity (e.g., (Nicolas Sarkozy, wife, Carla Bruni) ). Upon receiving a query, the search engine tries to recognize whether the answer is in its knowledge base.", "cite_spans": [ { "start": 456, "end": 485, "text": "(Sawant and Chakrabarti, 2013;", "ref_id": "BIBREF19" }, { "start": 486, "end": 500, "text": "Singhal, 2012;", "ref_id": "BIBREF20" }, { "start": 501, "end": 520, "text": "Yahya et al., 2012)", "ref_id": "BIBREF22" }, { "start": 647, "end": 683, "text": "(Nicolas Sarkozy, wife, Carla Bruni)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the most part, the aforementioned knowledge bases are constructed using manual techniques and carefully supervised information extraction algorithms. As a result, they obtain high coverage on head attributes, but low coverage on tail ones, such as those shown in Table 1 . For example, they may have the answer for the query \"Sarkozy's wife\", but not for \"Hollande's ex-girlfriend\" or \"Google's philanthropic arm\". In addition to broadening the scope of query answering, extending the coverage of the knowledge base to long tail attributes can also facilitate providing Web answers to the user. Specifically, the search engine can use lower-confidence facts to corroborate an answer that appears in text in one of the top Web results and highlight it to the user. This paper describes ReNoun, an open-information extraction system that focuses on extracting facts for long tail attributes. The observation underlying our approach is that attributes from the long tail are typically expressed as nouns, whereas most previous work on open-information extraction (e.g., Mausam et al. (2012) ) extends techniques for extracting attributes expressed in verb form. Hence, the main contribution of our work is to develop an extraction system that complements previous efforts, focuses on nominal attributes, and is effective for the long tail. To that end, ReNoun begins with a large but imperfect ontology of nominal attributes that is extracted from text and the query stream (Gupta et al., 2014) . ReNoun proceeds by using a small set of high-precision extractors that exploit the nominal nature of the attributes to obtain a training set, and then generalizes from the training set via distant supervision to find a much larger set of extraction patterns. Finally, ReNoun scores extracted facts by considering how frequently their patterns extract triples and the coherence of these patterns, i.e., whether they extract triples for semantically similar attributes. Table 1 : Examples of noun phrases as attributes, none of which are part of a verb phrase. Additionally, the first two attributes do not occur within a verb phrase in a large corpus (see \u00a7 2 for details) in a setting where they can be associated with a triple. (Sample row: staff writer -(The New Yorker, staff writer, Adam Gopnik) -\"Adam Gopnik is one of The New Yorker's best staff writers.\")
Our experiments demonstrate that ReNoun extracts a large body of high-precision facts, and that these facts cannot be extracted with techniques based on verb phrases.", "cite_spans": [ { "start": 1071, "end": 1091, "text": "Mausam et al. (2012)", "ref_id": "BIBREF11" }, { "start": 1473, "end": 1493, "text": "(Gupta et al., 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 267, "end": 274, "text": "Table 1", "ref_id": null }, { "start": 1706, "end": 1713, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of ReNoun is to extract triples of the form (S,A,O), where S is the subject, A is the attribute, and O is the object. In our setting, the attribute is always a noun phrase. We refer to the subject and object as the arguments of the attribute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "ReNoun takes as input a set of attributes, which can be collected using the methods described in Gupta et al. (2014) , Lee et al. (2013) , and Pasca and van Durme (2007) . In this work, we use Biperpedia (Gupta et al., 2014) , which is an ontology of nominal attributes automatically extracted from Web text and user queries. For every attribute, Biperpedia supplies the Freebase (Bollacker et al., 2008) domain type (e.g., whether the attribute applies to people, organizations or hotels). Since the attributes themselves are the result of an extraction algorithm, they may include false positives (i.e., attributes that do not make sense).", "cite_spans": [ { "start": 97, "end": 116, "text": "Gupta et al. (2014)", "ref_id": "BIBREF7" }, { "start": 119, "end": 136, "text": "Lee et al. (2013)", "ref_id": "BIBREF10" }, { "start": 139, "end": 169, "text": "and Pasca and van Durme (2007)", "ref_id": "BIBREF18" }, { "start": 204, "end": 224, "text": "(Gupta et al., 2014)", "ref_id": "BIBREF7" }, { "start": 380, "end": 404, "text": "(Bollacker et al., 2008)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "The focus of ReNoun is on attributes whose values are concrete objects (e.g., wife, protege, chief economist). Other classes of attributes that we do not consider in this work are (1) numeric attributes (e.g., population, GDP), which are better extracted from Web tables (Cafarella et al., 2008) , and (2) vague attributes (e.g., culture, economy), whose value is a narrative that would not fit the current mode of query answering on search engines.", "cite_spans": [ { "start": 258, "end": 282, "text": "(Cafarella et al., 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "We make the distinction between the fat head and long tail of attributes. To define these two sets, we ordered the attributes in decreasing order of the number of occurrences in the corpus 1 . We defined the fat head to be the attributes up to the point N in the ordering such that the total number of occurrences of the attributes before N equaled the total number of occurrences of the attributes after N . In our news corpus, the fat head included 218 attributes (i.e., N = 218) and the long tail included 60K attributes.
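To make the split concrete, the following is a minimal sketch of the computation just described (the function and variable names are illustrative, and the footnote's weighting of occurrences by semantic classes is omitted):

```python
from typing import Dict, List

def split_point(occurrences: Dict[str, int]) -> int:
    """Return N such that the attributes ranked before N account for
    half of all attribute occurrences in the corpus."""
    # Order attributes by decreasing number of occurrences.
    counts: List[int] = sorted(occurrences.values(), reverse=True)
    total = sum(counts)
    running = 0
    for n, c in enumerate(counts, start=1):
        running += c
        # Stop once the mass before N first reaches the mass after N.
        if running >= total - running:
            return n
    return len(counts)

# On counts shaped like our news corpus, the fat head would be the
# attributes ranked 1..218 and the long tail everything after.
```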
Table 2 shows examples from both.", "cite_spans": [], "ref_spans": [ { "start": 532, "end": 539, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "Fat head: daughter, headquarters, president, spokesperson. Long tail: chief economist, defender, philanthropic arm, protege. Table 2 : Examples of fat head and long tail attributes.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Preliminaries", "sec_num": null }, { "text": "The output of ReNoun is a set of facts, where each fact could be generated by multiple extractions. We store the provenance of each extraction and the number of times each fact was extracted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": null }, { "text": "ReNoun's goal is to extract facts for attributes expressed as noun phrases. A natural question is whether we can exploit prior work on open information extraction, which focused on extracting relations expressed as verbs. For example, if we can extract facts for the attribute advised or is advisor of, we can populate the noun attribute advisor with the same facts. In Section 7.2 we demonstrate that this approach is limited for several reasons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun versus verb attributes", "sec_num": null }, { "text": "First, attributes in knowledge bases are typically expressed as noun phrases. Table 3 shows that the vast majority of the attributes in both Freebase and DBpedia (Auer et al., 2007) are expressed as nouns even for the fat head (and even more so for the long tail). (Table 3 : Percentage of attributes expressed as noun phrases among the 100 attributes with the most facts. Freebase: 97% nouns, 3% verbs; DBpedia: 96% nouns, 4% verbs.) Hence, if we extract the verb form of attributes we would need to translate them into noun form, which would require us to solve the paraphrasing problem and introduce more sources of error (Madnani and Dorr, 2010) . Second, as we dig deeper into the long tail, attributes tend to be expressed in text more in noun form than in verb form. One of the reasons is that the attribute names tend to get longer and therefore unnatural to express as verbs (e.g., chief privacy officer, automotive division). Finally, there is often a subtle difference in meaning between verb forms and noun forms of attributes. For example, it is common to see the phrase \"Obama advised Merkel on saving the Euro,\" but that would not necessarily mean we want to say that Obama is an advisor of Angela Merkel, in the common sense of advisor.", "cite_spans": [ { "start": 334, "end": 353, "text": "(Auer et al., 2007)", "ref_id": "BIBREF0" }, { "start": 627, "end": 651, "text": "(Madnani and Dorr, 2010)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 78, "end": 268, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Noun versus verb attributes", "sec_num": null }, { "text": "ReNoun extracts facts from a large corpus of 400M news articles.
We exploit rich syntactic and linguistic cues by processing these documents with a natural language processing pipeline comprising dependency parsing, noun phrase chunking, named entity recognition, coreference resolution, and entity resolution to Freebase. The chunker identifies nominal mentions in the text that include our attributes of interest. As discussed later in the paper, we exploit the dependency parses, coreference, and entity resolution heavily during various stages of our pipeline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Processed document corpus", "sec_num": null }, { "text": "Since ReNoun aims at extracting triples for attributes not present in head-heavy knowledge bases, one key challenge is that we do not have any labeled data (i.e., known facts) for such attributes, especially in the long tail. Therefore, ReNoun has an initial seed fact extraction step that automatically generates a small corpus of relatively precise seed facts for all attributes, so that distant supervision can be employed. The second big challenge is to filter the noise from the resulting extractions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of ReNoun", "sec_num": "3" }, { "text": "ReNoun's extraction pipeline, shown in Figure 1, is composed of four stages. Seed fact extraction: We begin by extracting a small number of high-precision facts for our attributes. For this step, we rely on manually specified lexical patterns that are specifically tailored for noun phrases, but are general enough to be independent of any specific attributes. When applying such patterns, we exploit coreference to make the generated seed facts more precise by requiring the attribute and object noun phrases of a seed fact to refer to the same real-world entity. This is elaborated further in Section 4. Extraction pattern generation: Utilizing the seed facts, we use distant supervision (Mintz et al., 2009) to learn a set of dependency parse patterns that are used to extract many more facts from the text corpus. Candidate generation: We apply the learned dependency parse patterns from the previous stage to generate a much larger set of extractions. We aggregate all the extractions that give rise to the same fact and store with them the provenance of the extraction. The extractions generated here are called candidates because they are assigned scores that determine how they are used. The application consuming an extraction can decide whether to discard the extraction or use it, and in the latter case the manner in which it is used, based on the scores we attach to it and the application's precision requirements. Scoring: In the final stage, we score the facts, reflecting our confidence in their correctness. Intuitively, we give a pattern a high score if it extracts many facts that have semantically similar attributes, and then propagate this score to the facts extracted by the pattern (Section 6).", "cite_spans": [ { "start": 690, "end": 710, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 39, "end": 45, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Overview of ReNoun", "sec_num": "3" }, { "text": "Since we do not have facts, but only attributes, the first phase of ReNoun's pipeline is to extract a set of high-precision seed facts that are used to train more general extraction patterns. ReNoun extracts seed facts using a manually crafted set of extraction rules (see Table 4 ).
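For illustration, here is a minimal sketch of how such rules can be applied; the two regex patterns are simplified, hypothetical stand-ins for the hand-crafted rules of Table 4 (the production rules operate over chunked, coreference-annotated text rather than raw strings):

```python
import re

# Hypothetical stand-ins for two Table-4-style lexical rules.
PATTERNS = [
    # "Google's CEO, Larry Page, ..."       -> (Google, CEO, Larry Page)
    re.compile(r"(?P<S>[A-Z]\w+)'s (?P<A>[A-Za-z][\w ]*?), (?P<O>[A-Z]\w+(?: [A-Z]\w+)*)"),
    # "Larry Page, the CEO of Google, ..."  -> (Google, CEO, Larry Page)
    re.compile(r"(?P<O>[A-Z]\w+(?: [A-Z]\w+)*), the (?P<A>[A-Za-z][\w ]*?) of (?P<S>[A-Z]\w+)"),
]

def seed_candidates(sentence, ontology):
    """Yield (S, A, O) candidates whose attribute is in the ontology."""
    for pattern in PATTERNS:
        for m in pattern.finditer(sentence):
            if m.group("A") in ontology:
                yield (m.group("S"), m.group("A"), m.group("O"))

print(list(seed_candidates("Google's CEO, Larry Page, spoke.", {"CEO"})))
# -> [('Google', 'CEO', 'Larry Page')]
```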
However, the extraction rules and the application of these rules are tailored to our task of extracting noun-based attributes.", "cite_spans": [], "ref_spans": [ { "start": 273, "end": 280, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Seed fact extraction", "sec_num": "4" }, { "text": "Specifically, when we apply an extraction rule to generate a triple (S,A,O), we require that (1) A is an attribute in our ontology, and (2) the value of A and the object O corefer to the same real-world entity. For example, in the sentence of Figure 2 , CEO is in our ontology and we can use a coreference resolver to infer that CEO and Larry Page refer to the same entity. The use of coreference follows from the simple observation that objects will often be referred to by nominals, many of which are our attributes of interest. Since the sentence matches our sixth extraction rule, ReNoun extracts the triple (Google, CEO, Larry Page). We rely on a coreference resolver in the spirit of Haghighi and Klein (2009) . The resolver clusters the mentions of entities in a document so that the references in each cluster are assumed to refer to the same real-world entity. The resolver also chooses for each cluster a representative phrase, which is a proper noun or proper adjective (e.g., Canadian). Other phrases in the same cluster can be other proper nouns or adjectives, common nouns like CEO, or pronouns like he in the example. Each cluster is possibly linked by an entity resolver to a Freebase entity using a unique Freebase ID. Figure 2(b) shows the coreference clusters from the sample document, with representative phrases in bold, along with the Freebase ID of each cluster. Note that in our example the phrase executive chairman, which is also in our ontology of attributes, is not part of any coreference cluster. Therefore, the fact centered around this attribute in the example will not be part of the seed extractions, but could be extracted in the next phase. The resulting facts use Freebase IDs for the subject and object (for readability, we will use entity names in the rest of this work). In summary, our seed extraction proceeds in two steps. First, we find sentences with candidate attribute-object pairs that corefer and in which the attribute is in our ontology. Second, we match these sentences against our hand-crafted rules to generate the extractions. In Section 7 we show that the precision of our seed facts is 65% for fat head attributes and 80% for long tail ones. Figure 1 : Extraction Pipeline: we begin with a set of high-precision extractors and use distant supervision to train other extractors. We then apply the new extractors and score the resulting triples based on the frequency and coherence of the patterns that produce them. Table 4 : High-precision patterns used for seed fact extraction, along with an example of each. Here, the object (O) and the attribute (A) corefer and the subject (S) is in close proximity. In all examples, the resulting fact is (Google, CEO, Larry Page). Patterns are not attribute specific.", "cite_spans": [ { "start": 1214, "end": 1239, "text": "Haghighi and Klein (2009)", "ref_id": "BIBREF8" }, { "start": 1754, "end": 1765, "text": "Figure 2(b)", "ref_id": null } ], "ref_spans": [ { "start": 475, "end": 482, "text": "Table 4", "ref_id": null }, { "start": 767, "end": 775, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Seed fact extraction", "sec_num": "4" }, { "text": "In this section we describe how ReNoun uses the seed facts to learn a much broader set of extraction patterns.
ReNoun uses the learned patterns to extract many more candidate facts that are then assigned scores reflecting their quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern and candidate fact generation", "sec_num": "5" }, { "text": "We use the seed facts to learn patterns over dependency parses of text sentences. A dependency parse of a sentence is a directed graph whose vertices correspond to tokens labeled with the word and the POS tag, and whose edges are syntactic relations between the corresponding tokens (de Marneffe et al., 2006) . A dependency pattern is a subgraph of a dependency parse where some words have been replaced by variables, but the POS tags have been retained (called delexicalization). A dependency pattern enables us to extract from sentences with the same dependency parse as the sentence that generated the pattern, modulo the delexicalized words. We note that one big benefit of using dependency patterns is that they generalize well, as they ignore extra tokens in the sentence that do not belong to the dependency subgraph of interest.", "cite_spans": [ { "start": 285, "end": 307, "text": "(de Marneffe et al., 2006)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Dependency patterns", "sec_num": "5.1" }, { "text": "The procedure for dependency pattern generation is shown in Algorithm 1, and Figure 3 shows an example of its application. The input to the algorithm is the ontology of attributes, the seed facts (Section 4), and our processed text corpus (Section 2). The procedure iterates over the sentences in the corpus, looking for matches between a sentence s and a seed fact f . A sentence s matches f if s contains (i) the attribute in f , and (ii) phrases in coreference clusters that map to the same Freebase IDs as the subject and object of f . When a match is found, we generate a pattern as follows.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Generating dependency patterns", "sec_num": "5.2" }, { "text": "We take the minimal subgraph of the dependency parse of s containing the head tokens of the subject, attribute and object (Figure 3 (b) ). We delexicalize the three vertices corresponding to the head tokens of the subject, attribute and object by variables indicating their roles. The POS tag associated with the attribute token is always a noun. The subject and object are additionally allowed to have pronouns and adjectives associated with their tokens. All POS tags corresponding to nouns are lifted to N, in order to match the various types of nouns. We denote the resulting dependency pattern by P and add it to our output, associated with the matched attribute. We note that in principle, the vertex corresponding to the head of the attribute does not need to be delexicalized. However, we do this to improve the efficiency of pattern-matching, since we will often have patterns for different attributes differing only at the attribute vertex.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 142, "text": "(Figure 3 (b)", "ref_id": null } ], "eq_spans": [], "section": "Generating dependency patterns", "sec_num": "5.2" }, { "text": "It is important to note that because of the manner in which the roles of subject and object were assigned during seed fact extraction, the patterns ReNoun generates clearly show which argument will take the role of the subject, and which will take the role of the object.
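A minimal sketch of the delexicalization step, under simplifying assumptions (the minimal connecting subgraph is given, and the token representation, edge labels, and example parse are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    head: int   # token index of the governor
    dep: int    # token index of the dependent
    label: str  # syntactic relation, e.g. "nsubj"

def delexicalize(tokens, edges, subj, attr, obj):
    """Replace the S/A/O head tokens by role variables and lift all
    noun POS tags to N; other tokens keep their word and POS tag."""
    roles = {subj: "{S}", attr: "{A/N}", obj: "{O}"}
    def vertex(i):
        if i in roles:
            return roles[i]
        word, pos = tokens[i]
        pos = "N" if pos.startswith("NN") else pos  # lift NN/NNS/NNP/... to N
        return f"{word}/{pos}"
    # The pattern is the set of delexicalized edges of the subgraph.
    return frozenset((vertex(e.head), e.label, vertex(e.dep)) for e in edges)

# Toy parse of "Larry Page is the CEO of Google" (labels vary by parser).
tokens = [("Page", "NNP"), ("is", "VBZ"), ("CEO", "NN"), ("Google", "NNP")]
edges = [Edge(head=2, dep=0, label="nsubj"), Edge(head=2, dep=3, label="prep_of")]
print(delexicalize(tokens, edges, subj=3, attr=2, obj=0))
# frozenset({('{A/N}', 'nsubj', '{O}'), ('{A/N}', 'prep_of', '{S}')})  (order may vary)
```

Because the roles are baked into the pattern's variables, every later match of the pattern assigns the subject and the object consistently.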
This is in contrast to previous work such as Ollie (Mausam et al., 2012) , where the assignment depends on the order in which the arguments are expressed in the sentence from which the fact is being extracted. For example, from the sentence \"Opel was described as GM's most successful subsidiary.\" and the seed fact (GM, subsidiary, Opel), the pattern that ReNoun generates will consistently extract facts like (BMW, subsidiary, Rolls-Royce), and not the incorrect inverse, regardless of the relative ordering of the two entities in the sentence.", "cite_spans": [ { "start": 323, "end": 344, "text": "(Mausam et al., 2012)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Generating dependency patterns", "sec_num": "5.2" }, { "text": "At this point we have dependency patterns capable of generating more extractions for their seed fact attributes. For efficient matching, we use the output of Algorithm 1 to generate a map from dependency patterns to their attributes, with entries like that shown in Figure 4(a) . This way, a pattern match can be propagated to all its mapped attributes in one shot, as we explain in Section 5.3. Finally, we discard patterns that do not pass a support threshold, where support is the number of distinct seed facts from which a pattern could be generated. Figure 4 : A dependency pattern and its use in extraction: (a) the pattern in our running example and the set of attributes to which it applies; (b) and (c) sentences matching the pattern and the resulting extractions. (Panel (c) example: \"A creative director, like will.i.am of 3D Systems, may also be referred to as chief creative officer.\" \u2193 (3D Systems, creative director, will.i.am).)", "cite_spans": [], "ref_spans": [ { "start": 265, "end": 276, "text": "Figure 4(a)", "ref_id": null }, { "start": 707, "end": 715, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Generating dependency patterns", "sec_num": "5.2" }, { "text": "Given the learned patterns, we can now generate new extractions. Each match of a pattern against the corpus will indicate the heads of the potential subject, attribute and object. The noun phrase headed by the token matching the {A/N} vertex is checked against the set of attributes to which the pattern is mapped. If the noun phrase is found among these attributes, then a triple (S, A, O) is constructed from the attribute and the Freebase entities to which the tokens corresponding to the S and O nodes in the pattern are resolved. This triple is then emitted as an extraction along with the pattern that generated it. Figure 4(b) and (c) show two sentences that match the dependency pattern in our running example and the resulting extractions.", "cite_spans": [], "ref_spans": [ { "start": 381, "end": 390, "text": "(S, A, O)", "ref_id": null }, { "start": 622, "end": 633, "text": "Figure 4(b)", "ref_id": null } ], "eq_spans": [], "section": "Applying the dependency patterns", "sec_num": "5.3" }, { "text": "Finally, we aggregate our extractions by their generated facts. For each fact f , we save the distinct dependency patterns that yielded f and the total number of times it was found in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applying the dependency patterns", "sec_num": "5.3" }, { "text": "In this section we describe how we score the candidate facts extracted by applying the dependency patterns in Section 5.
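As a reference point for the scoring discussion, here is a minimal sketch (with illustrative names) of the aggregation from Section 5.3 that this stage consumes: each fact is mapped to the distinct patterns that produced it and to its total extraction count.

```python
from collections import defaultdict

def aggregate(extractions):
    """extractions: iterable of (fact, pattern_id, provenance) tuples as
    emitted in Section 5.3; returns fact -> (distinct patterns, total count)."""
    patterns = defaultdict(set)
    counts = defaultdict(int)
    for fact, pattern_id, _provenance in extractions:
        patterns[fact].add(pattern_id)
        counts[fact] += 1
    return {fact: (patterns[fact], counts[fact]) for fact in counts}

extractions = [
    (("Google", "CEO", "Larry Page"), "p1", "doc17"),
    (("Google", "CEO", "Larry Page"), "p2", "doc42"),
    (("GM", "subsidiary", "Opel"), "p1", "doc99"),
]
print(aggregate(extractions)[("Google", "CEO", "Larry Page")])
# -> ({'p1', 'p2'}, 2)  (set order may vary)
```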
Recall that a fact may be obtained from multiple extractions, and assigning scores to each fact (rather than to each extraction) enables us to consider all extractions of a fact in aggregate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring extracted facts", "sec_num": "6" }, { "text": "We score facts based on the patterns which extract them. Our scheme balances two characteristics of a pattern: its frequency and coherence. Pattern frequency is defined as the number of extractions produced by the pattern. Our first observation is that patterns with a large number of extractions are always able to produce correct extractions (in addition to incorrect ones). We also observe that generic patterns produce more erroneous facts compared to more targeted ones. To capture this, we introduce pattern coherence, which reflects how targeted a pattern is based on the attributes to which it applies. For example, we observed that if an extraction pattern yields facts for the coherent set of attributes ex-wife, boyfriend, and ex-partner, then its output is consistently good. On the other hand, a pattern that yields facts for a less coherent set of attributes ex-wife, general manager, and subsidiary is more likely to produce noisy extractions. Generic, more incoherent patterns are more sensitive to noise in the linguistic annotation of a document. Figure 5 shows an example pattern for each case, along with its frequency and coherence.", "cite_spans": [], "ref_spans": [ { "start": 1067, "end": 1075, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Scoring extracted facts", "sec_num": "6" }, { "text": "We capture the coherence of attributes using word-vector representations of attributes that are created over large text corpora (Mikolov et al., 2013) . The word-vector representation v(w) for a word w (multi-word attributes can be preprocessed into single words) is computed in two steps. First, the algorithm counts the number of occurrences of each word w 1 that occurs within the text window centered at w (typically a window of size 10), producing an intermediate vector that potentially has a non-zero value for every word in the corpus. The intermediate vector is then mapped to a much smaller dimension (typically less than 1000) to produce v(w). As shown in (Mikolov et al., 2013) , two words w 1 and w 2 for which the cosine distance between v(w 1 ) and v(w 2 ) is small tend to be semantically similar.
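For illustration, a minimal sketch of this similarity computation (the three-dimensional vectors are hypothetical toy values standing in for the learned word vectors):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity; a small cosine distance (1 - similarity)
    means the two attributes tend to be semantically similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-dimensional stand-ins for the (less than 1000-dimensional) vectors.
v = {"ex-wife":    [0.9, 0.1, 0.0],
     "boyfriend":  [0.8, 0.2, 0.1],
     "subsidiary": [0.0, 0.1, 0.9]}

print(round(cosine_similarity(v["ex-wife"], v["boyfriend"]), 2))   # 0.98 (similar)
print(round(cosine_similarity(v["ex-wife"], v["subsidiary"]), 2))  # 0.01 (dissimilar)
```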
A pattern is therefore coherent if it applies to attributes deemed similar according to their word vectors.", "cite_spans": [ { "start": 123, "end": 145, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF12" }, { "start": 659, "end": 681, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Scoring extracted facts", "sec_num": "6" }, { "text": "Given an extraction pattern P that extracts facts for a set of attributes A, we define the coherence of P to be the average pairwise coherence of all attributes in A, where the pairwise coherence of two attributes a 1 and a 2 is the cosine distance between v(a 1 ) and v(a 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring extracted facts", "sec_num": "6" }, { "text": "Finally, we compute the score of a fact f by summing the product of frequency and coherence over the patterns of f , as shown in Equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring extracted facts", "sec_num": "6" }, { "text": "S(f) = \u2211_{P \u2208 Pat(f)} frequency(P) \u00d7 coherence(P) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring extracted facts", "sec_num": "6" }, { "text": "We describe a set of experiments that validate the contributions of ReNoun. In Sections 7.2 and 7.3 we validate our noun-centric approach: we show that extractions based on verb phrases cannot yield the results of ReNoun and that NomBank, the resource used by the state of the art in semantic role labeling for nouns, will not suffice either. In Sections 7.4-7.6 we evaluate the different components of ReNoun and its overall quality, and in Section 7.7 we discuss the cases in which ReNoun was unable to extract any facts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "7" }, { "text": "We used the fat head (FH) and long tail (LT) attributes and the annotated news corpus described in Section 2. When evaluating facts, we used majority voting among three human judges, unless otherwise noted. The judges were instructed to consider facts with inverted subjects and objects as incorrect. For example, while (GM, subsidiary, Opel) is correct, its inverse is incorrect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setting", "sec_num": "7.1" }, { "text": "State-of-the-art open information extraction systems like Ollie (Mausam et al., 2012) assume that a relation worth extracting is expressed somewhere in verb form. We show this is not the case and justify our noun-centric approach. In this experiment we compare ReNoun to a custom implementation of Ollie that uses the same corpus as ReNoun and supports multi-word attributes.
While Ollie does try to find relations expressed as nouns, its seed facts are relations expressed as verbs.", "cite_spans": [ { "start": 60, "end": 81, "text": "(Mausam et al., 2012)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Verb phrases are not enough", "sec_num": "7.2" }, { "text": "We randomly sampled each of FH and LT for 100 attributes for which ReNoun extracts facts and asked a judge to find potentially equivalent Ollie relations. Note that we did not require the judge to find exactly the same triple (thereby biasing the experiment towards finding more attribute matches). Furthermore, the judge was instructed that a verb phrase like advised by should be considered a match to the ReNoun attribute advisor. However, looking at the data, most facts involving the relation advised are not synonymous with the advisor relation as we think of it (e.g., \"Obama advised Merkel on saving the Euro\"). This observation suggests that there is an even more subtle difference between the meaning of verb expressions and noun-based expressions in text. This experiment, therefore, gives an upper bound on the number of ReNoun attributes that Ollie can cover.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb phrases are not enough", "sec_num": "7.2" }, { "text": "Table 5 : ReNoun attributes with and without a corresponding Ollie relation. Without: flagship company, railway minister, legal affairs correspondent. With: spokesperson (be spokesperson of), president-elect (be president elect of), co-founder (be co-founder of).", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 172, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Verb phrases are not enough", "sec_num": null }, { "text": "For FH, not surprisingly, we could find matches for 99 of the 100 attributes. However, for LT, only 31 of the 100 attributes could be found, even under our permissive setting. Most attributes that could not be matched were multi-word noun phrases. While in principle one could use the Ollie patterns that apply to the head of a multi-word attribute, we found that we generate more interesting patterns for specific multi-word attributes. Table 5 shows examples of attributes with and without verb mappings in Ollie.", "cite_spans": [], "ref_spans": [ { "start": 439, "end": 446, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Verb phrases are not enough", "sec_num": null }, { "text": "We also compare in the other direction and estimate the portion of Ollie relations centered around nouns for which ReNoun fails to extract facts. For this experiment, we randomly sampled 100 Ollie relations that contained common nouns whose objects are concrete values, and looked for equivalent attributes in ReNoun extractions. ReNoun extracts facts for 48 of the Ollie relations. Among the 52 relations with no facts, 25 are not in Biperpedia (which means that ReNoun cannot extract facts for them no matter what). For the other 27 relations, ReNoun did not extract facts for the following reasons. First, some relations expressed actions, which cannot be expressed using nouns only, and are not considered attributes describing the subject entity (e.g., citation of in \"Obama's citation of the Bible\"). Second, some relations have the object (a common noun) embedded within them (e.g., have microphone in) and do not have corresponding attributes that can be expressed using nouns only.
The remaining relations either have meaningless extractions or use common noun phrases as arguments. ReNoun only uses proper nouns (i.e., entities) for arguments because facts with common noun arguments are rarely interesting without more context. We note that the majority of the 25 Ollie relations without corresponding Biperpedia attributes also fall into one of the three categories above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb phrases are not enough", "sec_num": null }, { "text": "In principle, the task of extracting noun-mediated relations can be compared to that of semantic role labeling (SRL) for nouns. The task in SRL is to identify a relation, expressed either through a verb or a noun, map it to a semantic frame, and map the arguments of the relation to the various roles within the frame. State-of-the-art SRL systems, such as that of Johansson and Nugues (2008) , are trained on NomBank (Meyers et al., 2004) for handling nominal relations, which also means that they are limited by the knowledge it contains. We asked a judge to manually search NomBank for 100 attributes randomly drawn from each of FH and LT for which ReNoun extracts facts. For multi-word attributes, we declared a match if their head word was found. We were able to find 80 matches for the FH attributes and 42 for LT ones. For example, we could not find entries for the noun attributes coach or linebacker (of a football team). This result is easily explained by the fact that NomBank has only 4700 attributes.", "cite_spans": [ { "start": 365, "end": 392, "text": "Johansson and Nugues (2008)", "ref_id": "BIBREF9" }, { "start": 418, "end": 439, "text": "(Meyers et al., 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison against NomBank", "sec_num": "7.3" }, { "text": "In addition, for some nouns, the associated frames do not allow for the extraction of triples. For example, all frames for the noun member specify one argument only, so in the sentence \"John became a member of ACM\", the output relation is (ACM, member) instead of the desired triple (ACM, member, John).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison against NomBank", "sec_num": "7.3" }, { "text": "As we did with Ollie, we also looked at nouns from NomBank for which ReNoun does not extract facts. Out of a random sample of 100 NomBank nouns, ReNoun did not extract facts for 29 nouns (four of which are not in Biperpedia). The majority of the missed nouns cannot be used by ReNoun because they either take single arguments (instead of two) or take prepositional phrases or common nouns (instead of proper nouns corresponding to entities) as one of their arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison against NomBank", "sec_num": "7.3" }, { "text": "In Section 4, we described our method for extracting seed facts for our attributes. Applying the method to our corpus resulted in 139M extractions, which boiled down to about 680K unique facts covering 11319 attributes. We sampled 100 random facts from each of FH and LT, and obtained 65% precision for FH seed facts and 80% precision for LT ones. This leads us to two observations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quality of seed facts", "sec_num": "7.4" }, { "text": "First, the precision of seed facts for LT attributes is high, which makes them suitable for use as a building block in a distant supervision scheme to learn dependency parse patterns.
We are primarily interested in LT attributes, which, as we demonstrated above, earlier approaches cannot handle satisfactorily.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quality of seed facts", "sec_num": "7.4" }, { "text": "Second, LT attributes have higher precision than FH attributes. One reason is that multi-word attributes (which tend to be in LT) are sometimes incorrectly chunked, and only their head words are recognized as attributes (which are more likely to be in FH). For example, in the phrase \"America's German coach, Klinsmann\", the correct attribute is German coach (LT), but bad chunking may produce the attribute coach (FH) with Germany as the subject. Another reason is that FH attributes are likely to occur in speculative contexts where the presence of the attribute is not always an assertion of a fact. (While both FH and LT attributes can be subject to speculative contexts, we observe this more for FH than LT in our data.) For example, before a person is a railway minister of a country, there is little mention of her along with the attribute. However, before a person is elected president, there is more media about her candidacy. Speculative contexts, combined with incorrect linguistic analysis of sentences, can result in incorrect seed facts (e.g., from \"Republican favorite for US president, Mitt Romney, visited Ohio\", we extract the incorrect seed fact (US, president, Mitt Romney)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quality of seed facts", "sec_num": "7.4" }, { "text": "Using the seed facts, we ran our candidate generation algorithm (Section 5). In the first step of the algorithm we produced a total of about 2 million unique dependency patterns. A third of these patterns could extract values for exactly one attribute. Manual inspection of these long tail patterns showed that they were either noise or did not generalize. We kept patterns supported by at least 10 seed facts, yielding more than 30K patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate generation", "sec_num": "7.5" }, { "text": "We then applied the patterns to the corpus. The result was over 460M extractions, aggregated into about 40M unique facts. Of these, about 22M facts were for LT attributes, and 18M for FH. We now evaluate the quality of these facts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate generation", "sec_num": "7.5" }, { "text": "In Section 6, we presented a scheme for scoring facts using pattern frequency and coherence. To show its effectiveness we (i) compare it against other scoring schemes, and (ii) show the quality of the top-k facts produced using this scheme, for various k. To compute coherence, we generated attribute word vectors using the word2vec 2 tool trained on a dump of Wikipedia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring extracted facts", "sec_num": "7.6" }, { "text": "First, we compare the quality of our scoring scheme (FREQ_COH) with three other schemes, as shown in Table 6 . The scheme FREQ is identical to FREQ_COH except that all coherences are set to 1. PATTERN counts the number of distinct patterns that extract the fact, while PATTERN_COH sums the pattern coherences. We generated a random sample of 252 FH and LT nouns with no entity disambiguation errors by the underlying natural language processing pipeline. The justification is that none of the schemes we consider here capture such errors.
Accounting for such errors requires elaborate signals from the entity linking system, which we leave for future work. For each scoring scheme, we computed Spearman's rank correlation coefficient \u03c1 between the scores and manual judgments (by three judges). A larger \u03c1 indicates more correlation, and the computed \u03c1 was statistically significant (p-value<0.01) for all schemes. We found that adding coherence helps when two facts have similar frequencies, but this effect is tempered when considering a large number of facts.", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 107, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Scoring extracted facts", "sec_num": "7.6" }, { "text": "Second, we evaluate the scoring of facts generated by ReNoun by the precision of top-k results for several values of k. In this evaluation, facts with disambiguation errors are counted as wrong. The particular context in which ReNoun is applied will determine where in the ordering to set the threshold of facts to consider. We compute precision based on a sample of 50 randomly chosen facts for each k. Table 7 shows the precision results, along with the number of distinct attributes (#Attr) for which values are extracted at each k.", "cite_spans": [], "ref_spans": [ { "start": 404, "end": 411, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Scoring extracted facts", "sec_num": "7.6" }, { "text": "As we can see, ReNoun is capable of generating a large number of high-quality facts (\u226570% precise at 1M), which our scoring method manages to successfully surface to the top. The major sources of error were (i) incorrect dependency parsing, mainly due to errors in boilerplate text removal from news documents, (ii) incorrect coreference resolution of pronouns, (iii) incorrect entity resolution against Freebase, and (iv) cases where a triple is not sufficient (e.g., ambassador, where both arguments are countries).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring extracted facts", "sec_num": "7.6" }, { "text": "We analyze why ReNoun does not extract facts for certain attributes. For FH, we investigate all the 77 attributes for which ReNoun is missing facts. For LT, there are about 50K attributes without corresponding facts, and we use a random sample of 100 of those attributes. Table 8 shows the categorization of the missed attributes. The first three categories are cases that are currently outside the scope of ReNoun: vague attributes whose values are long narratives, numeric attributes, and typed attributes (e.g., email) whose values are not modeled as Freebase entities. The next two categories are due to limitations of the ontology, e.g., plural forms of attributes are not always synonymized with singular forms and some attributes are bad (i.e., false positives). Finally, the \"Value expected\" category contains the attributes for which ReNoun should have extracted values. One reason for missing values is that the corpus itself does not contain values for all attributes. Another reason is that some attributes are not verbalized in text. For example, attributes like nationality are usually not explicitly stated in text.", "cite_spans": [], "ref_spans": [ { "start": 272, "end": 279, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Missed extractions", "sec_num": "7.7" }, { "text": "Open information extraction (OIE) was introduced by Banko et al. (2007) .
For a pair of noun phrases, their system, TEXTRUNNER, looks for the attribute (or more generally the relation) in the text between them and uses a classifier to judge the trustworthiness of an extraction. WOE-parse (Wu and Weld, 2010) extends this by using dependency parsing to connect the subject and object. Both systems assume that the attribute is between its two arguments, an assumption that ReNoun drops since it is not suitable for nominal attributes.", "cite_spans": [ { "start": 52, "end": 71, "text": "Banko et al. (2007)", "ref_id": "BIBREF1" }, { "start": 289, "end": 308, "text": "(Wu and Weld, 2010)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "8" }, { "text": "Closest to our work are ReVerb (Fader et al., 2011) and Ollie (Mausam et al., 2012) . ReVerb uses POS tag patterns to locate verb relations and then looks at noun phrases to the left and right for arguments. Ollie uses the ReVerb extractions as its seeds to train patterns that can further extract triples. While Ollie's patterns themselves are not limited to verb relations (they also support noun relations), the ReVerb seeds are limited to verbs, which makes Ollie's coverage of noun relations also limited. In comparison, ReNoun takes a noun-centric approach and extracts many facts that do not exist in Ollie. ClausIE (Del Corro and Gemulla, 2013) is an OIE framework that exploits knowledge about the grammar of the English language to find clauses in a sentence using its dependency parse. The clauses are subsequently used to generate extractions at multiple granularities, possibly with more arguments than triples. While ClausIE comes with a predefined set of rules on how to extract facts from a dependency parse, ReNoun learns such rules from its seed facts.", "cite_spans": [ { "start": 31, "end": 51, "text": "(Fader et al., 2011)", "ref_id": "BIBREF6" }, { "start": 62, "end": 83, "text": "(Mausam et al., 2012)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "8" }, { "text": "Finally, Nakashole et al. (2011) and Mintz et al. (2009) find additional facts for attributes that already have facts in a knowledge base. In contrast, ReNoun is an OIE framework whose goal is to find facts for attributes without existing facts.", "cite_spans": [ { "start": 9, "end": 32, "text": "Nakashole et al. (2011)", "ref_id": "BIBREF16" }, { "start": 37, "end": 56, "text": "Mintz et al. (2009)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "8" }, { "text": "We described ReNoun, an open information extraction system for nominal attributes that focuses on the long tail. The key to our approach is to start from a large ontology of nominal attributes and apply noun-specific manual patterns on a large preprocessed corpus (via standard NLP components) to extract precise seed facts. We then learn a set of dependency patterns, which are used to generate a much larger set of candidate facts. We proposed a scoring function for filtering candidate facts based on pattern frequency and coherence. We demonstrated that the majority of long tail attributes in ReNoun do not have corresponding verbs in Ollie.
Finally, our experiments show that our scoring function is effective in filtering candidate facts (top-1M facts are \u226570% precise).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "In the future, we plan to extend ReNoun to extract triples whose components are not limited to Freebase IDs. As an example, extending ReNoun to handle numerical or typed attributes would involve extending our extraction pattern learning to accommodate units (e.g., kilograms) and other special data formats (e.g., addresses).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "The occurrences were weighted by the number of semantic classes they occur with in the ontology because many classes overlap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Luna Dong, Anjuli Kannan, Tara McIntosh, and Fei Wu for many discussions about the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "DBpedia: A Nucleus for a Web of Open Data", "authors": [ { "first": "S\u00f6ren", "middle": [], "last": "Auer", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Bizer", "suffix": "" }, { "first": "Georgi", "middle": [], "last": "Kobilarov", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Lehmann", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Cyganiak", "suffix": "" }, { "first": "Zachary", "middle": [ "G" ], "last": "Ives", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the International Semantic Web Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. DBpedia: A Nucleus for a Web of Open Data. In Proceedings of the International Semantic Web Conference.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Open Information Extraction from the Web", "authors": [ { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Cafarella", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Broadhead", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open Information Extraction from the Web.
In Proceedings of the International Joint Conference on Artificial Intelligence.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Freebase: a Collaboratively Created Graph Database for Structuring Human Knowledge", "authors": [ { "first": "Kurt", "middle": [ "D" ], "last": "Bollacker", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Praveen", "middle": [], "last": "Paritosh", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Sturge", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the International Conference on Management of Data", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kurt D. Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a Collaboratively Created Graph Database for Structuring Human Knowledge. In Proceedings of the International Conference on Management of Data.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "WebTables: Exploring the Power of Tables on the Web", "authors": [ { "first": "Michael", "middle": [ "J" ], "last": "Cafarella", "suffix": "" }, { "first": "Alon", "middle": [ "Y" ], "last": "Halevy", "suffix": "" }, { "first": "Daisy", "middle": [ "Zhe" ], "last": "Wang", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the VLDB Endowment", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J. Cafarella, Alon Y. Halevy, Daisy Zhe Wang, Eugene Wu, and Yang Zhang. 2008. WebTables: Exploring the Power of Tables on the Web. In Proceedings of the VLDB Endowment.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "ClausIE: Clause-based Open Information Extraction", "authors": [ { "first": "Luciano", "middle": [], "last": "Del Corro", "suffix": "" }, { "first": "Rainer", "middle": [], "last": "Gemulla", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International World Wide Web Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luciano Del Corro and Rainer Gemulla. 2013. ClausIE: Clause-based Open Information Extraction. In Proceedings of the International World Wide Web Conference.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Generating Typed Dependency Parses from Phrase Structure Parses", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses.
In Proceedings of Language Resources and Evaluation.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Identifying Relations for Open Information Extraction", "authors": [ { "first": "Anthony", "middle": [], "last": "Fader", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2011, "venue": "Proceedings of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying Relations for Open Information Extraction. In Proceedings of Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Biperpedia: An Ontology for Search Applications", "authors": [ { "first": "Rahul", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Halevy", "suffix": "" }, { "first": "Xuezhi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Whang", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the VLDB Endowment", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rahul Gupta, Alon Halevy, Xuezhi Wang, Steven Whang, and Fei Wu. 2014. Biperpedia: An Ontology for Search Applications. In Proceedings of the VLDB Endowment.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Simple Coreference Resolution with Rich Syntactic and Semantic Features", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Dan Klein. 2009. Simple Coreference Resolution with Rich Syntactic and Semantic Features. In Proceedings of Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Effect of Syntactic Representation on Semantic Role Labeling", "authors": [ { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Nugues", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Johansson and Pierre Nugues. 2008. The Effect of Syntactic Representation on Semantic Role Labeling. In Proceedings of the International Conference on Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Attribute Extraction and Scoring: A Probabilistic Approach", "authors": [ { "first": "Taesung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Zhongyuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Haixun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Seung-Won", "middle": [], "last": "Hwang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International Conference on Data Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taesung Lee, Zhongyuan Wang, Haixun Wang, and Seung-won Hwang. 2013. Attribute Extraction and Scoring: A Probabilistic Approach.
In Proceedings of the International Conference on Data Engineering.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Open Language Learning for Information Extraction", "authors": [ { "first": "", "middle": [], "last": "Mausam", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Bart", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2012, "venue": "Proceedings of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open Language Learning for Information Extraction. In Proceedings of Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Efficient Estimation of Word Representations in Vector Space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "arXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Distant Supervision for Relation Extraction Without Labeled Data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" }, { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant Supervision for Relation Extraction Without Labeled Data. In Proceedings of the Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Annotating Noun Argument Structure for NomBank", "authors": [ { "first": "Adam", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "Ruth", "middle": [], "last": "Reeves", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Macleod", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Szekely", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Zielinska", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Young", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. Annotating Noun Argument Structure for NomBank.
In Proceedings of Language Resources and Evaluation.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Generating Phrasal and Sentential Paraphrases: A Survey of Data-Driven Methods", "authors": [ { "first": "Nitin", "middle": [], "last": "Madnani", "suffix": "" }, { "first": "Bonnie", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 2010, "venue": "Computational Linguistics", "volume": "36", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitin Madnani and Bonnie J. Dorr. 2010. Generating Phrasal and Sentential Paraphrases: A Survey of Data-Driven Methods. In Computational Linguistics 36(3).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Scalable Knowledge Harvesting with High Precision and High Recall", "authors": [ { "first": "Ndapandula", "middle": [], "last": "Nakashole", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Theobald", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2011, "venue": "Proceedings of Web Search and Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ndapandula Nakashole, Martin Theobald, and Gerhard Weikum. 2011. Scalable Knowledge Harvesting with High Precision and High Recall. In Proceedings of Web Search and Data Mining.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Acquisition of Open-domain Classes via Intersective Semantics", "authors": [ { "first": "Marius", "middle": [], "last": "Pasca", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the International World Wide Web Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marius Pasca. 2014. Acquisition of Open-domain Classes via Intersective Semantics. In Proceedings of the International World Wide Web Conference.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "What You Seek Is What You Get: Extraction of Class Attributes from Query Logs", "authors": [ { "first": "Marius", "middle": [], "last": "Pasca", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marius Pasca and Benjamin Van Durme. 2007. What You Seek Is What You Get: Extraction of Class Attributes from Query Logs. In Proceedings of the International Joint Conference on Artificial Intelligence.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning Joint Query Interpretation and Response Ranking", "authors": [ { "first": "Uma", "middle": [], "last": "Sawant", "suffix": "" }, { "first": "Soumen", "middle": [], "last": "Chakrabarti", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International World Wide Web Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uma Sawant and Soumen Chakrabarti. 2013. Learning Joint Query Interpretation and Response Ranking.
In Proceedings of the International World Wide Web Conference.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Introducing the Knowledge Graph: things, not strings", "authors": [ { "first": "Amit", "middle": [], "last": "Singhal", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Singhal. 2012. Introducing the Knowledge Graph: things, not strings. http://googleblog.blogspot.com/2012/05/introducing-knowledge-graph-things-not.html", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Open Information Extraction Using Wikipedia", "authors": [ { "first": "Fei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Wu and Daniel S. Weld. 2010. Open Information Extraction Using Wikipedia. In Proceedings of the Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Natural Language Questions for the Web of Data", "authors": [ { "first": "Mohamed", "middle": [], "last": "Yahya", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Berberich", "suffix": "" }, { "first": "Shady", "middle": [], "last": "Elbassuoni", "suffix": "" }, { "first": "Maya", "middle": [], "last": "Ramanath", "suffix": "" }, { "first": "Volker", "middle": [], "last": "Tresp", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2012, "venue": "Proceedings of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, Maya Ramanath, Volker Tresp, and Gerhard Weikum. 2012. Natural Language Questions for the Web of Data. In Proceedings of Empirical Methods in Natural Language Processing.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "1. the A of S, O - the CEO of Google, Larry Page 2. the A of S is O - the CEO of Google is Larry Page 3. O, S A - Larry Page, Google CEO 4. O, S's A - Larry Page, Google's CEO 5. O, [the] A of S - Larry Page, [the] CEO of Google 6. S A O - Google CEO Larry Page 7. S A, O - Google CEO, Larry Page 8. S's A, O - Google's CEO, Larry Page" }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "\"[Google's]1 [CEO]2 [Larry Page]2 started his term in 2011, when [he]2 succeeded [Eric Schmidt]3. [Schmidt]3 has since assumed the role of executive chairman of [the company]1.\" Coreference clusters: (a) a document annotated with coreference clusters; (b) a table showing each cluster with the representative phrases in bold and the Freebase ID to which each cluster maps." },
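To make the seed stage concrete, the following is a minimal sketch, not ReNoun's actual implementation, of how two of the eight lexical patterns above could be applied with regular expressions. The tiny ATTRIBUTES set and the extract_seed_facts helper are illustrative assumptions; the real system matches against the full Biperpedia ontology and entity annotations rather than raw strings.

```python
# Sketch: applying seed lexical patterns 1 ("the A of S, O") and 4 ("O, S's A").
import re

ATTRIBUTES = {"CEO", "executive chairman"}  # hypothetical tiny ontology

SEED_PATTERNS = [
    # Pattern 1: the A of S, O
    re.compile(r"the (?P<A>[\w ]+?) of (?P<S>[A-Z][\w ]+?), (?P<O>[A-Z][\w ]+)"),
    # Pattern 4: O, S's A
    re.compile(r"(?P<O>[A-Z][\w ]+?), (?P<S>[A-Z][\w ]+?)'s (?P<A>[\w ]+)"),
]

def extract_seed_facts(sentence):
    """Return (S, A, O) triples whose attribute A is in the ontology."""
    facts = []
    for regex in SEED_PATTERNS:
        for m in regex.finditer(sentence):
            a = m.group("A").strip()
            if a in ATTRIBUTES:  # only keep known nominal attributes
                facts.append((m.group("S").strip(), a, m.group("O").strip()))
    return facts

print(extract_seed_facts("We met the CEO of Google, Larry Page."))
# -> [('Google', 'CEO', 'Larry Page')]
```

Requiring the attribute to appear in the ontology is what keeps these deliberately loose surface patterns high-precision enough to serve as training seeds.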
"FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "Algorithm 1: Dependency pattern generation. Input: a set of attributes A, seed facts I, a corpus D. P := an empty set of dependency pattern-attribute pairs. foreach sentence s \u2208 D do: foreach triple t = (S, A, O) found in s do: if t \u2208 I then: G(s) := dependency parse of s; p := minimal subgraph of G(s) containing the head tokens of S, A, and O; p := Delexicalize(p, S, A, O); P := P \u222a { (p, A) }. return P. Example input: Attributes A = {executive chairman}; Seed fact I = {(Google, executive chairman, Eric Schmidt)}; Sentence s = \"An executive chairman, like Eric Schmidt of Google, wields influence over company operations.\"" }, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "Dependency pattern generation using seed facts, corresponding to Algorithm 1: (a) shows the input to the procedure (dependency parse partially shown); (b) the minimal subgraph p connecting S, A, and O; (c) the delexicalized pattern." }, "FIGREF4": { "num": null, "type_str": "figure", "uris": null, "text": "\"An executive chairman, like Steve Chase of AOL, is responsible for representing the company.\" \u2193 (AOL, executive chairman, Steve Chase) (b)" }, "FIGREF5": { "num": null, "type_str": "figure", "uris": null, "text": "attributes: {ex-wife, boyfriend, ex-partner}; frequency(P) = 574, coherence(P) = 0.429; Example: \"Putin has two children with his ex-wife, ...\" (a) attributes: {ex-wife, general manager, subsidiary, ...}; frequency(P) = 52349038, coherence(P) = 0.093; Example: \"Chelsea F.C. general manager Jos\u00e9 Mourinho...\" (b) Figure 5: (a) a coherent pattern extracting facts for semantically similar attributes and (b) an incoherent pattern." },
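A rough sketch of Algorithm 1 follows, under stated assumptions: spaCy supplies the dependency parse and networkx the minimal connecting subgraph. The delexicalized_pattern helper, the head-token heuristic, and the edge-set pattern encoding are my own simplifications, not the authors' code.

```python
# Sketch of Algorithm 1: find the minimal dependency subgraph connecting the
# head tokens of S, A, and O for a seed fact, then delexicalize it.
import itertools
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def delexicalized_pattern(sentence, S, A, O):
    doc = nlp(sentence)
    graph = nx.Graph()
    for tok in doc:
        graph.add_edge(tok.i, tok.head.i, dep=tok.dep_)  # undirected parse graph
    def head_index(phrase):
        # Head token of the phrase's first occurrence (a simplification).
        start = sentence.find(phrase)
        span = doc.char_span(start, start + len(phrase), alignment_mode="expand")
        return span.root.i
    anchors = {head_index(S): "S", head_index(A): "A", head_index(O): "O"}
    # Minimal connecting subgraph: union of pairwise shortest paths.
    nodes = set(anchors)
    for u, v in itertools.combinations(anchors, 2):
        nodes.update(nx.shortest_path(graph, u, v))
    # Delexicalize: S/A/O become placeholders; other nodes keep word and POS tag.
    def label(i):
        return anchors.get(i, doc[i].lower_ + "/" + doc[i].tag_)
    return {(label(u), d["dep"], label(v))
            for u, v, d in graph.subgraph(nodes).edges(data=True)}

print(delexicalized_pattern(
    "An executive chairman, like Eric Schmidt of Google, wields influence over company operations.",
    S="Google", A="executive chairman", O="Eric Schmidt"))
```

Parse permitting, the resulting edge set corresponds to the chain in Figure 3(c): A connects to O through "like", and O connects to S through "of".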
"TABREF0": { "content": "Attribute | Fact | Phrase | Verb form seen. legal affairs correspondent | (NPR, legal affairs correspondent, Nina Totenberg) | \"NPR welcomed Nina Totenberg as its new legal affairs correspondent.\" | (none). economist | (Princeton, economist, Paul Krugman) | \"Princeton economist Paul Krugman was awarded the Nobel prize in 2008.\" | (none). ex-boyfriend | (Trierweiler, ex-boyfriend, Hollande) | \"Trierweiler did not have any children with her ex-boyfriend Hollande.\" | ...", "html": null, "type_str": "table", "text": "legal affairs correspondent: fact (NPR, legal affairs correspondent, Nina Totenberg); phrase \"NPR welcomed Nina Totenberg as its new legal affairs correspondent.\" economist: fact (Princeton, economist, Paul Krugman); phrase \"Princeton economist Paul Krugman was awarded the Nobel prize in 2008.\" ex-boyfriend: fact (Trierweiler, ex-boyfriend, Hollande); phrase \"Trierweiler did not have any children with her ex-boyfriend Hollande.\"", "num": null },
"TABREF2": { "content": "{A/N} -prep-> like/IN -pobj-> {O/N} -prep-> of/IN -pobj-> {S/N}; attributes: {executive chairman, creative director, ...}", "html": null, "type_str": "table", "text": "{executive chairman, creative director, ...}", "num": null } } } }