{ "paper_id": "P15-1034", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:12:18.583403Z" }, "title": "Leveraging Linguistic Structure For Open Domain Information Extraction", "authors": [ { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "angeli@cs.stanford.edu" }, { "first": "Melvin", "middle": [ "Johnson" ], "last": "Premkumar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "melvinj@cs.stanford.edu" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "manning@cs.stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Relation triples produced by open domain information extraction (open IE) systems are useful for question answering, inference, and other IE tasks. Traditionally these are extracted using a large set of patterns; however, this approach is brittle on out-of-domain text and long-range dependencies, and gives no insight into the substructure of the arguments. We replace this large pattern set with a few patterns for canonically structured sentences, and shift the focus to a classifier which learns to extract self-contained clauses from longer sentences. We then run natural logic inference over these short clauses to determine the maximally specific arguments for each candidate triple. We show that our approach outperforms a state-of-the-art open IE system on the end-to-end TAC-KBP 2013 Slot Filling task.", "pdf_parse": { "paper_id": "P15-1034", "_pdf_hash": "", "abstract": [ { "text": "Relation triples produced by open domain information extraction (open IE) systems are useful for question answering, inference, and other IE tasks. Traditionally these are extracted using a large set of patterns; however, this approach is brittle on out-of-domain text and long-range dependencies, and gives no insight into the substructure of the arguments. We replace this large pattern set with a few patterns for canonically structured sentences, and shift the focus to a classifier which learns to extract self-contained clauses from longer sentences. We then run natural logic inference over these short clauses to determine the maximally specific arguments for each candidate triple. We show that our approach outperforms a state-of-the-art open IE system on the end-to-end TAC-KBP 2013 Slot Filling task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Open information extraction (open IE) has been shown to be useful in a number of NLP tasks, such as question answering (Fader et al., 2014) , relation extraction (Soderland et al., 2010) , and information retrieval (Etzioni, 2011) . Conventionally, open IE systems search a collection of patterns over either the surface form or dependency tree of a sentence. 
Although a small set of patterns covers most simple sentences (e.g., subject verb object constructions), relevant relations are often spread across clauses (see Figure 1 ) or presented in a non-canonical form.", "cite_spans": [ { "start": 119, "end": 139, "text": "(Fader et al., 2014)", "ref_id": "BIBREF13" }, { "start": 162, "end": 186, "text": "(Soderland et al., 2010)", "ref_id": "BIBREF33" }, { "start": 215, "end": 230, "text": "(Etzioni, 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 521, "end": 529, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Systems like Ollie (Mausam et al., 2012) approach this problem by using a bootstrapping method to create a large corpus of broad-coverage partially lexicalized patterns. Although this is effective at capturing many of these patterns, it Figure 1: Open IE extractions produced by the system, alongside extractions from the stateof-the-art Ollie system. Generating coherent clauses before applying patterns helps reduce false matches such as (Honolulu; be born in; Hawaii). Inference over the sub-structure of arguments, in turn, allows us to drop unnecessary information (e.g., of Austria), but only when it is warranted (e.g., keep fake in fake praise).", "cite_spans": [ { "start": 19, "end": 40, "text": "(Mausam et al., 2012)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "can lead to unintuitive behavior on out-of-domain text. For instance, while Obama is president is extracted correctly by Ollie as (Obama; is; president), replacing is with are in cats are felines produces no extractions. Furthermore, existing systems struggle at producing canonical argument forms -for example, in Figure 1 the argument Heinz Fischer of Austria is likely less useful for downstream applications than Heinz Fischer.", "cite_spans": [], "ref_spans": [ { "start": 315, "end": 323, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we shift the burden of extracting informative and broad coverage triples away from this large pattern set. Rather, we first pre-process the sentence in linguistically motivated ways to produce coherent clauses which are (1) logically entailed by the original sentence, and (2) easy to segment into open IE triples. Our approach consists of two stages: we first learn a classifier for splitting a sentence into shorter utterances (Section 3), and then appeal to natural logic (S\u00e1nchez Valencia, 1991) to maximally shorten these utterances while maintaining necessary context (Section 4.1). A small set of 14 hand-crafted patterns can then be used to segment an utterance into an open IE triple.", "cite_spans": [ { "start": 490, "end": 514, "text": "(S\u00e1nchez Valencia, 1991)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We treat the first stage as a greedy search problem: we traverse a dependency parse tree recursively, at each step predicting whether an edge should yield an independent clause. Importantly, in many cases na\u00efvely yielding a clause on a dependency edge produces an incomplete utterance (e.g., Born in Honolulu, Hawaii, from Figure 1 ). These are often attributable to control relationships, where either the subject or object of the governing clause controls the subject of the subordinate clause. 
We therefore allow the produced clause to sometimes inherit the subject or object of its governor. This allows us to capture a large variety of long range dependencies with a concise classifier.", "cite_spans": [], "ref_spans": [ { "start": 323, "end": 331, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From these independent clauses, we then extract shorter sentences, which will produce shorter arguments more likely to be useful for downstream applications. A natural framework for solving this problem is natural logic -a proof system built on the syntax of human language (see Section 4.1). We can then observe that Heinz Fischer of Austria visits China entails that Heinz Fischer visits China. On the other hand, we respect situations where it is incorrect to shorten an argument. For example, No house cats have rabies should not entail that cats have rabies, or even that house cats have rabies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "When careful attention to logical validity is necessary -such as textual entailment -this approach captures even more subtle phenomena. For example, whereas all rabbits eat fresh vegetables yields (rabbits; eat; vegetables), the apparently similar sentence all young rabbits drink milk does not yield (rabbits; drink; milk).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We show that our new system performs well on a real world evaluation -the TAC KBP Slot Filling challenge (Surdeanu, 2013) . We outperform both an official submission on open IE, and a baseline of replacing our extractor with Ollie, a state-ofthe-art open IE systems.", "cite_spans": [ { "start": 105, "end": 121, "text": "(Surdeanu, 2013)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There is a large body of work on open information extraction. One line of work begins with Text-Runner (Yates et al., 2007) and ReVerb (Fader et al., 2011) , which make use of computationally efficient surface patterns over tokens. With the introduction of fast dependency parsers, Ollie (Mausam et al., 2012) continues in the same spirit but with learned dependency patterns, improving on the earlier WOE system (Wu and Weld, 2010) . The Never Ending Language Learning project (Carlson et al., 2010) has a similar aim, iteratively learning more facts from the internet from a seed set of examples. 
Exemplar (Mesquita et al., 2013) adapts the open IE framework to nary relationships similar to semantic role labeling, but without the expensive machinery.", "cite_spans": [ { "start": 103, "end": 123, "text": "(Yates et al., 2007)", "ref_id": "BIBREF43" }, { "start": 135, "end": 155, "text": "(Fader et al., 2011)", "ref_id": "BIBREF12" }, { "start": 288, "end": 309, "text": "(Mausam et al., 2012)", "ref_id": "BIBREF23" }, { "start": 413, "end": 432, "text": "(Wu and Weld, 2010)", "ref_id": "BIBREF41" }, { "start": 478, "end": 500, "text": "(Carlson et al., 2010)", "ref_id": "BIBREF5" }, { "start": 608, "end": 631, "text": "(Mesquita et al., 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Open IE triples have been used in a number of applications -for example, learning entailment graphs for new triples (Berant et al., 2011) , and matrix factorization for unifying open IE and structured relations (Yao et al., 2012; Riedel et al., 2013) . In each of these cases, the concise extractions provided by open IE allow for efficient symbolic methods for entailment, such as Markov logic networks or matrix factorization.", "cite_spans": [ { "start": 116, "end": 137, "text": "(Berant et al., 2011)", "ref_id": "BIBREF3" }, { "start": 211, "end": 229, "text": "(Yao et al., 2012;", "ref_id": "BIBREF42" }, { "start": 230, "end": 250, "text": "Riedel et al., 2013)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Prior work on the KBP challenge can be categorized into a number of approaches. The most common of these are distantly supervised relation extractors (Craven and Kumlien, 1999; Wu and Weld, 2007; Mintz et al., 2009; Sun et al., 2011) , and rule based systems (Soderland, 1997; Grishman and Min, 2010; Chen et al., 2010) . However, both of these approaches require careful tuning to the task, and need to be trained explicitly on the KBP relation schema. Soderland et al. (2013) submitted a system to KBP making use of open IE relations and an easily constructed mapping to KBP relations; we use this as a baseline for our empirical evaluation.", "cite_spans": [ { "start": 150, "end": 176, "text": "(Craven and Kumlien, 1999;", "ref_id": "BIBREF8" }, { "start": 177, "end": 195, "text": "Wu and Weld, 2007;", "ref_id": "BIBREF40" }, { "start": 196, "end": 215, "text": "Mintz et al., 2009;", "ref_id": "BIBREF26" }, { "start": 216, "end": 233, "text": "Sun et al., 2011)", "ref_id": "BIBREF36" }, { "start": 259, "end": 276, "text": "(Soderland, 1997;", "ref_id": "BIBREF35" }, { "start": 277, "end": 300, "text": "Grishman and Min, 2010;", "ref_id": "BIBREF15" }, { "start": 301, "end": 319, "text": "Chen et al., 2010)", "ref_id": null }, { "start": 454, "end": 477, "text": "Soderland et al. (2013)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Prior work has used natural logic for RTE-style textual entailment, as a formalism well-suited for formal semantics in neural networks, and as a framework for common-sense reasoning (Mac-Cartney and Manning, 2009; Watanabe et al., 2012; Bowman et al., 2014; Angeli and Manning, 2013) . We adopt the precise semantics of Icard and Moss (2014) . Our approach of finding short entailments from a longer utterance is similar in spirit to work on textual entailment for information extraction (Romano et al., 2006 Figure 2: An illustration of our approach. 
From left to right, a sentence yields a number of independent clauses (e.g., she Born in a small town -see Section 3). From top to bottom, each clause produces a set of entailed shorter utterances, and segments the ones which match an atomic pattern into a relation triple (see Section 4.1).", "cite_spans": [ { "start": 182, "end": 213, "text": "(Mac-Cartney and Manning, 2009;", "ref_id": null }, { "start": 214, "end": 236, "text": "Watanabe et al., 2012;", "ref_id": "BIBREF39" }, { "start": 237, "end": 257, "text": "Bowman et al., 2014;", "ref_id": "BIBREF4" }, { "start": 258, "end": 283, "text": "Angeli and Manning, 2013)", "ref_id": "BIBREF0" }, { "start": 320, "end": 341, "text": "Icard and Moss (2014)", "ref_id": "BIBREF16" }, { "start": 488, "end": 508, "text": "(Romano et al., 2006", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Inter-Clause Open IE", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the first stage of our method, we produce a set of self-contained clauses from a longer utterance. Our objective is to produce a set of clauses which can stand on their own syntactically and semantically, and are entailed by the original sentence (see Figure 2 ). Note that this task is not specific to extracting open IE triples. Conventional relation extractors, entailment systems, and other NLP applications may also benefit from such a system. We frame this task as a search problem. At a given node in the parse tree, we classify each outgoing arc e = p l \u2212 \u2192 c, from the governor p to a dependent c with [collapsed] Stanford Dependency label l, into an action to perform on that arc. Once we have chosen an action to take on that arc, we can recurse on the dependent node. We decompose the action into two parts: (1) the action to take on the outgoing edge e, and (2) the action to take on the governor p. For example, in our motivating example, we are considering the arc: e = took vmod \u2212 \u2212\u2212 \u2192 born. In this case, the correct action is to (1) yield a new clause rooted at born, and (2) interpret the subject of born as the subject of took.", "cite_spans": [], "ref_spans": [ { "start": 255, "end": 263, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We proceed to describe this action space in more detail, followed by an explanation of our training data, and finally our classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The three actions we can perform on a dependency edge are: Stop Do not recurse on this arc, as the subtree under this arc is not entailed by the parent sentence. This is the case, for example, for most leaf nodes (furry cats are cute should not entail the clause furry), and is an important action for the efficiency of the algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Space", "sec_num": "3.1" }, { "text": "With these three actions, a search path through the tree becomes a sequence of Recurse and Yield actions, terminated by a Stop action (or leaf node). For example, a search sequence A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Space", "sec_num": "3.1" }, { "text": "\u2212 \u2212\u2212\u2212\u2212 \u2192 B Y ield \u2212 \u2212\u2212 \u2192 C Stop \u2212 \u2212\u2212 \u2192 D would yield a clause rooted at C. 
A sequence A Y ield \u2212 \u2212\u2212 \u2192 B Y ield \u2212 \u2212\u2212 \u2192 C Stop \u2212 \u2212\u2212 \u2192 D", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurse", "sec_num": null }, { "text": "would yield clauses rooted at both B and C. Finding all such sequences is in general exponential in the size of the tree. In practice, during training we run breadth first search to collect the first 10 000 sequences. During inference we run uniform cost search until our classifier predictions fall below a given threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurse", "sec_num": null }, { "text": "For the Stop action, we do not need to further specify an action to take on the parent node. However, for both of the other actions, it is often the case that we would like to capture a controller in the higher clause. We define three such common actions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurse", "sec_num": null }, { "text": "Subject Controller If the arc we are considering is not already a subject arc, we can copy the subject of the parent node and attach it as a subject of the child node. This is the action taken in the example Born in a small town, she took the midnight train.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurse", "sec_num": null }, { "text": "Object Controller Analogous to the subject controller action above, but taking the object instead. This is the intended action for examples like I persuaded Fred to leave the room. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurse", "sec_num": null }, { "text": "Parent Subject If the arc we are taking is the only outgoing arc from a node, we take the parent node as the (passive) subject of the child. This is the action taken in the example Obama, our 44 th president to yield a clause with the semantics of Obama [is] our 44 th president.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurse", "sec_num": null }, { "text": "Although additional actions are easy to imagine, we found empirically that these cover a wide range of applicable cases. We turn our attention to the training data for learning these actions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recurse", "sec_num": null }, { "text": "We collect a noisy dataset to train our clause generation model. We leverage the distant supervision assumption for relation extraction, which creates a noisy corpus of sentences annotated with relation mentions (subject and object spans in the sentence with a known relation). Then, we take this annotation as itself distant supervision for a correct sequence of actions to take: any sequence which recovers the known relation is correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2" }, { "text": "We use a small subset of the KBP source documents for 2010 and 2013 (Surdeanu, 2013) as our distantly supervised corpus. To try to maximize the density of known relations in the training sentences, we take all sentences which have at least one known relation for every 10 tokens in the sentence, resulting in 43 155 sentences. 
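As a rough illustration of the action space from Section 3.1 and the breadth-first collection of action sequences used during training, consider the following sketch. It uses hypothetical data structures, omits the controller actions and the classifier scoring, and is not the authors' implementation.

```python
# Sketch of the clause-splitting action space (hypothetical structures).
# Each outgoing dependency arc receives one of three actions; a Yield
# action corresponds to emitting a clause rooted at the arc's dependent.
from collections import deque
from dataclasses import dataclass, field
from enum import Enum

class Action(Enum):
    RECURSE = "recurse"   # keep searching below this arc, no clause yet
    YIELD = "yield"       # emit a clause rooted at the dependent
    STOP = "stop"         # subtree is not entailed; do not descend

@dataclass
class Node:
    word: str
    children: list = field(default_factory=list)  # (dep_label, Node) pairs

def enumerate_sequences(root, max_sequences=10_000):
    """Breadth-first enumeration of action sequences, mirroring the
    training-time search that collects the first 10,000 sequences."""
    sequences = []
    queue = deque([(root, [])])          # (current node, actions so far)
    while queue and len(sequences) < max_sequences:
        node, actions = queue.popleft()
        for label, child in node.children:
            for action in Action:
                seq = actions + [(label, child.word, action)]
                sequences.append(seq)
                if action is not Action.STOP:
                    queue.append((child, seq))
    return sequences

# e.g., "Born in a small town, she took the midnight train":
# took -vmod-> born, took -nsubj-> she
born = Node("born")
tree = Node("took", children=[("vmod", born), ("nsubj", Node("she"))])
print(len(enumerate_sequences(tree)))
```

In this toy setting, a Yield on the vmod arc corresponds to emitting the clause rooted at born, as in the example from Figure 1.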
In addition, we incorporate the 23 725 manually annotated examples from Angeli et al. (2014).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2" }, { "text": "Once we are given a collection of labeled sentences, we assume that a sequence of actions which leads to a correct extraction of a known relation is a positive sequence. A correct extraction is any extraction we produce from our model (see Section 4) which has the same arguments as the known relation. For instance, if we know that Obama was born in Hawaii from the sentence Born in Hawaii, Obama . . . , and an action sequence produces the triple (Obama, born in, Hawaii), then we take that action sequence as a positive sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2" }, { "text": "Any sequence of actions which results in a clause which produces no relations is in turn considered a negative sequence. The third case to consider is a sequence of actions which produces a relation, but it is not one of the annotated relations. This arises from the incomplete negatives problem in distantly supervised relation extraction (Min et al., 2013): since our knowledge base is not exhaustive, we cannot be sure if an extracted relation is incorrect or correct but previously unknown. Although many of these unmatched relations are indeed incorrect, the dataset is sufficiently biased towards the Stop action that the occasional false negative hurts end-to-end performance. Therefore, we simply discard such sequences.", "cite_spans": [ { "start": 340, "end": 358, "text": "(Min et al., 2013)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2" }, { "text": "Given a set of noisy positive and negative sequences, we construct training data for our action classifier. All but the last action in a positive sequence are added to the training set with the label Recurse; the last action is added with the label Split. Only the last action in a negative sequence is added with the label Stop. We partition the feature space of our dataset according to the action applied to the parent node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2" }, { "text": "We train a multinomial logistic regression classifier on our noisy training data, using the features in Table 1. The most salient features are the label of the edge being taken, the incoming edge to the parent of the edge being taken, neighboring edges for both the parent and child of the edge, and the part-of-speech tag of the endpoints of the edge. The dataset is weighted to give 3\u00d7 weight to examples in the Recurse class, as precision errors in this class are relatively harmless for accuracy, while recall errors are directly harmful to recall.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Inference", "sec_num": "3.3" }, { "text": "In Table 1, the feature templates are the particular templates used. For instance, the POS signature contains the tag of the parent, the tag of the child, and both tags joined in a single feature. Note that all features are joined with the action to be taken on the parent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.3" }, { "text": "Inference now reduces to a search problem. Beginning at the root of the tree, we consider every outgoing edge. 
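Concretely, each candidate arc is mapped to a feature vector of the kind described in the preceding paragraph before the classifier scores the possible actions. The sketch below is a simplified, hypothetical featurizer; the actual feature templates are those listed in Table 1.

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical, simplified stand-in for a dependency arc; the real
# system operates on Stanford Dependencies parse trees.
@dataclass
class Arc:
    label: str                        # e.g. "vmod"
    parent_pos: str                   # POS tag of the governor
    child_pos: str                    # POS tag of the dependent
    parent_incoming: str              # label of the edge into the governor
    neighbor_labels: Tuple[str, ...]  # labels of the governor's other edges

def featurize(arc: Arc, parent_action: str) -> dict:
    """Sketch of the feature classes described above: the edge label,
    the parent's incoming edge, neighboring edges, and a POS signature,
    each conjoined with the action taken on the parent."""
    feats = {
        "edge_label": arc.label,
        "parent_incoming": arc.parent_incoming,
        "neighbors": ",".join(sorted(arc.neighbor_labels)),
        "pos_signature": f"{arc.parent_pos}_{arc.child_pos}",
    }
    return {f"{parent_action}::{k}={v}": 1.0 for k, v in feats.items()}

# Example: the arc took -vmod-> born from the sentence in Figure 1.
arc = Arc("vmod", "VBD", "VBN", "root", ("nsubj", "dobj"))
print(featurize(arc, parent_action="subject_controller"))
```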
For every possible action to be performed on the parent (i.e., clone subject, clone root, no action), we apply our trained classifier to determine whether we (1) split the edge off as a clause, and recurse; (2) do not split the edge, and recurse; or (3) do not recurse. In the first two cases, we recurse on the child of the arc, and continue until either all arcs have been exhausted, or all remaining candidate arcs have been marked as not recursable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.3" }, { "text": "We will use the scores from this classifier to inform the score assigned to our generated open IE extractions (Section 4). The score of a clause is the product of the scores of actions taken to reach the clause. The score of an extraction will be this score multiplied by the score of the extraction given the clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.3" }, { "text": "We now turn to the task of generating a maximally compact sentence which retains the core semantics of the original utterance, and parsing the sentence into a conventional open IE subject verb object triple. This is often a key component in downstream applications, where extractions need to be not only correct, but also informative. Whereas an argument like Heinz Fischer of Austria is often correct, a downstream application must apply further processing to recover information about either Heinz Fischer, or Austria. Moreover, it must do so without the ability to appeal to the larger context of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intra-Clause Open IE", "sec_num": "4" }, { "text": "We adopt a subset of natural logic semantics dictating contexts in which lexical items can be removed. Natural logic as a formalism captures common logical inferences appealing directly to the form of language, rather than parsing to a specialized logical syntax. It provides a proof theory for lexical mutations to a sentence which either preserve or negate the truth of the premise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Deletions with Natural Logic", "sec_num": "4.1" }, { "text": "For instance, if all rabbits eat vegetables then all cute rabbits eat vegetables, since we are allowed to mutate the lexical item rabbit to cute rabbit. This is done by observing that rabbit is in scope of the first argument to the operator all. Since all induces a downward polarity environment for its first argument, we are allowed to replace rabbit with an item which is more specificin this case cute rabbit. To contrast, the operator some induces an upward polarity environment for its first argument, and therefore we may derive the inference from cute rabbit to rabbit in: some cute rabbits are small therefore some rabbits are small. For a more comprehensive introduction to natural logic, see van Benthem (2008) .", "cite_spans": [ { "start": 707, "end": 721, "text": "Benthem (2008)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Validating Deletions with Natural Logic", "sec_num": "4.1" }, { "text": "We mark the scopes of all operators (all, no, many, etc.) in a sentence, and from this determine whether every lexical item can be replaced by something more general (has upward polarity), more specific (downward polarity), or neither. 
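The following toy sketch illustrates this bookkeeping. The operator table and the scoping rule (everything to the right of an operator is treated as its first argument) are drastically simplified assumptions for illustration; the actual system marks operator scopes on the dependency parse.

```python
# Toy polarity bookkeeping (not the authors' implementation).
# Deleting a modifier makes a constituent more general, which preserves
# truth only in an upward-polarity context.

# Hypothetical table: polarity each operator induces on its first argument.
OPERATOR_FIRST_ARG_POLARITY = {
    "some": "up",
    "all": "down",
    "no": "down",
}

def token_polarity(tokens, index):
    """Polarity of tokens[index]: upward unless it falls in the (here,
    crudely approximated) first argument of a downward operator."""
    polarity = "up"  # in the absence of operators, everything is upward
    for i, tok in enumerate(tokens):
        if tok in OPERATOR_FIRST_ARG_POLARITY and i < index:
            polarity = OPERATOR_FIRST_ARG_POLARITY[tok]
    return polarity

def deletion_licensed(tokens, index):
    """A modifier may be dropped only where its polarity is upward."""
    return token_polarity(tokens, index) == "up"

print(deletion_licensed("some house cats have rabies".split(), 1))  # True
print(deletion_licensed("no house cats have rabies".split(), 1))    # False
```

Under this simplification, house may be dropped from some house cats have rabies but not from no house cats have rabies, in line with the examples discussed earlier.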
In the absence of operators, all items have upward polarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Deletions with Natural Logic", "sec_num": "4.1" }, { "text": "Each dependency arc is then classified into whether deleting the dependent of that arc makes the governing constituent at that node more general, more specific (a rare case), or neither. 2 For example, removing the amod edge (i.e., the modifier cute) in cute rabbit yields the more general lexical item rabbit. However, removing the nsubj edge in For most dependencies, this semantics can be hard-coded with high accuracy. However, there are at least two cases where more attention is warranted. The first of these concerns non-subsective adjectives: for example, a fake gun is not a gun. For this case, we make use of the list of non-subsective adjectives collected in Nayak et al. (2014), and prohibit their deletion as a hard constraint.", "cite_spans": [ { "start": 655, "end": 674, "text": "Nayak et al. (2014)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Validating Deletions with Natural Logic", "sec_num": "4.1" }, { "text": "The second concern is with prepositional attachment and direct object edges. For example,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Deletions with Natural Logic", "sec_num": "4.1" }, { "text": "whereas Alice went to the playground We learn these attachment affinities empirically from the syntactic n-grams corpus of Goldberg and Orwant (2013). This gives us counts for how often object and preposition edges occur in the context of the governing verb and relevant neighboring edges. We hypothesize that edges which are frequently seen to co-occur are likely to be essential to the meaning of the sentence. To this end, we compute the probability of seeing an arc of a given type, conditioned on the most specific context we have statistics for. These contexts, and the order in which we back off to more general contexts, are given in Figure 3.", "cite_spans": [ { "start": 123, "end": 149, "text": "Goldberg and Orwant (2013)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 633, "end": 641, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Validating Deletions with Natural Logic", "sec_num": "4.1" }, { "text": "To compute a score s of deleting the edge from the affinity probability p collected from the syntactic n-grams, we simply cap the affinity and subtract it from 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Deletions with Natural Logic", "sec_num": "4.1" }, { "text": "s = 1 \u2212 min(1, p / K)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Deletions with Natural Logic", "sec_num": "4.1" }, { "text": "where K is a hyperparameter denoting the minimum fraction of the time an edge should occur in a context to be considered entirely unremovable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Deletions with Natural Logic", "sec_num": "4.1" }, { "text": "In our experiments, we set K = 1/3. The score of an extraction, then, is the product of the scores of each deletion multiplied by the score from the clause splitting step in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating Deletions with Natural Logic", "sec_num": "4.1" }, { "text": "Once a set of short entailed sentences is produced, it becomes straightforward to segment them into conventional open IE triples. 
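As an illustration, the simplest such segmentation is essentially a subject-verb-object configuration over the clause's dependency parse. The sketch below is a simplified stand-in, not one of the system's actual patterns.

```python
# Sketch of the simplest kind of atomic pattern: a clause whose root verb
# has both a subject and an object yields (subject; verb; object). This is
# an illustration only; the real patterns are written over Stanford
# Dependencies parses and handle argument spans, copulas, prepositions, etc.
def svo_triple(tokens, arcs):
    """tokens: list of words; arcs: list of (head_idx, label, dep_idx)."""
    subj = obj = None
    for head, label, dep in arcs:
        if label == "nsubj":
            subj, verb = dep, head
        elif label == "dobj":
            obj = dep
    if subj is not None and obj is not None:
        return (tokens[subj], tokens[verb], tokens[obj])
    return None

tokens = ["Heinz", "Fischer", "visits", "China"]
arcs = [(2, "nsubj", 1), (2, "dobj", 3)]
print(svo_triple(tokens, arcs))  # ('Fischer', 'visits', 'China')
```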
We employ 6 simple dependency patterns, given in Table 2 , which cover the majority of atomic relations we are interested in.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Atomic Patterns", "sec_num": "4.2" }, { "text": "When information is available to disambiguate the substructure of compound nouns (e.g., named entity segmentation), we extract additional relations with 5 dependency and 3 TokensRegex (Chang and Manning, 2014) surface form patterns. These are given in Table 3; we refer to these as nominal relations. Note that the constraint of named entity information is by no means required for the system. In other applications -for example, applications in vision -the otherwise trivial nominal relations could be quite useful. For instance, the apposition in Our president, Obama, yields the nominal triple (Our president; be; Obama).", "cite_spans": [ { "start": 185, "end": 210, "text": "(Chang and Manning, 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 252, "end": 259, "text": "Table 3", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Atomic Patterns", "sec_num": "4.2" }, { "text": "A common use case for open IE systems is to map them to a known relation schema. This can either be done manually with minimal annotation effort, or automatically from available training data. We use both methods in our TAC-KBP evaluation. A collection of relation mappings was constructed by a single annotator in approximately a day, 3 and a relation mapping was learned using the procedure described in this section. We map open IE relations to the KBP schema by searching for co-occurring relations in a large distantly-labeled corpus, and marking open IE and KBP relation pairs which have a high PMI^2 value (B\u00e9atrice, 1994; Evert, 2005) conditioned on their type signatures matching. To compute PMI^2, we collect probabilities for the open IE and KBP relation co-occurring, the probability of the open IE relation occurring, and the probability of the KBP relation occurring. Each of these probabilities is conditioned on the type signature of the relation. 
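These probabilities can be estimated directly from co-occurrence counts over the distantly labeled corpus; the sketch below, with purely hypothetical counts, computes the PMI^2 score that is formalized in the equations that follow.

```python
from collections import Counter
from math import log

# Minimal sketch of the PMI^2 mapping score, computed from hypothetical
# co-occurrence counts. Counts are keyed by (kbp_relation, openie_relation)
# within a single type signature (t1, t2).
joint_counts = Counter({
    ("Org:Founded", "found in"): 40,
    ("Org:FoundedBy", "invest fund of"): 3,
    ("Org:Founded", "be bear in"): 1,
})

def pmi2(r_k, r_o, counts):
    total = sum(counts.values())
    p_joint = counts[(r_k, r_o)] / total
    p_k = sum(c for (k, _), c in counts.items() if k == r_k) / total
    p_o = sum(c for (_, o), c in counts.items() if o == r_o) / total
    # PMI^2(r_k, r_o) = log( p(r_k, r_o)^2 / (p(r_k) * p(r_o)) )
    #                 = log( p(r_k | r_o) * p(r_o | r_k) )
    return log(p_joint ** 2 / (p_k * p_o))

print(pmi2("Org:Founded", "found in", joint_counts))
```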
For example, the joint probability of KBP relation r_k and open IE relation r_o, given a type signature of t_1, t_2, would be", "cite_spans": [ { "start": 613, "end": 629, "text": "(B\u00e9atrice, 1994;", "ref_id": "BIBREF2" }, { "start": 630, "end": 642, "text": "Evert, 2005)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Mapping OpenIE to a Known Relation Schema", "sec_num": "5" }, { "text": "p(r_k, r_o | t_1, t_2) = count(r_k, r_o, t_1, t_2) / \u2211_{r_k, r_o} count(r_k, r_o, t_1, t_2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping OpenIE to a Known Relation Schema", "sec_num": "5" }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping OpenIE to a Known Relation Schema", "sec_num": "5" }, { "text": "Omitting the conditioning on the type signature for notational convenience, and defining p(r_k) and p(r_o) analogously, we can then compute the PMI^2 value between the two relations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping OpenIE to a Known Relation Schema", "sec_num": "5" }, { "text": "PMI^2(r_k, r_o) = log [ p(r_k, r_o)^2 / (p(r_k) \u2022 p(r_o)) ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping OpenIE to a Known Relation Schema", "sec_num": "5" }, { "text": "Note that in addition to being a measure related to PMI, this captures a notion similar to alignment by agreement (Liang et al., 2006); the formula can be equivalently written", "cite_spans": [ { "start": 114, "end": 134, "text": "(Liang et al., 2006)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Mapping OpenIE to a Known Relation Schema", "sec_num": "5" }, { "text": "as log [p(r_k | r_o) p(r_o | r_k)].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping OpenIE to a Known Relation Schema", "sec_num": "5" }, { "text": "It is also functionally the same as the JC WordNet distance measure (Jiang and Conrath, 1997).", "cite_spans": [ { "start": 68, "end": 93, "text": "(Jiang and Conrath, 1997)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Mapping OpenIE to a Known Relation Schema", "sec_num": "5" }, { "text": "Some sample type-checked relation mappings are given in Table 4. In addition to intuitive mappings (e.g., found in \u2192 Org:Founded), we can note some rare but high-precision pairs (e.g., invest fund of \u2192 Org:Founded By). We can also see the noise in distant supervision occasionally permeate the mapping, e.g., with elect president of \u2192 Per:LOC Of Death -a president is likely to die in his own country.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Mapping OpenIE to a Known Relation Schema", "sec_num": "5" }, { "text": "We evaluate our approach in the context of a real-world end-to-end relation extraction task -the TAC KBP Slot Filling challenge. In Slot Filling, we are given a large unlabeled corpus of text, a fixed schema of relations (see Section 5), and a set of query entities. The task is to find all relation triples in the corpus that have as a subject the query entity, and as a relation one of the defined relations. 
This can be viewed intuitively as populating Wikipedia Infoboxes from a large unstructured corpus of text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6" }, { "text": "We compare our approach to the University of Washington submission to TAC-KBP 2013 (Soderland et al., 2013) . Their system used OpenIE v4.0 (a successor to Ollie) run over the KBP corpus and then they generated a mapping from the extracted relations to the fixed schema. Unlike our system, Open IE v4.0 employs a semantic role component extracting structured SRL frames, alongside a conventional open IE system. Furthermore, the UW submission allows for extracting relations and entities from substrings of an open IE triple argument. For example, from the triple (Smith; was appointed; acting director of Acme Corporation), they extract that Smith is employed by Acme Corporation. We disallow such extractions, passing the burden of finding correct precise extractions to the open IE system itself (see Section 4).", "cite_spans": [ { "start": 83, "end": 107, "text": "(Soderland et al., 2013)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6" }, { "text": "For entity linking, the UW submission uses Tom Lin's entity linker (Lin et al., 2012) ; our submission uses the Illinois Wikifier (Ratinov et al., 2011) without the relational inference component, for efficiency. For coreference, UW uses the Stanford coreference system (Lee et al., 2011) ; we employ a variant of the simple coref system described in (Pink et al., 2014) .", "cite_spans": [ { "start": 67, "end": 85, "text": "(Lin et al., 2012)", "ref_id": "BIBREF21" }, { "start": 130, "end": 152, "text": "(Ratinov et al., 2011)", "ref_id": "BIBREF29" }, { "start": 270, "end": 288, "text": "(Lee et al., 2011)", "ref_id": "BIBREF19" }, { "start": 351, "end": 370, "text": "(Pink et al., 2014)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6" }, { "text": "We report our results in Table 5 . 4 UW Official refers to the official submission in the 2013 challenge; we show a 3. Table 5 : A summary of our results on the endto-end KBP Slot Filling task. UW official is the submission made to the 2013 challenge. The second row is the accuracy of Ollie embedded in our framework, and of Ollie evaluated with nominal relations from our system. Lastly, we report our system, our system with nominal relations removed, and our system combined with an alternate names detector and rule-based website detector. Comparable systems are marked with a dagger \u2020 or asterisk * . F 1 ) over this submission, evaluated using a comparable approach. A common technique in KBP systems but not employed by the official UW submission in 2013 is to add alternate names based on entity linking and coreference. Additionally, websites are often extracted using heuristic namematching as they are hard to capture with traditional relation extraction techniques. If we make use of both of these, our end-to-end accuracy becomes 28.2 F 1 .", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 5", "ref_id": null }, { "start": 119, "end": 126, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "6" }, { "text": "We attempt to remove the variance in scores from the influence of other components in an endto-end KBP system. 
We ran the Ollie open IE system (Mausam et al., 2012) in an identical framework to ours, and report accuracy in Table 5 . Note that when an argument to an Ollie extraction contains a named entity, we take the argument to be that named entity. The low performance of this system can be partially attributed to its inability to extract nominal relations. To normalize for this, we report results when the Ollie extractions are supplemented with the nominal relations produced by our system (Ollie + Nominal Rels in Table 5 ). Conversely, we can remove the nominal relation extractions from our system; in both cases we outperform Ollie on the task. Figure 4 : A precision/recall curve for Ollie and our system (without nominals). For clarity, recall is plotted on a range from 0 to 0.15.", "cite_spans": [ { "start": 143, "end": 164, "text": "(Mausam et al., 2012)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 223, "end": 230, "text": "Table 5", "ref_id": null }, { "start": 624, "end": 631, "text": "Table 5", "ref_id": null }, { "start": 758, "end": 766, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "6" }, { "text": "We plot a precision/recall curve of our extractions in Figure 4 in order to get an informal sense of the calibration of our confidence estimates. Since confidences only apply to standard extractions, we plot the curves without including any of the nominal relations. The confidence of a KBP extraction in our system is calculated as the sum of the confidences of the open IE extractions that support it. So, for instance, if we find (Obama; be bear in; Hawaii) n times with confidences c 1 . . . c n , the confidence of the KBP extraction would be n i=0 c i . It is therefore important to note that the curve in Figure 4 necessarily conflates the confidences of individual extractions, and the frequency of an extraction.", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 63, "text": "Figure 4", "ref_id": null }, { "start": 612, "end": 620, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6.1" }, { "text": "With this in mind, the curves lend some interesting insights. Although our system is very high precision on the most confident extractions, it has a large dip in precision early in the curve. This suggests that the model is extracting multiple instances of a bad relation. Systematic errors in the clause splitter are the likely cause of these errors. While the approach of splitting sentences into clauses generalizes better to out-of-domain text, it is reasonable that the errors made in the clause splitter manifest across a range of sentences more often than the fine-grained patterns of Ollie would.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.1" }, { "text": "On the right half of the PR curve, however, our system achieves both higher precision and extends to a higher recall than Ollie. Furthermore, the curve is relatively smooth near the tail, suggesting that indeed we are learning a reasonable estimate of confidence for extractions that have only one supporting instance in the text -empirically, 46% of our extractions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.1" }, { "text": "In total, we extract 42 662 862 open IE triples which link to a pair of entities in the corpus (i.e., are candidate KBP extractions), covering 1 180 770 relation types. 
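As an aside on the confidence calculation described above (the confidence of a KBP extraction is the sum of the confidences of the open IE extractions supporting it), here is a tiny sketch of the aggregation with illustrative triples and confidence values only.

```python
from collections import defaultdict

# Sketch of KBP confidence aggregation: each supporting open IE extraction
# contributes its confidence to the aggregated KBP extraction.
def aggregate(openie_extractions):
    """openie_extractions: iterable of ((subject, relation, object), confidence)."""
    kbp_confidence = defaultdict(float)
    for triple, confidence in openie_extractions:
        kbp_confidence[triple] += confidence
    return dict(kbp_confidence)

supports = [
    (("Obama", "be bear in", "Hawaii"), 0.9),
    (("Obama", "be bear in", "Hawaii"), 0.7),
]
print(aggregate(supports))  # {('Obama', 'be bear in', 'Hawaii'): 1.6}
```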
202 797 of these relation types appear in more than 10 extraction instances; 28 782 in more than 100 instances, and 4079 in more than 1000 instances. 308 293 relation types appear only once. Note that our system over-produces extractions when both a general and specific extraction are warranted; therefore these numbers are an overestimate of the number of semantically meaningful facts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.1" }, { "text": "For comparison, Ollie extracted 12 274 319 triples, covering 2 873 239 relation types. 1 983 300 of these appeared only once; 69 010 appeared in more than 10 instances, 7951 in more than 100 instances, and 870 in more than 1000 instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.1" }, { "text": "We have presented a system for extracting open domain relation triples by breaking a long sentence into short, coherent clauses, and then finding the maximally simple relation triples which are warranted given each of these clauses. This allows the system to have a greater awareness of the context of each extraction, and to provide informative triples to downstream applications. We show that our approach performs well on one such downstream application: the KBP Slot Filling task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The system currently misses most most such cases due to insufficient support in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the Stanford Dependencies representation(de Marneffe and Manning, 2008).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The official submission we compare against claimed two weeks for constructing their manual mapping, although a version of their system constructed in only 3 hours performs nearly as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "All results are reported with the anydoc flag set to true in the evaluation script, meaning that only the truth of the extracted knowledge base entry and not the associated provenance is scored. In absence of human evaluators, this is in order to not penalize our system unfairly for extracting a new correct provenance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their thoughtful feedback. Stanford University gratefully acknowledges the support of a Natural Language Understanding-focused gift from Google Inc. and the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040. 
Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the US government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Philosophers are mortal: Inferring the truth of unseen facts", "authors": [ { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabor Angeli and Christopher D. Manning. 2013. Philosophers are mortal: Inferring the truth of un- seen facts. In CoNLL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Combining distant and partial supervision for relation extraction", "authors": [ { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Tibshirani", "suffix": "" }, { "first": "Jean", "middle": [ "Y" ], "last": "Wu", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabor Angeli, Julie Tibshirani, Jean Y. Wu, and Christopher D. Manning. 2014. Combining dis- tant and partial supervision for relation extraction. In EMNLP.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Approche mixte pour l'extraction automatique de terminologie: statistique lexicale et filtres linguistiques", "authors": [ { "first": "", "middle": [], "last": "Daille B\u00e9atrice", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DAILLE B\u00e9atrice. 1994. Approche mixte pour l'extraction automatique de terminologie: statis- tique lexicale et filtres linguistiques. Ph.D. thesis, Th\u00e8se de Doctorat. Universit\u00e9 de Paris VII.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Global learning of typed entailment rules", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Goldberger", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In Proceedings of ACL, Portland, OR.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Recursive neural networks can learn logical semantics", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1406.1827" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Christopher Potts, and Christo- pher D. Manning. 2014. Recursive neural networks can learn logical semantics. 
CoRR, (arXiv:1406.1827).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Toward an architecture for neverending language learning", "authors": [ { "first": "Andrew", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Betteridge", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Kisiel", "suffix": "" }, { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" }, { "first": "Tom M", "middle": [], "last": "Estevam R Hruschka", "suffix": "" }, { "first": "", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2010, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for never- ending language learning. In AAAI.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "TokensRegex: Defining cascaded regular expressions over tokens", "authors": [ { "first": "X", "middle": [], "last": "Angel", "suffix": "" }, { "first": "", "middle": [], "last": "Chang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angel X. Chang and Christopher D. Manning. 2014. TokensRegex: Defining cascaded regular expres- sions over tokens. Technical Report CSTR 2014-02, Department of Computer Science, Stanford Univer- sity.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Constructing biological knowledge bases by extracting information from text sources", "authors": [ { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Kumlien", "suffix": "" } ], "year": 1999, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting informa- tion from text sources. In AAAI.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Stanford typed dependencies representation", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe and Christopher D. Man- ning. 2008. The Stanford typed dependencies rep- resentation. In Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Search needs a shake-up", "authors": [ { "first": "Oren", "middle": [ "Etzioni" ], "last": "", "suffix": "" } ], "year": 2011, "venue": "Nature", "volume": "476", "issue": "7358", "pages": "25--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Etzioni. 2011. Search needs a shake-up. 
Nature, 476(7358):25-26.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The statistics of word cooccurrences: word pairs and collocations", "authors": [ { "first": "Stefan", "middle": [ "Evert" ], "last": "", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Evert. 2005. The statistics of word cooccur- rences: word pairs and collocations. Ph.D. thesis, Universit at Stuttgart.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Identifying relations for open information extraction", "authors": [ { "first": "Anthony", "middle": [], "last": "Fader", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In EMNLP.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Open question answering over curated and extracted knowledge bases", "authors": [ { "first": "Anthony", "middle": [], "last": "Fader", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2014, "venue": "KDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In KDD.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A dataset of syntactic-ngrams over time from a very large corpus of english books", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Orwant", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of english books. In *SEM.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "New York University KBP 2010 slot-filling system", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Bonan", "middle": [], "last": "Min", "suffix": "" } ], "year": 2010, "venue": "Proc. TAC 2010 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Grishman and Bonan Min. 2010. New York University KBP 2010 slot-filling system. In Proc. TAC 2010 Workshop.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Recent progress on monotonicity. Linguistic Issues in Language Technology", "authors": [ { "first": "Thomas", "middle": [], "last": "Icard", "suffix": "" }, { "first": ",", "middle": [], "last": "Iii", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Moss", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Icard, III and Lawrence Moss. 2014. Recent progress on monotonicity. 
Linguistic Issues in Lan- guage Technology.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Overview of the tac 2010 knowledge base population track", "authors": [ { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" }, { "first": "Kira", "middle": [], "last": "Griffitt", "suffix": "" }, { "first": "Joe", "middle": [ "Ellis" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "Third Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heng Ji, Ralph Grishman, Hoa Trang Dang, Kira Grif- fitt, and Joe Ellis. 2010. Overview of the tac 2010 knowledge base population track. In Third Text Analysis Conference.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semantic similarity based on corpus statistics and lexical taxonomy", "authors": [ { "first": "J", "middle": [], "last": "Jay", "suffix": "" }, { "first": "David", "middle": [ "W" ], "last": "Jiang", "suffix": "" }, { "first": "", "middle": [], "last": "Conrath", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 10th International Conference on Research on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay J Jiang and David W Conrath. 1997. Semantic similarity based on corpus statistics and lexical tax- onomy. Proceedings of the 10th International Con- ference on Research on Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Stanford's multi-pass sieve coreference resolution system at the conll-2011 shared task", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2011, "venue": "CoNLL Shared Task", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Ju- rafsky. 2011. Stanford's multi-pass sieve corefer- ence resolution system at the conll-2011 shared task. In CoNLL Shared Task.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Alignment by agreement", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006. Align- ment by agreement. 
In NAACL-HLT.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "No noun phrase left behind: detecting and typing unlinkable entities", "authors": [ { "first": "Thomas", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Mausam", "middle": [], "last": "", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2012, "venue": "EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Lin, Mausam, and Oren Etzioni. 2012. No noun phrase left behind: detecting and typing un- linkable entities. In EMNLP-CoNLL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "An extended model of natural logic", "authors": [ { "first": "Bill", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the eighth international conference on computational semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proceedings of the eighth international conference on computa- tional semantics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Open language learning for information extraction", "authors": [ { "first": "Michael", "middle": [], "last": "Mausam", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Bart", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2012, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In EMNLP.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Effectiveness and efficiency of open relation extraction", "authors": [ { "first": "Filipe", "middle": [], "last": "Mesquita", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Schmidek", "suffix": "" }, { "first": "Denilson", "middle": [], "last": "Barbosa", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filipe Mesquita, Jordan Schmidek, and Denilson Bar- bosa. 2013. Effectiveness and efficiency of open relation extraction. In EMNLP.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Distant supervision for relation extraction with an incomplete knowledge base", "authors": [ { "first": "Bonan", "middle": [], "last": "Min", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Li", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Chang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "David", "middle": [], "last": "Gondek", "suffix": "" } ], "year": 2013, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for relation extraction with an incomplete knowledge base. 
In NAACL-HLT.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" } ], "year": 2009, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In ACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A dictionary of nonsubsective adjectives", "authors": [ { "first": "Neha", "middle": [], "last": "Nayak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Kowarsky", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neha Nayak, Mark Kowarsky, Gabor Angeli, and Christopher D. Manning. 2014. A dictionary of nonsubsective adjectives. Technical Report CSTR 2014-04, Department of Computer Science, Stan- ford University, October.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Analysing recall loss in named entity slot filling", "authors": [ { "first": "Glen", "middle": [], "last": "Pink", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "R", "middle": [ "James" ], "last": "Curran", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Glen Pink, Joel Nothman, and R. James Curran. 2014. Analysing recall loss in named entity slot filling. In EMNLP.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Local and global algorithms for disambiguation to wikipedia", "authors": [ { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2011, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Ratinov, Dan Roth, Doug Downey, and Mike An- derson. 2011. Local and global algorithms for dis- ambiguation to wikipedia. In ACL.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Relation extraction with matrix factorization and universal schemas", "authors": [ { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Limin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Benjamin M", "middle": [], "last": "Marlin", "suffix": "" } ], "year": 2013, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. 
In NAACL-HLT.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Investigating a generic paraphrase-based approach for relation extraction", "authors": [ { "first": "Lorenza", "middle": [], "last": "Romano", "suffix": "" }, { "first": "Milen", "middle": [], "last": "Kouylekov", "suffix": "" }, { "first": "Idan", "middle": [], "last": "Szpektor", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Lavelli", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lorenza Romano, Milen Kouylekov, Idan Szpektor, Ido Dagan, and Alberto Lavelli. 2006. Investigat- ing a generic paraphrase-based approach for relation extraction. EACL.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Studies on natural logic and categorial grammar", "authors": [ { "first": "V\u00edctor Manuel S\u00e1nchez S\u00e1nchez", "middle": [], "last": "Valencia", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V\u00edctor Manuel S\u00e1nchez S\u00e1nchez Valencia. 1991. Stud- ies on natural logic and categorial grammar. Ph.D. thesis, University of Amsterdam.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Adapting open information extraction to domain-specific relations", "authors": [ { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Roof", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Shi", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Mausam", "middle": [], "last": "", "suffix": "" }, { "first": "Oren", "middle": [ "Etzioni" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "AI Magazine", "volume": "31", "issue": "3", "pages": "93--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Soderland, Brendan Roof, Bo Qin, Shi Xu, Mausam, and Oren Etzioni. 2010. Adapting open information extraction to domain-specific relations. AI Magazine, 31(3):93-102.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Open information extraction to KBP relations in 3 hours", "authors": [ { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "John", "middle": [], "last": "Gilmer", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Bart", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2013, "venue": "Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Soderland, John Gilmer, Robert Bart, Oren Et- zioni, and Daniel S. Weld. 2013. Open information extraction to KBP relations in 3 hours. In Text Anal- ysis Conference.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Learning text analysis rules for domain-specific natural language processing", "authors": [ { "first": "G", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "", "middle": [], "last": "Soderland", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen G Soderland. 1997. Learning text analysis rules for domain-specific natural language process- ing. Ph.D. 
thesis, University of Massachusetts.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "New York University 2011 system for KBP slot filling", "authors": [ { "first": "Ang", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Bonan", "middle": [], "last": "Min", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Text Analytics Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ang Sun, Ralph Grishman, Wei Xu, and Bonan Min. 2011. New York University 2011 system for KBP slot filling. In Proceedings of the Text Analytics Conference.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Overview of the tac2013 knowledge base population evaluation: English slot filling and temporal slot filling", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" } ], "year": 2013, "venue": "Sixth Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu. 2013. Overview of the tac2013 knowledge base population evaluation: English slot filling and temporal slot filling. In Sixth Text Analy- sis Conference.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A brief history of natural logic", "authors": [ { "first": "Johan", "middle": [], "last": "Van Benthem", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan van Benthem. 2008. A brief history of natural logic. Technical Report PP-2008-05, University of Amsterdam.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "A latent discriminative model for compositional entailment relation recognition using natural logic", "authors": [ { "first": "Yotaro", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Junta", "middle": [], "last": "Mizuno", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nichols", "suffix": "" }, { "first": "Naoaki", "middle": [], "last": "Okazaki", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" } ], "year": 2012, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yotaro Watanabe, Junta Mizuno, Eric Nichols, Naoaki Okazaki, and Kentaro Inui. 2012. A latent discrim- inative model for compositional entailment relation recognition using natural logic. In COLING.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Autonomously semantifying wikipedia", "authors": [ { "first": "Fei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Daniel S Weld", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the sixteenth ACM conference on information and knowledge management", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Wu and Daniel S Weld. 2007. Autonomously se- mantifying wikipedia. In Proceedings of the six- teenth ACM conference on information and knowl- edge management. 
ACM.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Open information extraction using wikipedia", "authors": [ { "first": "Fei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Daniel S Weld", "suffix": "" } ], "year": 2010, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Wu and Daniel S Weld. 2010. Open information extraction using wikipedia. In ACL. Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Probabilistic databases of universal schema", "authors": [ { "first": "Limin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Webscale Knowledge Extraction", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Limin Yao, Sebastian Riedel, and Andrew McCal- lum. 2012. Probabilistic databases of universal schema. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web- scale Knowledge Extraction.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "TextRunner: Open information extraction on the web", "authors": [ { "first": "Alexander", "middle": [], "last": "Yates", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Cafarella", "suffix": "" }, { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Broadhead", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" } ], "year": 2007, "venue": "ACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. TextRunner: Open information extraction on the web. In ACL-HLT.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Yields a new clause on this dependency arc. A canonical case of this action is the arc suggest ccomp \u2212 \u2212\u2212\u2212 \u2192 brush in Dentists suggest that you should brush your teeth, yielding you should brush your teeth. Recurse Recurse on this dependency arc, but do not yield it as a new clause. For example, in the sentence faeries are dancing in the field where I lost my bike, we must recurse through the intermediate constituent the field where I lost my bike -which itself is not relevant -to get to the clause of interest: I lost my bike.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "runs would yield the unentailed (and nonsensical) phrase runs. The last, rare, case is an edge that causes the resulting item to be more specific -e.g., quantmod: about quantmod \u2190 \u2212\u2212\u2212\u2212\u2212 \u2212 200 is more general than 200.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Bob entails that Alice went to the playground, it is not meaningful to infer that Alice is friends prep with \u2212 \u2212\u2212\u2212\u2212\u2212 \u2192 Bob entails Alice is friends. 
Analogously, Alice played dobj \u2212\u2212\u2192 baseball on Sunday entails that Alice played on Sunday; but, Obama signed dobj \u2212\u2212\u2192 the bill on Sunday should not entail the awkward phrase *Obama signed on Sunday.", "uris": null, "type_str": "figure" }, "TABREF0": { "num": null, "html": null, "text": "Born in Honolulu, Hawaii, Obama is a US Citizen.", "content": "
Our System: (Obama; is; US citizen); (Obama; born in; Honolulu, Hawaii); (Obama; is citizen of; US)
Ollie: (Obama; is; a US citizen); (Obama; be born in; Honolulu); (Honolulu; be born in; Hawaii)

Friends give true praise. Enemies give fake praise.
Our System: (friends; give; true praise); (friends; give; praise)
Ollie: (friends; give; true praise)

Heinz Fischer of Austria visits the US.
Our System: (Heinz Fischer; visits; US)
Ollie: (Heinz Fischer of Austria; visits; the US)
", "type_str": "table" }, "TABREF1": { "num": null, "html": null, "text": ").Born in a small town, she took the midnight train going anywhere.", "content": "
Dependency parse of "Born in a small town, she took the midnight train going anywhere." (arc labels: nsubj, dobj, det, amod, nn, vmod, prep_in); the self-contained clause "she Born in a small town" is extracted from it.

(input) Born in a small town, she took the midnight train going anywhere
  entailed shortenings: she took the midnight train going anywhere; Born in a small town, she took the midnight train; she took the midnight train; she took midnight train; Born in a town, she took the midnight train; . . .
  segmented triple: (she; took; midnight train)

(extracted clause) she Born in a small town
  entailed shortenings: she Born in small town; she Born in a town; she Born in town
  segmented triples: (she; born in; small town); (she; born in; town)
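A minimal sketch, in Python, of the two stages illustrated above (clause extraction over the dependency tree, then entailed shortening), assuming a toy Parse/Edge representation. This is not the authors' implementation: classify stands in for the learned clause classifier, and DELETABLE is a crude stand-in for the natural logic check that decides which arcs may be dropped.

from collections import namedtuple

Edge = namedtuple("Edge", "head dep label")      # token indices into Parse.words
Parse = namedtuple("Parse", "words root edges")

def children(parse, node):
    return [e for e in parse.edges if e.head == node]

def subtree(parse, node):
    nodes, stack = {node}, [node]
    while stack:
        for e in children(parse, stack.pop()):
            nodes.add(e.dep)
            stack.append(e.dep)
    return nodes

def extract_clauses(parse, classify):
    """classify(parse, edge) -> 'yield' | 'recurse' | 'stop'; a learned classifier in the paper."""
    clause_roots, stack = [parse.root], [parse.root]
    while stack:
        for e in children(parse, stack.pop()):
            action = classify(parse, e)
            if action == "yield":        # e.g. ccomp: emit the subordinate clause on its own
                clause_roots.append(e.dep)
                stack.append(e.dep)
            elif action == "recurse":    # descend without emitting a clause
                stack.append(e.dep)
    return clause_roots                  # each entry roots one self-contained clause

DELETABLE = {"det", "amod", "advmod", "nn"}      # illustrative only; not the paper's criterion

def shortenings(parse, clause_root):
    """Shorter entailed variants of a clause; each drops one deletable modifier subtree."""
    keep = subtree(parse, clause_root)
    variants = [render(parse, keep)]
    for e in parse.edges:
        if e.head in keep and e.dep in keep and e.label in DELETABLE:
            variants.append(render(parse, keep - subtree(parse, e.dep)))
    return variants

def render(parse, kept):
    return " ".join(parse.words[i] for i in sorted(kept))

# Toy run on the clause "she took the midnight train":
words = ["she", "took", "the", "midnight", "train"]
edges = [Edge(1, 0, "nsubj"), Edge(1, 4, "dobj"), Edge(4, 2, "det"), Edge(4, 3, "nn")]
print(shortenings(Parse(words, 1, edges), 1))
# ['she took the midnight train', 'she took midnight train', 'she took the train']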
", "type_str": "table" }, "TABREF3": { "num": null, "html": null, "text": "", "content": "
", "type_str": "table" }, "TABREF4": { "num": null, "html": null, "text": "The six dependency patterns used to segment an atomic sentence into an open IE triple.", "content": "", "type_str": "table" }, "TABREF6": { "num": null, "html": null, "text": "A selection of the mapping from KBP to lemmatized open IE relations, conditioned on the types of the arguments being correct. The top one or two relations are shown for 7 person and 6 organization relations. Incorrect or dubious mappings are marked with an asterisk.", "content": "
Input                 | Extraction
Durin, son of Thorin  | (Durin; is son of; Thorin)
Thorin's son, Durin   | (Thorin; 's son; Durin)
IBM CEO Rometty       | (Rometty; is CEO of; IBM)
President Obama       | (Obama; is; President)
Fischer of Austria    | (Fischer; is of; Austria)
IBM's research group  | (IBM; 's; research group)
US president Obama    | (Obama; president of; US)
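A hedged sketch of how the surface-pattern half of this segmentation might be applied. The two regexes and the segment_np helper are invented for illustration and are not the paper's actual patterns (per the caption, the real set is five dependency patterns plus three surface patterns).

import re

SURFACE_PATTERNS = [
    # "IBM CEO Rometty" -> (Rometty; is CEO of; IBM)
    (re.compile(r"^(?P<org>[A-Z][\w.]*) (?P<title>CEO) (?P<name>[A-Z]\w+)$"),
     lambda m: (m.group("name"), "is " + m.group("title") + " of", m.group("org"))),
    # "Thorin's son, Durin" -> (Thorin; 's son; Durin)
    (re.compile(r"^(?P<holder>[A-Z]\w+)'s (?P<rel>\w+), (?P<val>[A-Z]\w+)$"),
     lambda m: (m.group("holder"), "'s " + m.group("rel"), m.group("val"))),
]

def segment_np(phrase):
    """Return the first (subject; relation; object) triple whose pattern matches, else None."""
    for pattern, build in SURFACE_PATTERNS:
        match = pattern.match(phrase)
        if match:
            return build(match)
    return None

print(segment_np("IBM CEO Rometty"))      # ('Rometty', 'is CEO of', 'IBM')
print(segment_np("Thorin's son, Durin"))  # ('Thorin', "'s son", 'Durin')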
", "type_str": "table" }, "TABREF7": { "num": null, "html": null, "text": "The eight patterns used to segment a noun phrase into an open IE triple. The first five are dependency patterns; the last three are surface patterns.", "content": "", "type_str": "table" } } } }