{ "paper_id": "H05-1049", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:34:48.691606Z" }, "title": "Robust Textual Inference via Graph Matching", "authors": [ { "first": "Aria", "middle": [ "D" ], "last": "Haghighi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "region": "CA" } }, "email": "aria42@stanford.edu" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "region": "CA" } }, "email": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "region": "CA" } }, "email": "manning@cs.stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a system for deciding whether a given sentence can be inferred from text. Each sentence is represented as a directed graph (extracted from a dependency parser) in which the nodes represent words or phrases, and the links represent syntactic and semantic relationships. We develop a learned graph matching approach to approximate entailment using the amount of the sentence's semantic content which is contained in the text. We present results on the Recognizing Textual Entailment dataset (Dagan et al., 2005), and show that our approach outperforms Bag-Of-Words and TF-IDF models. In addition, we explore common sources of errors in our approach and how to remedy them.", "pdf_parse": { "paper_id": "H05-1049", "_pdf_hash": "", "abstract": [ { "text": "We present a system for deciding whether a given sentence can be inferred from text. Each sentence is represented as a directed graph (extracted from a dependency parser) in which the nodes represent words or phrases, and the links represent syntactic and semantic relationships. We develop a learned graph matching approach to approximate entailment using the amount of the sentence's semantic content which is contained in the text. We present results on the Recognizing Textual Entailment dataset (Dagan et al., 2005), and show that our approach outperforms Bag-Of-Words and TF-IDF models. In addition, we explore common sources of errors in our approach and how to remedy them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A fundamental stumbling block for several NLP applications is the lack of robust and accurate semantic inference. For instance, question answering systems must be able to recognize, or infer, an answer which may be expressed differently from the query. Information extraction systems must also be able recognize the variability of equivalent linguistic expressions. Document summarization systems must generate succinct sentences which express the same content as the original document. In Machine Translation evaluation, we must be able to recognize legit-imate translations which structurally differ from our reference translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One sub-task underlying these applications is the ability to recognize semantic entailment; whether one piece of text follows from another. 
In contrast to recent work which has successfully utilized logicbased abductive approaches to inference (Moldovan et al., 2003; Raina et al., 2005b) , we adopt a graphbased representation of sentences, and use graph matching approach to measure the semantic overlap of text. Graph matching techniques have proven to be a useful approach for tractable approximate matching in other domains including computer vision. In the domain of language, graphs provide a natural way to express the dependencies between words and phrases in a sentence. Furthermore, graph matching also has the advantage of providing a framework for structural matching of phrases that would be difficult to resolve at the level of individual words.", "cite_spans": [ { "start": 244, "end": 267, "text": "(Moldovan et al., 2003;", "ref_id": "BIBREF6" }, { "start": 268, "end": 288, "text": "Raina et al., 2005b)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We describe our approach in the context of the 2005 Recognizing Textual Entailment (RTE) Challenge (Dagan et al., 2005) , but note that our approach easily extends to other related inference tasks. The system presented here was one component of our research group's 2005 RTE submission (Raina et al., 2005a) which was the top-ranking system according to one of the two evaluation metrics.", "cite_spans": [ { "start": 99, "end": 119, "text": "(Dagan et al., 2005)", "ref_id": "BIBREF2" }, { "start": 286, "end": 307, "text": "(Raina et al., 2005a)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Task Definition and Data", "sec_num": "2" }, { "text": "In the 2005 RTE domain, we are given a set of pairs, each consisting of two parts: 1) the text, a Figure 1 : An example parse tree and the corresponding dependency graph. Each phrase of the parse tree is annotated with its head word, and the parenthetical edge labels in the dependency graph correspond to semantic roles.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 106, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Task Definition and Data", "sec_num": "2" }, { "text": "small passage, 1 and the hypothesis, a single sentence. Our task is to decide if the hypothesis is \"entailed\" by the text. Here, \"entails\" does not mean strict logical implication, but roughly means that a competent speaker with basic world-knowledge would be happy to conclude the hypothesis given the text. This criterion has an aspect of relevance logic as opposed to material implication: while various additional background information may be needed for the hypothesis to follow, the text must substantially support the hypothesis. Despite the informality of the criterion and the fact that the available world knowledge is left unspecified, human judges show extremely good agreement on this task -3 human judges independent of the organizers calculated agreement rates with the released data set ranging from 91-96% (Dagan et al., 2005) . We believe that this in part reflects that the task is fairly natural to human beings. 
For a flavor of the nature (and difficulty) of the task, see Table 1 .", "cite_spans": [ { "start": 823, "end": 843, "text": "(Dagan et al., 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 994, "end": 1001, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Task Definition and Data", "sec_num": "2" }, { "text": "We give results on the data provided for the RTE task which consists of 567 development pairs and 800 test pairs. In both sets the pairs are divided into 7 tasks -each containing roughly the same number of entailed and not-entailed instances -which were used as both motivation and means for obtaining and constructing the data items. We will use the following toy example to illustrate our representation and matching technique:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition and Data", "sec_num": "2" }, { "text": "Text: In 1994, Amazon.com was founded by Jeff Bezos.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition and Data", "sec_num": "2" }, { "text": "Hypothesis: Bezos established a company.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition and Data", "sec_num": "2" }, { "text": "1 Usually a single sentence, but occasionally longer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition and Data", "sec_num": "2" }, { "text": "Perhaps the most common representation of text for assessing content is \"Bag-Of-Words\" or \"Bag-of-N-Grams\" (Papineni et al., 2002) . However, such representations lose syntactic information which can be essential to determining entailment. Consider a Question Answer system searching for an answer to When was Israel established? A representation which did not utilize syntax would probably enthusiastically return an answer from (the 2005 RTE text): The National Institute for Psychobiology in Israel was established in 1979. In this example, it's important to try to match relationships as well as words. In particular, any answer to the question should preserve the dependency between Israel and established. However, in the proposed answer, the expected dependency is missing although all the words are present.", "cite_spans": [ { "start": 107, "end": 130, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF7" }, { "start": 495, "end": 526, "text": "Israel was established in 1979.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Need for Dependencies", "sec_num": "3.1" }, { "text": "Our approach is to view sentences as graphs between words and phrases, where dependency relationships, as in (Lin and Pantel, 2001) , are characterized by the path between vertices.", "cite_spans": [ { "start": 109, "end": 131, "text": "(Lin and Pantel, 2001)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "The Need for Dependencies", "sec_num": "3.1" }, { "text": "Given this representation, we judge entailment by measuring not only how many of the hypothesis vertices are matched to the text but also how well the relationships between vertices in the hypothesis are preserved in their textual counterparts. For the remainder of the section we outline how we produce graphs from text, and in the next section we introduce our graph matching model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Need for Dependencies", "sec_num": "3.1" }, { "text": "Starting with raw English text, we use a version of the parser described in (Klein and Manning, 2003) , to obtain a parse tree. 
Then, we derive a dependency tree representation of the sentence using a slightly modified version of Collins' head propagation rules (Collins, 1999) , which make main verbs not auxiliaries the head of sentences. Edges in the dependency graph are labeled by a set of hand-created tgrep expressions. These labels represent \"surface\" syntax relationships such as subj for subject and amod for adjective modifier, similar to the relations in Minipar (Lin and Pantel, 2001 ). The dependency graph is the basis for our graphical representation, but it is enhanced in the following ways: Sultan Al-Shawi, a.k.a the Attorney, said during a funeral held for the victims, \"They were all children of Iraq killed during the savage bombing.\".", "cite_spans": [ { "start": 76, "end": 101, "text": "(Klein and Manning, 2003)", "ref_id": "BIBREF4" }, { "start": 262, "end": 277, "text": "(Collins, 1999)", "ref_id": "BIBREF1" }, { "start": 575, "end": 596, "text": "(Lin and Pantel, 2001", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "From Text To Graphs", "sec_num": "3.2" }, { "text": "The Attorney, said at the funeral, \"They were all Iraqis killed during the brutal shelling.\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Text To Graphs", "sec_num": "3.2" }, { "text": "Napster, which started as an unauthorized songswapping Web site, has transformed into a legal service offering music downloads for a monthly fee.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Documents (CD)", "sec_num": null }, { "text": "Napster illegally offers music downloads. The country's largest private employer, Wal-Mart Stores Inc., is being sued by a number of its female employees who claim they were kept out of jobs in management because they are women.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Documents (CD)", "sec_num": null }, { "text": "Wal-Mart sued for sexual discrimination.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False", "sec_num": null }, { "text": "True Table 1 : Some Textual Entailment examples. The last three demonstrate some of the harder instances.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "False", "sec_num": null }, { "text": "1. Collapse Collocations and Named-Entities: We \"collapse\" dependency nodes which represent named entities (e.g., Jeff Bezos in Figure fig example) and also collocations listed in Word-Net, including verbs and their adjacent particles (e.g., blow off in He blew off his work) .", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 138, "text": "Figure fig", "ref_id": null } ], "eq_spans": [], "section": "False", "sec_num": null }, { "text": "2. Dependency Folding: As in (Lin and Pantel, 2001 ), we found it useful to fold certain dependencies (such as modifying prepositions) so that modifiers became labels connecting the modifier's governor and dependent directly. For instance, in the text graph in Figure 2 , we have changed in from a word into a relation between its head verb and the head of its NP complement.", "cite_spans": [ { "start": 29, "end": 50, "text": "(Lin and Pantel, 2001", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 261, "end": 270, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "False", "sec_num": null }, { "text": "4. Coreference Links: Using a co-rereference resolution tagger, coref links are added through-out the graph. 
These links allowed connecting the referent entity to the vertices of the referring vertex. In the case of multiple sentence texts, it is our only \"link\" in the graph between entities in the two sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False", "sec_num": null }, { "text": "For the remainder of the paper, we will refer to the text as T and hypothesis as H, and will speak of them in graph terminology. In addition we will use H V and H E to denote the vertices and edges, respectively, of H.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False", "sec_num": null }, { "text": "We take the view that a hypothesis is entailed from the text when the cost of matching the hypothesis graph to the text graph is low. For the remainder of this section, we outline a general model for assigning a match cost to graphs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "For hypothesis graph H, and text graph T , a matching M is a mapping from the vertices of H to those of T . For vertex v in H, we will use M (v) to denote its \"match\" in T . As is common in statistical machine translation, we allow nodes in H to map to fictitious NULL vertices in T if necessary. Suppose the cost of matching M is Cost(M ). If M is the set of such matchings, we define the cost of matching H to T to be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "MatchCost(H, T ) = min M \u2208M Cost(M )", "eq_num": "(1)" } ], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "Suppose we have a model, VertexSub(v, M (v)), which gives us a cost in [0, 1], for substituting vertex v in H for M (v) in T . One natural cost model is to use the normalized cost for each of the vertex substitutions in M :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "VertexCost(M ) = 1 Z v\u2208H V w(v)VertexSub(v, M (v))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "(2) Here, w(v) represents the weight or relative importance for vertex v, and Z =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "v\u2208H V w(v)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "is a normalization constant. In our implementation, the weight of each vertex was based on the part-ofspeech tag of the word or the type of named entity, if applicable. However, there are several other possibilities including using TF-IDF weights for words and phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "Notice that when Cost(M ) takes the form of (2), computing MatchCost(H, T ) is equivalent to finding the minimal cost bipartite graph-matching, which can be efficiently computed using linear programming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "We would like our cost-model to incorporate some measure of how relationships in H are preserved in T under M . 
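Before turning to relation preservation, the vertex-only cost of equations (1) and (2) can be sketched as follows. This is only an illustration: vertex_sub and weight stand in for the VertexSub model of Section 5 and the part-of-speech-based vertex weights mentioned above, and SciPy's assignment solver is used here as one standard way to compute the minimum-cost bipartite matching (the text notes this can equivalently be found by linear programming).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def vertex_match_cost(hyp_vertices, text_vertices, vertex_sub, weight, null_cost=1.0):
    """Normalized minimum cost of mapping hypothesis vertices to text vertices
    (or NULL), in the spirit of equations (1) and (2)."""
    n_h, n_t = len(hyp_vertices), len(text_vertices)
    w = np.array([weight(v) for v in hyp_vertices], dtype=float)
    # One column per text vertex, plus one NULL column per hypothesis vertex,
    # so that any hypothesis vertex may remain unmatched at cost null_cost.
    cost = np.zeros((n_h, n_t + n_h))
    for i, v in enumerate(hyp_vertices):
        for j, t in enumerate(text_vertices):
            cost[i, j] = w[i] * vertex_sub(v, t)
        cost[i, n_t:] = w[i] * null_cost
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum() / w.sum()

# Toy usage with an exact-word-match substitution model and uniform weights.
if __name__ == "__main__":
    sub = lambda v, t: 0.0 if v == t else 1.0
    print(vertex_match_cost(["Bezos", "established", "company"],
                            ["1994", "Amazon.com", "founded", "Jeff_Bezos"],
                            sub, lambda v: 1.0))  # -> 1.0 under exact matching only
```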
Ideally, a matching should preserve all local relationships; i.e, if v \u2192", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "v \u2032 \u2208 H E , then M (v) \u2192 M (v \u2032 ) \u2208 T E .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "When this condition holds for all edges in H, H is isomorphic to a subgraph of T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "What we would like is an approximate notion of isomorphism, where we penalize the distortion of each edge relation in H. Consider an edge e = (v, v \u2032 ) \u2208 H E , and let \u03c6 M (e) be the path from", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "M (v) to M (v \u2032 ) in T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "Again, suppose we have a model, PathSub(e, \u03c6 M (e)) for assessing the \"cost\" of substituting a direct relation e \u2208 H E for its counterpart, \u03c6 M (e), under the matching. This leads to a formulation similar to (2), where we consider the normalized cost of substituting each edge relation in H with a path in T :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "RelationCost(M ) = 1 Z e\u2208H E w(e)PathSub(e, \u03c6 M (e))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "(3) where Z = e\u2208H E w(e) is a normalization constant. As in the vertex case, we have weights for each hypothesis edge, w(e), based upon the edge's label; typically subject and object relations are more important to match than others. Our final matching cost is given by a convex mixture of ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "(M ) = \u03b1VertexCost(M ) + (1 \u2212 \u03b1)RelationCost(M ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "Notice that minimizing Cost(M ) is computationally hard since if our PathSub model assigns zero cost only for preserving edges, then RelationCost(M ) = 0 if and only if H is isomorphic to a subgraph of T . Since subgraph isomophism is an NP-complete problem, we cannot hope to have an efficient exact procedure for minimizing the graph matching cost. As an approximation, we can efficiently find the matching M * which minimizes VertexCost(\u2022); we then perform local greedy hillclimbing search, beginning from M * , to approximate the minimal matching. The allowed operations are changing the assignment of any hypothesis vertex to a text one, and, to avoid ridges, swapping two hypothesis assignments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment by Graph Matching", "sec_num": "4" }, { "text": "In the previous section we described our graph matching model in terms of our VertexSub model, which gives a cost for substituting one graph vertex for another, and PathSub, which gives a cost for substituting the path relationship between two paths in one graph for that in another. 
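Before outlining them, the following sketch shows how the pieces of the previous section fit together: the combined cost is the convex mixture of vertex and relation costs, search starts from the matching that minimizes the vertex cost alone, and greedy reassignments and swaps approximate the NP-hard exact minimum. Here vertex_cost and relation_cost are assumed to be callables implementing equations (2) and (3), and alpha = 0.55 follows the example matching of Figure 2; this is an illustrative sketch, not the system's implementation.

```python
import itertools

def total_cost(matching, vertex_cost, relation_cost, alpha=0.55):
    # Convex mixture of the vertex and relation costs.
    return alpha * vertex_cost(matching) + (1 - alpha) * relation_cost(matching)

def hill_climb(initial, text_vertices, vertex_cost, relation_cost, alpha=0.55):
    """Greedy local search over matchings (hypothesis vertex -> text vertex or None),
    started from the matching that minimizes the vertex cost alone."""
    current = dict(initial)
    best = total_cost(current, vertex_cost, relation_cost, alpha)
    improved = True
    while improved:
        improved = False
        # Move 1: reassign a single hypothesis vertex (including to NULL).
        for v in list(current):
            for t in list(text_vertices) + [None]:
                candidate = dict(current)
                candidate[v] = t
                c = total_cost(candidate, vertex_cost, relation_cost, alpha)
                if c < best:
                    current, best, improved = candidate, c, True
        # Move 2: swap the assignments of two hypothesis vertices, to avoid ridges.
        for v1, v2 in itertools.combinations(list(current), 2):
            candidate = dict(current)
            candidate[v1], candidate[v2] = candidate[v2], candidate[v1]
            c = total_cost(candidate, vertex_cost, relation_cost, alpha)
            if c < best:
                current, best, improved = candidate, c, True
    return current, best
```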
We now outline these models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node and Edge Substitution Models", "sec_num": "5" }, { "text": "Our VertexSub(v, M (v)) model is based upon a sliding scale, where progressively higher costs are given based upon the following conditions: (Fellbaum, 1998) . In particular we use the top 3 senses of both words to determine synsets. sen et al., 2004) . In particular, we use the measure described in (Resnik, 1995) . We found it useful to only use similarities above a fixed threshold to ensure precision. \u2022 LSA Match: v and M (v) are distributionally similar according to a freely available Latent Semantic Indexing package, 2 or for verbs similar according to VerbOcean (Chklovski and Pantel, 2004) . Although the above conditions often produce reasonable matchings between text and hypothesis, we found the recall of these lexical resources to be far from adequate. More robust lexical resources would almost certainly boost performance.", "cite_spans": [ { "start": 141, "end": 157, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF3" }, { "start": 234, "end": 251, "text": "sen et al., 2004)", "ref_id": null }, { "start": 301, "end": 315, "text": "(Resnik, 1995)", "ref_id": "BIBREF11" }, { "start": 573, "end": 601, "text": "(Chklovski and Pantel, 2004)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Vertex substitution cost model", "sec_num": "5.1" }, { "text": "Our PathSub(v \u2192 v \u2032 , M (v) \u2192 M (v \u2032 ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path substitution cost model", "sec_num": "5.2" }, { "text": "model is also based upon a sliding scale cost based upon the following conditions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path substitution cost model", "sec_num": "5.2" }, { "text": "\u2022 Exact Match: M (v) \u2192 M (v \u2032 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path substitution cost model", "sec_num": "5.2" }, { "text": "is an en edge in T with the same label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path substitution cost model", "sec_num": "5.2" }, { "text": "\u2022 Partial Match: M (v) \u2192 M (v \u2032 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path substitution cost model", "sec_num": "5.2" }, { "text": "is an en edge in T , not necessarily with the same label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path substitution cost model", "sec_num": "5.2" }, { "text": "\u2022 Ancestor Match: M (v) is an ancestor of M (v \u2032 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path substitution cost model", "sec_num": "5.2" }, { "text": "We use an exponentially increasing cost for longer distance relationships.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path substitution cost model", "sec_num": "5.2" }, { "text": "\u2022 Kinked Match: M (v) and M (v \u2032 ) share a common parent or ancestor in T . We use an exponentially increasing cost based on the maximum of the node's distances to their least common ancestor in T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path substitution cost model", "sec_num": "5.2" }, { "text": "These conditions capture many of the common ways in which relationships between entities are distorted in semantically related sentences. 
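Rendered as code, the sliding scale might look like the sketch below, where the text graph is reduced to a parent map (dependent to head) plus a label per edge; the cost constants and the base of the exponential penalty are illustrative placeholders rather than the values used in the system.

```python
def ancestors(node, parent):
    """Ancestors of a text node, nearest first, given a dependent -> head map."""
    chain = []
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def path_sub(hyp_edge, label, matching, parent, text_labels,
             partial_cost=0.25, base=0.5, no_match=1.0):
    """Cost of substituting hypothesis edge (v, v2) for the path between its images."""
    v, v2 = hyp_edge
    m_v, m_v2 = matching.get(v), matching.get(v2)
    if m_v is None or m_v2 is None:
        return no_match
    if text_labels.get((m_v, m_v2)) == label:   # exact match: same edge, same label
        return 0.0
    if (m_v, m_v2) in text_labels:              # partial match: edge present, label differs
        return partial_cost
    anc2 = ancestors(m_v2, parent)
    if m_v in anc2:                             # ancestor match: cost grows with distance
        return 1.0 - base ** (anc2.index(m_v) + 1)
    anc1 = ancestors(m_v, parent)
    common = [a for a in anc1 if a in anc2]
    if common:                                  # kinked match: cost grows with distance to LCA
        depth = max(anc1.index(common[0]), anc2.index(common[0])) + 1
        return 1.0 - base ** depth
    return no_match
```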
For instance, in our system, a partial match will occur whenever an edge type differs in detail, for instance use of the preposition towards in one case and to in the other. An ancestor match will occur whenever an indirect relation leads to the insertion of an intervening node in the dependency graph, such as matching John is studying French farming vs. John is studying French farming practices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path substitution cost model", "sec_num": "5.2" }, { "text": "Is it possible to learn weights for the relative importance of the conditions in the VertexSub and PathSub models? Consider the case where match costs are given only by equation 2and vertices are weighted", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "uniformly (w(v) = 1). Suppose that \u03a6(v, M (v))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "is a vector of features 3 indicating the cost according to each of the conditions listed for matching v to M (v). Also let w be weights for each element of \u03a6(v, M (v)). First we can model the substitution cost for a given matching as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "VertexSub(v, M (v)) = exp (w T \u03a6(v, M (v))) 1 + exp (w T \u03a6(v, M (v)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "Letting s(\u2022) be the 1-sigmoid function used in the right hand side of the equation above, our final matching cost as a function of w is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c(H, T ; w) = min M \u2208M 1 |H V | v\u2208H s(w T \u03a6(v, M (v)))", "eq_num": "(4)" } ], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "Suppose we have a set of text/hypothesis pairs, {(T (1) , H (1) ), . . . , (T (n) , H (n) )}, with labels y (i) which are 1 if H (i) is entailed by T (i) and 0 otherwise. Then we would like to choose w to minimize costs for entailed examples and maximize it for non-entailed pairs:", "cite_spans": [ { "start": 108, "end": 111, "text": "(i)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "\u2113(w) = i:y (i) =1 log c(H (i) , T (i) ; w) + i:y (i) =0 log(1 \u2212 c(H (i) , T (i) ; w))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "Unfortunately, \u2113(w) is not a convex function. Notice that the cost of each matching, M , implicitly depends on the current setting of the weights w. It can be shown that since each c(H, T ; w) involves minimizing M \u2208 M, which depends on w, it is not convex. Therefore, we can't hope to globally optimize our cost functions over w and must settle for an approximation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "One approach is to use coordinate ascent over M and w. Suppose that we begin with arbitrary weights and given these weights choose M (i) to minimize each c(H (i) , T (i) ; w). 
Then we use a relaxed form of the cost function where we use the matchings found in the last step:", "cite_spans": [ { "start": 133, "end": 136, "text": "(i)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "c(H (i) , T (i) ; w) = 1 |H V | v\u2208H s(w T \u03a6(v, M (i) (v)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "Then we maximize w with respect to \u2113(w) with each c(\u2022) replaced with the cost-function\u0109(\u2022). This step involves only logistic regression. We repeat this procedure until our weights converge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "To test the effectiveness of the above procedure we compared performance against baseline settings using a random split on the development set. Picking each weight uniformly at random resulted in 53% accuracy. Setting all weights identically to an arbitrary value gave 54%. The procedure above, where the weights are initialized to the same value, resulted in an accuracy of 57%. However, we believe there is still room for improvement since carefully-hand chosen weights results in comparable performance to the learned weights on the final test set. We believe this setting of learning under matchings is a rather general one and could be beneficial to other domains such as Machine Translation. In the future, we hope to find better approximation techniques for this problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Weights", "sec_num": "5.3" }, { "text": "One systematic source of error coming from our basic approach is the implicit assumption of upwards monotonicity of entailment; i.e., if T entails H then adding more words to T should also give us a sentence which entails H. This assumption, also made by other recent abductive approaches (Moldovan et al., 2003) , does not hold for several classes of examples. Our formalism does not at present provide a general solution to this issue, but we include special case handling of the most common types of cases, which we outline below. 4 These checks are done after graph matching and assume we have stored the minimal cost matching.", "cite_spans": [ { "start": 289, "end": 312, "text": "(Moldovan et al., 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Checks", "sec_num": "6" }, { "text": "Text: Clinton's book is not a bestseller", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negation Check", "sec_num": null }, { "text": "To catch such examples, we check that each hypothesis verb is not matched to a text word which is negated (unless the verb pairs are antonyms) and vice versa. In this instance, the is in H, denoted by is H , is matched to is T which has a negation modifier, not T , absent for is H . So the negation check fails.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis: Clinton's book is a bestseller", "sec_num": null }, { "text": "Text: Clonaid claims to have cloned 13 babies worldwide.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factive Check", "sec_num": null }, { "text": "Hypothesis: Clonaid has cloned 13 babies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factive Check", "sec_num": null }, { "text": "Non-factive verbs (claim, think, charged, etc.) in contrast to factive verbs (know, regret, etc.) 
have sentential complements which do not represent true propositions. We detect such cases, by checking that each verb in H that is matched in T does not have a non-factive verb for a parent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factive Check", "sec_num": null }, { "text": "Text: The Osaka World Trade Center is the tallest building in Western Japan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Superlative Check", "sec_num": null }, { "text": "Hypothesis: The Osaka World Trade Center is the tallest building in Japan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Superlative Check", "sec_num": null }, { "text": "In general, superlative modifiers (most, biggest, etc.) invert the typical monotonicity of entailment and must be handled as special cases. For any noun n with a superlative modifier (part-of-speech JJS) in H, we must ensure that all modifier relations of M (n) are preserved in H. In this example, building H has a superlative modifier tallest H , so we must ensure that each modifier relation of Japan T , a noun Additionally, during error analysis on the development set, we spotted the following cases where our VertexSub function erroneously labeled vertices as similar, and required special case consideration:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Superlative Check", "sec_num": null }, { "text": "\u2022 Antonym Check: We consistently found that the WordNet::Similarity modules gave highsimilarity to antonyms. 5 We explicitly check whether a matching involved antonyms and reject unless one of the vertices had a negation modifier. \u2022 Numeric Mismatch: Since numeric expressions typically have the same part-of-speech tag (CD), they were typically matched when exact matches could not be found. However, mismatching numerical tokens usually indicated that H was not entailed, and so pairs with a numerical mismatch were rejected.", "cite_spans": [ { "start": 109, "end": 110, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Superlative Check", "sec_num": null }, { "text": "For our experiments we used the devolpement and test sets from the Recognizing Textual Entailment challenge (Dagan et al., 2005) . We give results for our system as well as for the following systems:", "cite_spans": [ { "start": 108, "end": 128, "text": "(Dagan et al., 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "7" }, { "text": "\u2022 Bag-Of-Words: We tokenize the text and hypothesis and strip the function words, and stem the resulting words. The cost is given by the fraction of the hypothesis not matched in the text. \u2022 TF-IDF: Similar to Bag-Of-Words except that there is a tf.idf weight associated with each hypothesis word so that more \"important\" words are higher weight for matching. We also present results for two graph matching (GM) systems. The GM-General system fits a single global threshold from the development set. The GM-ByTask system fits a different threshold for each of the tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "7" }, { "text": "Our results are summarized in Table 2 . As the result indicates, the task is particularly hard; all RTE participants scored between 50% and 60% in terms of overall accuracy (Dagan et al., 2005) . Nevevertheless, both GM systems perform better than either Bag-Of-Words or TF-IDF. 
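For concreteness, the two lexical baselines can be sketched as follows; the tokenizer, stop-word list, and stemmer are simplified stand-ins for the preprocessing described above, and idf is assumed to be a precomputed map from stemmed word to inverse document frequency.

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "in", "to", "is", "was", "by", "for", "and"}

def content_stems(sentence, stem=lambda w: w.rstrip("s")):
    words = re.findall(r"[A-Za-z0-9']+", sentence.lower())
    return [stem(w) for w in words if w not in STOP_WORDS]

def bag_of_words_cost(text, hypothesis):
    """Fraction of hypothesis stems that do not appear in the text."""
    text_stems = set(content_stems(text))
    hyp_stems = content_stems(hypothesis)
    return sum(1 for w in hyp_stems if w not in text_stems) / len(hyp_stems)

def tfidf_cost(text, hypothesis, idf):
    """Like bag-of-words, but each unmatched hypothesis word is weighted by its idf."""
    text_stems = set(content_stems(text))
    hyp_stems = content_stems(hypothesis)
    total = sum(idf.get(w, 1.0) for w in hyp_stems)
    return sum(idf.get(w, 1.0) for w in hyp_stems if w not in text_stems) / total

if __name__ == "__main__":
    text = "In 1994, Amazon.com was founded by Jeff Bezos."
    hyp = "Bezos established a company."
    print(bag_of_words_cost(text, hyp))  # 0.67: 'established' and 'company' unmatched
```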
CWS refers to Confidence Weighted Score (also known as average precision). This measure is perhaps a more insightful measure, since it allows the inclusion of a ranking of answers by confidence and assesses whether you are correct on the pairs that you are most confident that you know the answer to. To assess CWS, our n answers are sorted in decreasing order by the confidence we return, and then for each i, we calculate a i , our accuracy on our i most confident predictions. Then CWS = 1 n n i=1 a i . We also present results on a per-task basis in Table 3. Interestingly, there is a large variation in performance depending on the task.", "cite_spans": [ { "start": 173, "end": 193, "text": "(Dagan et al., 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "7" }, { "text": "We have presented a learned graph matching approach to approximating textual entailment which outperforms models which only match at the word level, and is competitive with recent weighed abduction models (Moldovan et al., 2003) . In addition, we explore problematic cases of nonmonotonicity in entailment, which are not naturally handled by either subgraph matching or the so-called \"logic form\" (Moldovan et al., 2003) and have proposed a way to capture common cases of this phenomenon. We believe that the methods employed in this work show much potential for improving the state-of-the-art in computational semantic inference.", "cite_spans": [ { "start": 205, "end": 228, "text": "(Moldovan et al., 2003)", "ref_id": "BIBREF6" }, { "start": 397, "end": 420, "text": "(Moldovan et al., 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": ". Semantic Role Labeling: We also augment the graph representation with Probank-style semantic roles via the system described in(Toutanova et al., 2005). Each predicate adds an arc labeled with the appropriate semantic role to the head of the argument phrase. This helps to create links between words which share a deep semantic relation not evident in the surface syntax. Additionally, modifying phrases are labeled with their semantic types (e.g., in 1991 is linked by a Temporal edge in the text graph ofFigure 2), which should be useful in Question Answering tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at http://infomap.stanford.edu", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the case of our \"match\" conditions, these features will be binary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "All the examples are actual, or slightly altered, RTE examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This isn't necessarily incorrect, but is simply not suitable for textual inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Many thanks to Rajat Raina, Christopher Cox, Kristina Toutanova, Jenny Finkel, Marie-Catherine de Marneffe, and Bill MacCartney for providing us with linguistic modules and useful discussions. 
This work was supported by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "9" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "VerbOcean: Mining the web for fine-grained semantic verb relations", "authors": [ { "first": "Timothy", "middle": [], "last": "Chklovski", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2004, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Chklovski and Patrick Pantel. 2004. VerbO- cean: Mining the web for fine-grained semantic verb relations. In EMNLP.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Head-driven statistical models for natural language parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, University of Pennsylvania.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The PASCAL recognizing textual entailment challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the PASCAL Challenges Workshop Recognizing Textual Entailment", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognizing textual entailment challenge. In Proceedings of the PASCAL Challenges Workshop Recognizing Textual Entailment.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "ACL", "volume": "", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In ACL, pages 423-430.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "DIRT -discovery of inference rules from text", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "323--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2001. DIRT -discovery of inference rules from text. 
In Knowledge Discovery and Data Mining, pages 323-328.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Cogex: A logic prover for question answering", "authors": [ { "first": "Dan", "middle": [ "I" ], "last": "Moldovan", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Sanda", "middle": [ "M" ], "last": "Harabagiu", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Maiorano", "suffix": "" } ], "year": 2003, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan I. Moldovan, Christine Clark, Sanda M. Harabagiu, and Steven J. Maiorano. 2003. Cogex: A logic prover for question answering. In HLT-NAACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Wordnet::similarity -measuring the relatedness of concepts", "authors": [ { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Parwardhan", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Michelizzi", "suffix": "" } ], "year": 2004, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Pedersen, Siddharth Parwardhan, and Jason Miche- lizzi. 2004. Wordnet::similarity -measuring the relat- edness of concepts. In AAAI.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Robust textual inference using diverse knowledge sources", "authors": [ { "first": "Rajat", "middle": [], "last": "Raina", "suffix": "" }, { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Cox", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Michels", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Mac-Cartney", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the First PASCAL Challenges Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajat Raina, Aria Haghighi, Christopher Cox, Jenny Finkel, Jeff Michels, Kristina Toutanova, Bill Mac- Cartney, Marie-Catherine de Marneffe, Christopher D. Manning, and Andrew Y. Ng. 2005a. Robust textual inference using diverse knowledge sources. In Pro- ceedings of the First PASCAL Challenges Workshop. 
Southampton, UK.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Robust textual inference via learning and abductive reasoning", "authors": [ { "first": "Rajat", "middle": [], "last": "Raina", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of AAAI 2005", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajat Raina, Andrew Y. Ng, and Christopher D. Man- ning. 2005b. Robust textual inference via learning and abductive reasoning. In Proceedings of AAAI 2005. AAAI Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Using information content to evaluate semantic similarity in a taxonomy", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1995, "venue": "IJCAI", "volume": "", "issue": "", "pages": "448--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1995. Using information content to evalu- ate semantic similarity in a taxonomy. In IJCAI, pages 448-453.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Joint learning improves semantic role labeling", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Cristiopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Association of Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Aria Haghighi, and Cristiopher Man- ning. 2005. Joint learning improves semantic role la- beling. In Association of Computational Linguistics (ACL).", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "text": "Example graph matching (\u03b1 = 0.55) for example pair. Dashed lines represent optimal matching. the vertex and relational match costs: Cost", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Exact Match: v and M (v) are identical words/ phrases. \u2022 Stem Match: v and M (v)'s stems match or one is a derivational form of the other; e.g., matching coaches to coach. \u2022 Synonym Match: v and M (v) are synonyms according to WordNet", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "Hypernym Match: v is a \"kind of\" M (v), as determined by WordNet. Note that this feature is asymmetric. \u2022 WordNet Similarity: v and M (v) are similar according to WordNet::Similarity (Peder", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "POS Match: v and M (v) have the same part of speech. \u2022 No Match: M (v) is NULL.", "uris": null, "num": null }, "TABREF3": { "type_str": "table", "text": "Accuracy and confidence weighted score (CWS) for test set using various techniques. dependent of building T , has a Western T modifier not in H. So its fails the superlative check.", "html": null, "content": "", "num": null }, "TABREF5": { "type_str": "table", "text": "Accuracy and confidence weighted score (CWS) split by task on the RTE test set.", "html": null, "content": "
", "num": null }, "TABREF7": { "type_str": "table", "text": "Analysis of results on some RTE examples along with out guesses and confidence probabilities inference of", "html": null, "content": "
", "num": null } } } }