|
{ |
|
"paper_id": "H05-1047", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:34:27.572146Z" |
|
}, |
|
"title": "A Semantic Approach to Recognizing Textual Entailment", |
|
"authors": [ |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Tatu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Language Computer Corporation Richardson", |
|
"location": { |
|
"postCode": "75080", |
|
"region": "TX", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Language Computer Corporation Richardson", |
|
"location": { |
|
"postCode": "75080", |
|
"region": "TX", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Exhaustive extraction of semantic information from text is one of the formidable goals of state-of-the-art NLP systems. In this paper, we take a step closer to this objective. We combine the semantic information provided by different resources and extract new semantic knowledge to improve the performance of a recognizing textual entailment system.", |
|
"pdf_parse": { |
|
"paper_id": "H05-1047", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Exhaustive extraction of semantic information from text is one of the formidable goals of state-of-the-art NLP systems. In this paper, we take a step closer to this objective. We combine the semantic information provided by different resources and extract new semantic knowledge to improve the performance of a recognizing textual entailment system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "While communicating, humans use different expressions to convey the same meaning. Therefore, numerous NLP applications, such as, Question Answering, Information Extraction, or Summarization require computational models of language that recognize if two texts semantically overlap. Trying to capture the major inferences needed to understand equivalent semantic expressions, the PASCAL Network proposed the Recognizing Textual Entailment (RTE) challenge (Dagan et al., 2005) . Given two text fragments, the task is to determine if the meaning of one text (the entailed hypothesis, H) can be inferred from the meaning of the other text (the entailing text, T ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 453, |
|
"end": 473, |
|
"text": "(Dagan et al., 2005)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognizing Textual Entailment", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Given the wide applicability of this task, there is an increased interest in creating systems which detect the semantic entailment between two texts. The systems that participated in the Pascal RTE challenge competition exploit various inference elements which, later, they combine within statistical models, scoring methods, or machine learning frameworks. Several systems (Bos and Markert, 2005; Herrera et al., 2005; Jijkoun and de Rijke, 2005; Kouylekov and Magnini, 2005; Newman et al., 2005) measured the word overlap between the two text strings. Using either statistical or Word-Net's relations, almost all systems considered lexical relationships that indicate entailment. The degree of similarity between the syntactic parse trees of the two texts was also used as a clue for entailment by several systems (Herrera et al., 2005; Kouylekov and Magnini, 2005; de Salvo Braz et al., 2005; Raina et al., 2005) . Several groups used logic provers to show the entailment between T and H (Bayer et al., 2005; Bos and Markert, 2005; Fowler et al., 2005; Raina et al., 2005) and some of them made use of world knowledge axioms to increase the logic prover's power of inference (Bayer et al., 2005; Bos and Markert, 2005; Fowler et al., 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 374, |
|
"end": 397, |
|
"text": "(Bos and Markert, 2005;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 419, |
|
"text": "Herrera et al., 2005;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 447, |
|
"text": "Jijkoun and de Rijke, 2005;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 476, |
|
"text": "Kouylekov and Magnini, 2005;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 497, |
|
"text": "Newman et al., 2005)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 816, |
|
"end": 838, |
|
"text": "(Herrera et al., 2005;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 839, |
|
"end": 867, |
|
"text": "Kouylekov and Magnini, 2005;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 895, |
|
"text": "de Salvo Braz et al., 2005;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 896, |
|
"end": 915, |
|
"text": "Raina et al., 2005)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 991, |
|
"end": 1011, |
|
"text": "(Bayer et al., 2005;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1012, |
|
"end": 1034, |
|
"text": "Bos and Markert, 2005;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1035, |
|
"end": 1055, |
|
"text": "Fowler et al., 2005;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1056, |
|
"end": 1075, |
|
"text": "Raina et al., 2005)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1178, |
|
"end": 1198, |
|
"text": "(Bayer et al., 2005;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1199, |
|
"end": 1221, |
|
"text": "Bos and Markert, 2005;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1222, |
|
"end": 1242, |
|
"text": "Fowler et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognizing Textual Entailment", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we describe a novel technique which employs a set of semantic axioms in its attempt to exhaustively extract semantic knowledge from texts. In order to show the contribution that our semantic information extraction method brings, we append it as an additional module to an already existing system that participated in the RTE challenge. Our system (Fowler et al., 2005) , first, transforms the text T and the hypothesis H into semantically enhanced logic forms, and, then, the integrated logic prover tries to prove or disprove the entailment using a set of world-knowledge axioms (die of blood loss \u2192 bleed to death), linguistic rewriting rules which break down complex syntactic structures, like coordinating conjunctions, and WordNet-based lexical chains axioms (buy/VB/1 \u2192 pay/VB/1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 383, |
|
"text": "(Fowler et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognizing Textual Entailment", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We believe that a logic-based semantic approach is highly appropriate for the RTE task 1 . Text T semantically entails H if its meaning logically implies the meaning of H. Because the set of semantic relations encoded in a text represents its meaning, we need to identify all the semantic relations that hold between the constituents of T and, subsequently, between the constituents of H to understand the meaning of each text. It should be noted that state-of-the-art semantic parsers extract only some of the semantic relations encoded in a given text. To complete this information, we need semantic axioms that augment the extracted knowledge and, thus, provide a better coverage of the text's semantics. Once we gather this information, we state that text T entails hypothesis H if and only if we find similar relations between a concept from T and a semantically analogous concept from H. By analogous concepts, we mean identical concepts, or words connected by a chain of SYNONYMY, HYPERNYMY or morphological derivation relations in WordNet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Because the set of semantic elements identified by a semantic parser does not necessarily convey the complete meaning of a sentence, we shall use a set of semantic axioms to infer the missing pieces of information. By combining two semantic relations or by using the FrameNet's frame elements identified in a given text, we derive new semantic information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In order to show if T entails H, we analyze their meanings. Our approach to semantic entailment involves the following stages:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1. We convert each text into logic form (Moldovan and Rus, 2001 ). This conversion includes part-ofspeech tagging, parse tree generation, and name entity recognition.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 63, |
|
"text": "(Moldovan and Rus, 2001", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2. Using our semantic parser, we identify some of the semantic relations encoded in the analyzed texts. We note that state-of-the-art semantic parsers cannot discover all the semantic relations conveyed implicitly or explicitly by the text. This problem compromises our system's performance. To obtain the complete set of semantic relations that represents the meaning of the given texts, we introduce a new step in our algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3. We add semantic axioms to the already created set of world knowledge, NLP, and WordNet-based lexical chain (Moldovan and Novischi, 2002) axioms that assist the logic prover in its search for proofs. We developed semantic axioms that show how two semantic relations can be combined. This will allow the logic prover to combine, whenever possible, semantic instances in order to infer new semantic relationships. The instances of relations that participate in semantic combinations can be either provided by the text or annotated between WordNet synsets. We also exploit other sources of semantic information from the text. For example, the frames encoded in the text sentence provide information which complements the meaning given by the semantic relations. Our second type of axioms derive semantic relations between the frame elements of a given FrameNet frame.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 139, |
|
"text": "(Moldovan and Novischi, 2002)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We claim that the process of applying the semantic axioms, given the semantic relations detected by a semantic parser, will capture the complete semantic information expressed by a text fragment. In this paper, we show the usefulness of this procedure for the RTE task, but we are convinced that it can be used by any system which plans to extract the entire semantic information from a given text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We load the COGEX logic prover (Moldovan et al., 2003) which operates by \"reductio ad absurdum\" with H's negated form and T 's predicates. These clauses are weighted in the order in which they should be chosen to participate in the search. To ensure that H will be the last clause to participate, we assign it the largest value. The logic prover searches for new inferences that can be made using the smallest weight clauses. It also assigns a value to each inference based on the axiom it used to derive it. This process continues until the set of clauses is empty. If a refutation is found, the proof is complete. If a contradiction cannot be found, then the predicate arguments are relaxed and, if the argument relaxation fails, then predicates are dropped until a proof by refutation is found. Its score will be computed by deducting points for each argument relaxation and predicate removal. If this value falls below a threshold, then T does not entail H. Otherwise, the (T, H) pair is a true entailment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 54, |
|
"text": "(Moldovan et al., 2003)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We present a textual entailment example to show the steps of our approach. This proof will not T John and his son, George, emigrated with Mike, John's uncle, to US in 1969. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "LF T John(x1) \u2227 son(x2) \u2227 George(x3) \u2227 ISA(x3, x2) \u2227 KIN(x1, x3) \u2227 emigrate(e1) \u2227 AGT(x1, e1) \u2227 AGT(x2, e1) \u2227 Mike(x4) \u2227 uncle(x5) \u2227 ISA(x4, x5) \u2227 KIN(x1, x4) \u2227 US(x6) \u2227 LOC(e1, x6) \u2227 1969(x7) \u2227 TMP(e1, x7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "KIN(w1, w2) \u2194 KIN(w2, w1) KIN(x1, x3) \u2192 KIN(x3, x1) (KIN(John, George) \u2192 KIN(George, John)) TAxiom 2 KIN \u2022 KIN = KIN (KIN(w1, w2) \u2227 KIN(w2, w3) \u2192 KIN(w1, w3)) KIN(x3, x1) \u2227 KIN(x1, x4) \u2192 KIN(x3, x4) (KIN(George, M ike)) TAxiom 3 DEPARTING F \u2192 LOC(T heme.f e, Goal.f e) (LOC(John, U S) \u2227 LOC(George, U S)) TAxiom 4 DEPARTING F \u2192 LOC(Cotheme.f e, Goal.f e) (LOC(M ike, U S)) TSemantics KIN(J ohn, George), KIN(J ohn, M ike), KIN(George, M ike), LOC(J ohn, U S), LOC(George, U S), LOC(M ike, U S), TMP(emigrate, 1969), AGT(J ohn, emigrate), AGT(George, emigrate) H", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "George and his relative, Mike, came to America. Table 1 ). Then, the logic prover uses the newly added semantic axioms to derive extra semantic information from T and H (for example, George and Mike are relatives, but T does not explicitly specify this), after another preprocessing step which identifies the frame elements of each frame encoded in the two texts (T Departing , T Kinship , H Arriving , H Kinship ). In our example, the axioms T Axiom 1 and T Axiom 2 denote the symmetry and the transitivity of the KIN-SHIP relation. T Axiom 3 , T Axiom 4 and H Axiom 1 are the frame-related axioms used by the logic prover. The T Semantics and H Semantics rows (Table 1) summarize the meaning of T and H. We note that half of these semantic instances were extracted using the semantic axioms. Once the lexical chains between the concepts in T and the ones from H are computed, the entailment becomes straightforward. We represented, graphically, the meaning of the two texts in Figure 1 . We also show the links between the analogous concepts that help prove the entailment.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 55, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 662, |
|
"end": 671, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 979, |
|
"end": 987, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "LF H George(x1) \u2227 relative(x2) \u2227 Mike(x3) \u2227 ISA(x3, x2) \u2227 KIN(x1, x3) \u2227 come(e1) \u2227 AGT(x1, e1) \u2227 AGT(x2, e1) \u2227 America(x4) \u2227 LOC(e1, x2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the coming sections of the paper, we detail the process of semantic axiom generation. We start with a summary of the axioms that combine two semantic relations. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For this study, we adopt a revised version of the semantic relation set proposed by (Moldovan et al., 2004) . Table 2 enumerates the semantic relations that we consider 2 . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 107, |
|
"text": "(Moldovan et al., 2004)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 117, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic relations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "POSSESSION (POS) MAKE-PRODUCE (MAK) RECIPIENT (REC) THEME-PATIENT (THM) KINSHIP (KIN) INSTRUMENT (INS) FREQUENCY (FRQ) RESULT (RSL) PROPERTY-ATTRIBUTE (PAH) LOCATION-SPACE (LOC) INFLUENCE (IFL) STIMULUS (STI) AGENT (AGT) PURPOSE (PRP) ASSOCIATED WITH (OTH) EXTENT (EXT) TEMPORAL (TMP) SOURCE-FROM (SRC) MEASURE (MEA) PREDICATE (PRD) DEPICTION (DPC) TOPIC (TPC) SYNONYMY-NAME (SYN) CAUSALITY (CSL) PART-WHOLE (PW) MANNER (MNR) ANTONYMY (ANT) JUSTIFICATION (JST) HYPERNYMY (ISA) MEANS (MNS) PROBABILITY OF EXISTENCE (PRB) GOAL (GOL) ENTAIL (ENT) ACCOMPANIMENT (ACC) POSSIBILITY (PSB) BELIEF (BLF) CAUSE (CAU) EXPERIENCER (EXP) CERTAINTY (CRT) MEANING (MNG)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic relations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our goal is to devise semantic axioms for combinations of two relations, R 1 and R 2 , by observing the semantic connection between the w 1 and w 3 words for which there exists at least one other word, w 2 , such that R 1 (w 1 , w 2 ) and R 2 (w 2 , w 3 ) hold true 3 . Harabagiu and Moldovan (1998) tackled the problem of semantic combinations, for the first time. Their set of relations included the WordNet1.5 annotations and 12 relationships derived from the Word-Net glosses 4 . In our research, unlike (Harabagiu and Moldovan, 1998) , the semantic combinations use the relations identified in text with a rather minimal contribution from the WordNet relations. Harabagiu and Moldovan (1998) also investigate the number of possible semantic combinations. Based on their properties, we can have up to eight combinations between any two semantic relations and their inverses, not counting the combinations between a semantic relation and itself 5 . For instance, given an asymmetric relation and a symmetric one which share the same part-of-speech for their arguments, we can produce four combinations. ISA \u2022 ANT, ISA \u22121 \u2022 ANT, ANT \u2022 ISA, and ANT \u2022 ISA \u22121 are the four possible distinct combinations between HYPERNYMY and ANTONYMY. \"\u2022\" symbolizes the semantic composition between two relations compatible with respect to the part-of-speech of their arguments: for any two concepts, w 1 and w 3 ,", |
|
"cite_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 299, |
|
"text": "Harabagiu and Moldovan (1998)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 508, |
|
"end": 538, |
|
"text": "(Harabagiu and Moldovan, 1998)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 667, |
|
"end": 696, |
|
"text": "Harabagiu and Moldovan (1998)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(R i \u2022R j )(w 1 , w 3 ) if and only if \u2203w 2 , a third concept, such that R i (w 1 , w 2 ) and R j (w 2 , w 3 ) hold. By R \u22121 ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EFICIARY, PURPOSE, ATTRIBUTE, REASON, STATE, LOCA-TION, THEME, TIME, and MANNER relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "5 Harabagiu and Moldovan (1998) lists the exact number of possible combinations for several WordNet relations and partof-speech classes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 2, |
|
"end": 31, |
|
"text": "Harabagiu and Moldovan (1998)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "we denote the inverse of relation R: if R(x, y), then", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "R \u22121 (y, x).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "While analyzing the combinations, we observed some regularities within the semantic composition process. For example,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "R \u22121 1 \u2022 R \u22121 2 = (R 2 \u2022 R 1 ) \u22121 for any, not necessarily distinct, semantic relations R 1 and R 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "6 . If one of the relations is symmetric (R \u22121 = R), the statement is still valid. Using (R \u22121 ) \u22121 = R and the previous equality, we can reduce by half the number of semantic combinations that we have to compute for R 1 = R 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We plan to create a 40 \u00d7 40 matrix with all the possible combinations between any two semantic relations from the set we consider. Theoretically, we can have up to 27,556 semantic combinations, but only 25.79% of them are possible 7 (for example, MNR(r, v) and SYN(n, n) cannot be combined). Many combinations are not semantically significant either because they are very rare, like, KIN(n, n)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 TMP(n, v), or because they do not result into one of the 40 relations, for instance, PAH(a, n) \u2022 AGT(n, v) 8 . We identified two approaches to the problem mentioned above. The first tries to fill one matrix cell at a time in a consecutive manner. The second approach tries to solve the semantic combinations we come upon in text corpora. As a result, we analyzed the RTE development corpus and we devised rules for some of the R i \u2022R j combinations that we encountered. We validated these axioms by man- (LOCATION(x, l1) \u2227 PART-WHOLE(l1, l2) \u2192 LOCATION(x, l2)) Example: John lives in Dallas, Texas. LOCATION(J ohn, Dallas) and PART-WHOLE(Dallas, T exas) imply that LOCATION(J ohn, T exas). ISA \u2022 ATTRIBUTE = ATTRIBUTE (ISA(x, y) \u2227 ATTRIBUTE(y, a) \u2192 ATTRIBUTE(x, a)) Example: Mike is a rich man. If ISA(M ike, man) and ATTRIBUTE(man, rich), then ATTRIBUTE(M ike, rich). Similar statements can be made for other \"attributes\": LOCATION, TIME, SOURCE, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 506, |
|
"end": 522, |
|
"text": "(LOCATION(x, l1)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "LOCATION \u2022 PART-WHOLE = LOCATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "ISA \u2022 LOCATION = LOCATION (ISA(x, y) \u2227 LOCATION(y, l) \u2192 LOCATION(x, l)) Example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The man in the car, George, is an old friend of mine. ISA(George, man) and LOCATION(man, car) \u2192 LOCATION(George, car) KINSHIP \u2022 KINSHIP = KINSHIP (KINSHIP(x, y) \u2227 KINSHIP(y, z) \u2192 KINSHIP(x, z)) See example in Section 2. THEME \u2022 ISA \u22121 = THEME (THEME(e, y) \u2227 ISA(x, y) \u2192 THEME(e, x)) Example: Yesterday, John ate some fruits: an apple and two oranges. THEME(eat, f ruit) \u2227 ISA(apple, f ruit) \u2192 THEME(eat, apple) THEME \u2022 PART-WHOLE \u22121 = THEME (THEME(e, y) \u2227 PART-WHOLE(x, y) \u2192 THEME(e, x)) Example: Five Israelis, including two children, were killed yesterday. THEME(kill, Israeli) \u2227 PART-WHOLE(child, Israeli) \u2192 THEME(kill, child) Similar statements can be made for all the thematic roles: AGENT, EXPERIENCER, INSTRUMENT, CAUSE, LOCATION, etc. AGENT \u2022 ISA \u22121 = AGENT (AGENT(e, y) \u2227 ISA(x, y) \u2192 AGENT(e, x)) AGENT \u2022 PART-WHOLE \u22121 = AGENT (AGENT(e, y) \u2227 PART-WHOLE(x, y) \u2192 AGENT(e, x)) ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(w 1 , w 3 ) pairs which satisfy (R i \u2022R j )(w 1 , w 3 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We have identified 64 semantic axioms that show how semantic relations can be combined. These axioms use relations such as PART-WHOLE, ISA, LOCATION, AT-TRIBUTE, or AGENT. We listed several example rules in Table 3 . The 64 axioms can be applied independent of the concepts involved in the semantic composition. We have also identified rules that can be applied only if the concepts that participate satisfy a certain condition or if the relations are of a certain type. For example, LOC \u2022 LOC = LOC only if the LOC relation shows inclusion (John is in the car in the garage \u2192 LOC(J ohn, garage). John is near the car behind the garage \u2192 LOC(J ohn, garage)).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 214, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combinations of two semantic relations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The Berkeley FrameNet project 9 (Baker et al., 1998) is a lexicon-building effort based on the theory of frame semantics which defines the meanings of lexical units with respect to larger conceptual structures, called frames. Individual lexical units point to specific frames and establish a binding pattern to specific elements within the frame. FrameNet describes the underlying frames for different lexical units and examines sentences related to the frames using the BNC corpus. The result is an XML database that 9 http://framenet.icsi.berkeley.edu contains a set of frames, a set of frame elements for each frame, and a set of frame annotated sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 52, |
|
"text": "(Baker et al., 1998)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FrameNet Can Help", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "With respect to a given target, the frame elements contribute to the understanding of the sentence. But they only link each argument to the target word (for example, THM(theme, target) or AGT(theme, target), LOC(place, target), etc.). Often enough, we can find relations between the frame elements of a given frame. These new instances of semantic relations take as arguments the frame elements of a certain frame, when they are expressed in the text. For example, given the DEPART-ING frame, we can say that the origin of the theme is the source (SRC(theme, source) ) and that the new location of the theme is the goal frame element (LOC(theme, goal) ). Moreover, if the text specifies the cotheme frame element, then we can make similar statements about it (SRC(cotheme, source) and LOC(cotheme, goal)). These new relation instances increase the semantic information that can be derived from text.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 547, |
|
"end": 566, |
|
"text": "(SRC(theme, source)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 634, |
|
"end": 651, |
|
"text": "(LOC(theme, goal)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Frame-based semantic axioms", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "So far, we manually inspected 54 frames and analyzed the relationships between their frame elements by examining their definitions and the annotated corpus provided with the FrameNet data. For each frame, we retained only the rules independent of the CLOTHING PARTS F \u2192 PW(subpart, clothing) CLOTHING PARTS F \u2192 PW(material, subpart) Example: \"Hello, Hank\" they said from the depths of the Table 4 : Frames-related semantic rules frame's lexical units. We identified 132 semantic axioms that hold in most cases 10 . We show some examples in Table 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 389, |
|
"end": 396, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 547, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Frame-based semantic axioms", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "There are cases when the rules that we identified should not be applied. Let's examine the sentence John intends to leave the kitchen. If we consider only the DEPARTING frame and its corresponding rules, without looking at the context, then our conclusions (\u00ac LOC(J ohn, kitchen) and SRC(J ohn, kitchen)) will be false. This sentence states an intention of motion, not the actual action. Therefore, our semantic axioms apply only when the context they are in, allows it. To overcome this problem, we do not apply the axioms for target words found in planning contexts, contexts related to beliefs, intentions, desires, etc. As an alternative, we keep track of plans, intentions, desires, etc. and, if, later on, we confirm them, then we apply the semantic axioms. Also, when we analyze a sentence, the frame whose rules we apply needs to be chosen carefully. the motion) because the boat sank. Here, the rules given by sink.v's frame should be given priority over the carry.v's rules. We can generalize and conclude that, given a sentence that contains more than one target (therefore, maybe multiple frames), the dominant frame, the one whose rules should be applied, is the frame given by the predicative verb. In the previous sentence, the dominant frame is the one given by sink.v and its rules should be applied before the axioms of the CARRYING frame. It should be noted that some of the axioms semantically related to the CARRYING frame still apply (for example, SRC(emigrants, U K) or SRC(boat, U K)). Unlike LOC(emigrants, Spain), the previous relations do not conflict with the semantics given by sink.v and its location (the Strait of Gibraltar).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context importance", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The benchmark corpus for the RTE task consists of seven subsets with a 50%-50% split between the positive entailment examples and the negative ones. Each subgroup corresponds to a different NLP application: Information Retrival (IR), Comparable Documents (CD), Reading Comprehension (RC), Question Answering (QA), Information Extraction (IE), Machine Translation (MT), and Paraphrase Acquisition (PP ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The RTE data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We measured the applicability of our set of semantic rules, by counting the number of times they extract new semantic information from text. Table 6 shows, in percentages, the coverage of the semantic axioms when applied to the texts T and the hypotheses H. We also show the number of times the semantic rules solve a (T, H) entailment without employing any other type of axioms. Clearly, because the texts T convey much more information than H, they are the ones that benefit the most from our semantic axioms. The hypotheses H are more straightforward and a semantic parser can extract all their semantic information. Also, the rules tend to solve more positive (T, H) entailments. Because there are seven subsets corresponding to different NLP applications that make up the RTE data, we analyzed the contribution of our semantic axioms to each of the seven tasks. Table 5 shows the axioms' impact on each type of data. The logic-based approach proves useful for tasks like Information Extraction, Reading Comprehension, or Comparable Documents, but it does not seem to be the right choice for the more lexically oriented applications like Paraphrase Acquisition, Machine Translation, and Information Retrieval.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 148, |
|
"text": "Table 6", |
|
"ref_id": "TABREF10" |
|
}, |
|
{ |
|
"start": 867, |
|
"end": 874, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic axiom applicability", |
|
"sec_num": "5.2" |
|
}, |
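The applicability measurement above reduces to a simple count over the corpus. A minimal sketch, assuming a list of (T, H, label) triples and a predicate reporting whether any semantic axiom fired on a text (both the data layout and the predicate are our assumptions; the paper does not specify its internal representation):

```python
# Sketch of the coverage computation from Section 5.2.
def coverage(pairs, axioms_fired_on):
    """pairs: list of (T, H, gold_label) triples.
    axioms_fired_on(text) -> bool: did any semantic axiom apply?
    Returns (% of Ts covered, % of Hs covered)."""
    n = len(pairs)
    on_t = sum(axioms_fired_on(t) for t, h, _ in pairs)
    on_h = sum(axioms_fired_on(h) for t, h, _ in pairs)
    return 100.0 * on_t / n, 100.0 * on_h / n

# Toy corpus: axioms fire on two of the texts T and one hypothesis H.
pairs = [("t1", "h1", True), ("t2", "h2", False),
         ("t3", "h3", True), ("t4", "h4", False)]
fired = lambda text: text in {"t1", "t3", "h1"}
print(coverage(pairs, fired))  # (50.0, 25.0)
```

As in Table 6, the coverage on the richer texts T is expected to exceed the coverage on the hypotheses H.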
|
{ |
|
"text": "To show the impact of our semantic axioms, we measured the contribution they bring to a system that participated in the RTE challenge. The ACC and F columns (Table 7) show the performance of the system before and after we added our semantic rules to the list of axioms needed by the logic prover. Table 7: The accuracy (ACC) and f-measure (F) performance values of our system", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 166, |
|
"text": "(Table 7)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 304, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RTE performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The results show that richer semantic connectivity between text concepts improves the performance of a semantic entailment system. The overall accuracy increases by around 5% on the test data and almost 8% on the development set. We obtained performance improvements for all application settings, except for the Paraphrase Acquisition task. For this application, we obtained the smallest axiom coverage ( Table 5 ). The impact of the semantic axioms on each NLP application data set correlates with the improvement that the addition of the rules brought to the system's accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 413, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RTE performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Our error analysis showed that the system did not take full advantage of our semantic axioms, because the semantic parser did not identify all the semantic relations needed as building blocks by the axioms. We noticed a significant decrease in the logic prover's usage of world-knowledge axioms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RTE performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In this paper, we present a logic-based semantic approach to the recognizing textual entailment task. The system participating in the RTE competition used a set of world-knowledge, NLP, and lexical chain-based axioms and an in-house logic prover which received as input the logic forms of the two texts, enhanced with semantic relation instances. Because state-of-the-art semantic parsers cannot extract the complete semantic information encoded in text, the need for a semantic calculus in NLP becomes evident. We introduce semantic axioms that either combine two semantic instances or label relations between the frame elements of a given frame. Preliminary statistical results show that incorporating semantic rules into the logic prover can double the semantic connectivity between the concepts of the analyzed text. Our process of identifying more semantic instances reduces the logic-based RTE system's dependence on world-knowledge axioms, while improving its overall accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
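The first family of axioms mentioned in the conclusion combines two semantic relation instances into a third. A minimal sketch of such a combination, using the PART-WHOLE transitivity suggested by the paper's fur/collar/coat example (the closure loop and the tuple encoding of relation instances are our assumptions, not the authors' prover internals):

```python
# Sketch: close a set of relation instances under PW transitivity,
# one instance of the paper's "combine two semantic instances" axioms.
def combine(relations):
    """relations: set of (REL, x, y) instances. Returns the set
    closed under PW(x, y) & PW(y, z) => PW(x, z)."""
    facts = set(relations)
    changed = True
    while changed:
        changed = False
        for (r1, x, y) in list(facts):
            for (r2, y2, z) in list(facts):
                if r1 == r2 == "PW" and y == y2:
                    new = ("PW", x, z)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

# fur PW collar, collar PW coat => fur PW coat
facts = combine({("PW", "fur", "collar"), ("PW", "collar", "coat")})
print(("PW", "fur", "coat") in facts)  # True
```

Each such derived instance adds an edge between concepts, which is how the rules "double the semantic connectivity" of the analyzed text.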
|
{ |
|
"text": "After all, the entailment, inference, and equivalence terms originated from logic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "See (Moldovan et al., 2004) for definitions and examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "R(x, y) indicates that relation R holds between x and y. 4 This set includes the AGENT, OBJECT, INSTRUMENT, BEN-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The equality holds only if the two composition terms exist. 7 On average, each semantic relation has 2.075 pairs of arguments. For example, SRC can connect two nouns (US investor), or an adjective and a noun (American investor) and, depending on its arguments, SRC will participate in different combinations. Out of the 27,556 combinations, only 7,109 are syntactically possible. 8 n, v, a, and r stand for noun, verb, adjective, and adverb, respectively. As an example, R(n, n) means that relation R can connect two nouns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
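The footnote's count of syntactically possible combinations can be reproduced by checking, for every ordered pair of relation signatures, whether the connecting argument's part of speech matches. The relation names and signatures below are toy assumptions, not the paper's full inventory:

```python
# Sketch of the combination-count arithmetic in footnote 7:
# R1(x, y) and R2(y, z) can combine only when R1's second argument
# and R2's first argument agree in part of speech.
def compatible_combinations(signatures):
    """signatures: {relation: set of (pos1, pos2) argument pairs}.
    Count ordered signature pairs whose middle term unifies."""
    count = 0
    for r1, sigs1 in signatures.items():
        for (a1, b1) in sigs1:
            for r2, sigs2 in signatures.items():
                for (a2, b2) in sigs2:
                    if b1 == a2:  # shared middle term matches in POS
                        count += 1
    return count

toy = {"SRC": {("n", "n"), ("a", "n")},  # US investor / American investor
       "PW": {("n", "n")}}
print(compatible_combinations(toy))  # 6
```

Run over the full inventory of relations and their 2.075 signatures each, the same filter is what reduces the 27,556 raw combinations to the 7,109 syntactically possible ones.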
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The Berkeley FrameNet project", |
|
"authors": [ |
|
{ |
|
"first": "Collin", |
|
"middle": [], |
|
"last": "Baker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Fillmore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lowe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of COLING/ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collin Baker, Charles Fillmore, and John Lowe. 1998. The Berkeley FrameNet project. In Proceedings of COLING/ACL.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "MITRE's Submissions to the EU Pascal RTE Challenge", |
|
"authors": [ |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bayer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Burger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Ferro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL RTE Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel Bayer, John Burger, Lisa Ferro, John Henderson, and Alexander Yeh. 2005. MITRE's Submissions to the EU Pascal RTE Challenge. In Proceedings of the PASCAL RTE Challenge.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Combining Shallow and Deep NLP Methods for Recognizing Textual Entailment", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL RTE Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bos and Katja Markert. 2005. Combining Shal- low and Deep NLP Methods for Recognizing Textual Entailment. In Proceedings of the PASCAL RTE Chal- lenge.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The PASCAL Recognising Textual Entailment Challenge", |
|
"authors": [ |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Glickman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernardo", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL RTE Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL Recognising Textual Entailment Challenge. In Proceedings of the PASCAL RTE Chal- lenge.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "An Inference Model for Semantic Entailment in Natural Language", |
|
"authors": [ |
|
{ |
|
"first": "Rodrigo", |
|
"middle": [], |
|
"last": "de Salvo Braz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roxana", |
|
"middle": [], |
|
"last": "Girju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vasin", |
|
"middle": [], |
|
"last": "Punyakanok", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Sammons", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL RTE Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rodrigo de Salvo Braz, Roxana Girju, Vasin Pun- yakanok, Dan Roth, and Mark Sammons. 2005. An Inference Model for Semantic Entailment in Natural Language. In Proceedings of the PASCAL RTE Chal- lenge.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Applying COGEX to Recognize Textual Entailment", |
|
"authors": [ |
|
{ |
|
"first": "Abraham", |
|
"middle": [], |
|
"last": "Fowler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Hauser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Hodges", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Niles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adrian", |
|
"middle": [], |
|
"last": "Novischi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Stephan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL RTE Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abraham Fowler, Bob Hauser, Daniel Hodges, Ian Niles, Adrian Novischi, and Jens Stephan. 2005. Applying COGEX to Recognize Textual Entailment. In Pro- ceedings of the PASCAL RTE Challenge.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Knowledge Processing on Extended WordNet", |
|
"authors": [ |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "WordNet: an Electronic Lexical Database and Some of its Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sanda Harabagiu and Dan Moldovan. 1998. Knowledge Processing on Extended WordNet. In WordNet: an Electronic Lexical Database and Some of its Applica- tions.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Textual Entailment Recognition Based on Dependency Analysis and WordNet", |
|
"authors": [ |
|
{ |
|
"first": "Jes\u00fas", |
|
"middle": [], |
|
"last": "Herrera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anselmo", |
|
"middle": [], |
|
"last": "Pe\u00f1as", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felisa", |
|
"middle": [], |
|
"last": "Verdejo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL RTE Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jes\u00fas Herrera, Anselmo Pe\u00f1as, and Felisa Verdejo. 2005. Textual Entailment Recognition Based on Dependency Analysis and WordNet. In Proceedings of the PASCAL RTE Challenge.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Recognizing Textual Entailment Using Lexical Similarity", |
|
"authors": [ |
|
{ |
|
"first": "Valentin", |
|
"middle": [], |
|
"last": "Jijkoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "de Rijke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL RTE Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Valentin Jijkoun and Maarten de Rijke. 2005. Recogniz- ing Textual Entailment Using Lexical Similarity. In Proceedings of the PASCAL RTE Challenge.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Recognizing Textual Entailment with Tree Edit Distance Algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Milen", |
|
"middle": [], |
|
"last": "Kouylekov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernardo", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL RTE Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Milen Kouylekov and Bernardo Magnini. 2005. Rec- ognizing Textual Entailment with Tree Edit Distance Algorithms. In Proceedings of the PASCAL RTE Chal- lenge.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Lexical Chains for Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adrian", |
|
"middle": [], |
|
"last": "Novischi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Moldovan and Adrian Novischi. 2002. Lexical Chains for Question Answering. In Proceedings of COLING.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Logic Form Transformation of WordNet and its Applicability to Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vasile", |
|
"middle": [], |
|
"last": "Rus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Moldovan and Vasile Rus. 2001. Logic Form Trans- formation of WordNet and its Applicability to Ques- tion Answering. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "COGEX A Logic Prover for Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Maiorano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the HLT/NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Moldovan, Christine Clark, Sanda Harabagiu, and Steve Maiorano. 2003. COGEX A Logic Prover for Question Answering. In Proceedings of the HLT/NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Models for the Semantic Classification of Noun Phrases", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adriana", |
|
"middle": [], |
|
"last": "Badulescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Tatu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Antohe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roxana", |
|
"middle": [], |
|
"last": "Girju", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of HLT/NAACL, Computational Lexical Semantics workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Moldovan, Adriana Badulescu, Marta Tatu, Daniel Antohe, and Roxana Girju. 2004. Models for the Se- mantic Classification of Noun Phrases. In Proceed- ings of HLT/NAACL, Computational Lexical Seman- tics workshop.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "UCD IIRG Approach to the Textual Entailment Challenge", |
|
"authors": [ |
|
{ |
|
"first": "Eamonn", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Stokes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Dunnion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Carthy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL RTE Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eamonn Newman, Nicola Stokes, John Dunnion, and Joe Carthy. 2005. UCD IIRG Approach to the Textual Entailment Challenge. In Proceedings of the PASCAL RTE Challenge.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Robust Textual Inference using Diverse Knowledge Sources", |
|
"authors": [ |
|
{ |
|
"first": "Rajat", |
|
"middle": [], |
|
"last": "Raina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Cox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Michels", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "MacCartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL RTE Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rajat Raina, Aria Haghighi, Christopher Cox, Jenny Finkel, Jeff Michels, Kristina Toutanova, Bill MacCartney, Marie-Catherine de Marneffe, Christopher Manning, and Andrew Ng. 2005. Robust Textual Inference using Diverse Knowledge Sources. In Proceedings of the PASCAL RTE Challenge.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "T Semantics and H Semantics. The solid arrows represent the relations identified by the semantic parser. The dotted arrows symbolize the lexical chains between concepts in T and their analogous concepts in H (US T and America H belong to the same WordNet synset). The dashed arrows denote the relations inferred by combining two semantic relations. The long-dash arrows indicate the relations between frame elements.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"text": "T Departing: [John and his son, George,]Theme.fe emigrated [with Mike, John's uncle,]Cotheme.fe to [US]Goal.fe in [1969]Time.fe. T Kinship: [John]Ego.fe and his son, [George,]Alter.fe emigrated with [Mike]Alter.fe, [John]Ego.fe's uncle, to US in 1969. T Axiom 1", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"text": "Entailment proof example. Table 2 lists the semantic relations and their abbreviations. Sections 3.2 and 4.1 will detail the semantics behind the axioms T Axiom 1, T Axiom 2, T Axiom 3, T Axiom 4, and H Axiom 1.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "The set of semantic relations", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"text": "Examples of semantic combination axioms (identified by manually checking all the LA Times corpus)", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "[fur]Material [collars]Subpart,Target of [their]Wearer [coats]Clothing. PW(fur, collar) and PW(collar, coat). CLOTHING F \u2192 PAH(descriptor, garment) \u2228 PAH(descriptor, material). Example: She didn't bring heels with her so she decided on [gold]Descriptor [leather]Material [flip-flops]Garment,Target. PAH(gold, leather) \u2228 PAH(gold, flip-flop). KINSHIP F \u2192 KIN(ego, alter). Example: The new subsidiary is headed by [Rupert Soames]Alter, [son]Target [of the former British Ambassador to France and EC vice-president]Ego. KIN(Rupert Soames, the former British Ambassador to France and EC vice-president). GETTING F \u2192 POS(recipient, theme). GETTING F \u2192 \u00ac POS(source, theme) (only if the source is a person). Example: In some cases, [the BGS libraries]Recipient had [obtained]Target [copies of theses]Theme [from the authors]Source [by purchase or gift]Means, and no loan records were available for such copies. POS(the BGS libraries, copies of theses) and \u00ac POS(authors, copies of theses). GETTING F \u2192 SRC(theme, source) (if the source is not a person). Example: He also said that [Iran]Recipient [acquired]Target [fighter-bomber aircraft]Theme [from countries other than the USA and the Soviet Union]Source. SRC(fighter-bomber aircraft, countries other than the USA and the Soviet Union)", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"text": "For example, in the sentence [A boat]Agent [carrying]Target [would-be Moroccan illegal emigrants]Theme [from UK]Path start [to Spain]Path end sank in the Strait of Gibraltar on June 8, the CARRYING frame's axioms do not apply: neither the boat nor the emigrants reach Spain (the path end of the motion). 10 Section 4.2 lists some exception cases.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"text": "The RTE data set includes 1367 English (T, H) pairs from the news domain (political, economical, etc.).", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Semantic Axioms</td><td>CD</td><td/><td>IE</td><td/><td/><td>IR</td><td>MT</td><td/><td>PP</td><td/><td>QA</td><td/><td>RC</td></tr><tr><td/><td>T</td><td>F</td><td>T</td><td>F</td><td>T</td><td>F</td><td>T</td><td>F</td><td>T</td><td>F</td><td>T</td><td>F</td><td>T</td><td>F</td></tr><tr><td>applied to all T s applied to all Hs solution for (T, H)</td><td>13.33 1.33 9.33</td><td>21.33 9.33 0</td><td>26.66 5 20</td><td>10 10 0</td><td>6.66 0 4.44</td><td colspan=\"2\">Test data (%) 4.44 11.66 0 1.66 0 10</td><td>10 1.66 1.66</td><td>8 8 0</td><td>0 0 0</td><td>15.38 1.53 10.77</td><td>7.69 0 1.53</td><td>21.43 0 10</td><td>17.14 1.43 5.71</td></tr><tr><td>applied to all T s applied to all Hs solution for (T, H)</td><td>22 4 10</td><td>27.08 8.33 2.08</td><td>34.28 5.71 22.85</td><td>5.71 2.85 5.71</td><td colspan=\"3\">Development data (%) 8.57 8.57 18.51 0 2.85 7.4 5.71 2.85 18.51</td><td>18.51 3.7 3.7</td><td>5.12 0 2.56</td><td>9.3 0 0</td><td>28.88 2.22 20</td><td>0 0 0</td><td>9.61 0 7.69</td><td>9.8 1.96 0</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF8": { |
|
"text": "The impact of the semantic axioms on each NLP application data set. T and F stand for True and", |
|
"type_str": "table", |
|
"content": "<table><tr><td>False entailments, respectively.</td></tr><tr><td>The development set consists of 567 examples and the test set contains the remaining 800 pairs.</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF10": { |
|
"text": "Applicability on the RTE data", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |