{ "paper_id": "U05-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:08:25.189968Z" }, "title": "Learning of Graph Rules for Question Answering", "authors": [ { "first": "Diego", "middle": [], "last": "Molla", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University Sydney", "location": { "country": "Australia" } }, "email": "" }, { "first": "Menno", "middle": [], "last": "Van Zaanen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University Sydney", "location": { "country": "Australia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "AnswerFinder is a framework for the development of question-answering systems. An-swerFinder is currently being used to test the applicability of graph representations for the detection and extraction of answers. In this paper we briefly describe AnswerFinder and introduce our method to learn graph patterns that link questions with their corresponding answers in arbitrary sentences. The method is based on the translation of the logical forms of questions and answer sentences into graphs, and the application of operations based on graph overlaps and the construction of paths within graphs. The method is general and can be applied to any graph-based representation of the contents of questions and answers.", "pdf_parse": { "paper_id": "U05-1005", "_pdf_hash": "", "abstract": [ { "text": "AnswerFinder is a framework for the development of question-answering systems. An-swerFinder is currently being used to test the applicability of graph representations for the detection and extraction of answers. In this paper we briefly describe AnswerFinder and introduce our method to learn graph patterns that link questions with their corresponding answers in arbitrary sentences. The method is based on the translation of the logical forms of questions and answer sentences into graphs, and the application of operations based on graph overlaps and the construction of paths within graphs. The method is general and can be applied to any graph-based representation of the contents of questions and answers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Text-based question answering (henceforth QA) is the process whereby an answer to an arbitrary question formulated in plain English is found by searching through unedited text documents and returned to the user. The current availability of increasingly large volumes of text for human consumption has prompted an intensive research in QA. A well-known forum for the evaluation of QA systems is the questionanswering track of the Text REtrieval Conference 1 (Voorhees, 2001) , where systems developed by some of the most active researchers in the area are compared within the context of a common task. In addition, QA technology is being deployed in practical applications. For example, several Web-based question-answering systems are currently available (e.g. START 2 , AnswerBus 3 ), and recently popular Web search engines have started incorporating automated question-answering techniques (e.g. 
Google 4 , as for September 2005).", "cite_spans": [ { "start": 457, "end": 473, "text": "(Voorhees, 2001)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The development of successful QA technology requires solid foundations both in the areas of software engineering and natural language processing. The nature of text-based question answering requires the use of a wide range of techniques, some of which are described in (Hirschman and Gaizauskas, 2001; Voorhees, 2001) . For example, traditional document retrieval techniques are typically used to preselect the documents or document fragments that may contain the answer to the question. In addition, information extraction techniques are commonly used to extract all the named entities in the question and the preselected text, on the ground that fact-based questions typically expect one of these named entities as an answer. To analyse the questions, techniques range from the use of regular expressions to the use of machine-learning techniques that classify the questions according to the type of the expected answer. Finally, to find the answer, techniques may vary from a bag of words comparison of keywords used in the question and the answer sentence, to the use of full parsers and logical proof tools such as OTTER 5 . Additional resources are typically used, notably the Word-Net lexical resource. 6 Consequently, the most successful QA systems are complex pieces of engineering that require frequent development and testing, such as (Moldovan et al., 2003) . An unwelcome side-effect of this is that much of the effort spent in developing a QA system is spent, not in the developing of QA methodologies, but in defining the optimal parameters of a system.", "cite_spans": [ { "start": 269, "end": 301, "text": "(Hirschman and Gaizauskas, 2001;", "ref_id": "BIBREF4" }, { "start": 302, "end": 317, "text": "Voorhees, 2001)", "ref_id": "BIBREF20" }, { "start": 1210, "end": 1211, "text": "6", "ref_id": null }, { "start": 1346, "end": 1369, "text": "(Moldovan et al., 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the other hand, QA presents challenging theoretical issues. One of the most salient theoretical challenges is related to the problem of paraphrasing. There are many ways of expressing the same piece of information. For example, the simple question Where was Peter born? can be similarly asked as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. In what city was Peter born?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. What is the birthplace of Peter?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What is Peter's birthplace?", "sec_num": "2." }, { "text": "Whereas it may not be difficult to manually devise rules that account for the most popular ways of rephrase a question, variations in the sentences containing the answer are much less predictable. A human would not have any problem to find the answer to the above questions in the following examples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Name Peter's birthplace", "sec_num": "4." }, { "text": "1. Peter was born in Paris.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Name Peter's birthplace", "sec_num": "4." 
}, { "text": "Paris is Peter's birthplace.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "France.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "4. Mrs Smith gave birth to Peter in Paris.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "However, a machine would need to have access to lexical, syntactic, and world knowledge information if it is to find the answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "The above are simple constructed examples. Real text with much more complex examples abounds, but the examples above suffice to illustrate the problem encountered by any textbased question-answering system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "For further details about the problem of paraphrasing within the context of QA, see (Rinaldi et al., 2003) .", "cite_spans": [ { "start": 84, "end": 106, "text": "(Rinaldi et al., 2003)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "Some systems have attempted to systematically build rules that link questions with answer sentences. For example, (Soubbotin, 2001 ) used a complex hierarchy of rules on surface strings. Other systems, such as (Echihabi et al., 2004) , use a method for the automatic learning of surface-level rules. Other systems, such as (Bouma et al., 2005) , use hand-crafted rules based on syntactic information.", "cite_spans": [ { "start": 114, "end": 130, "text": "(Soubbotin, 2001", "ref_id": "BIBREF16" }, { "start": 210, "end": 233, "text": "(Echihabi et al., 2004)", "ref_id": "BIBREF2" }, { "start": 323, "end": 343, "text": "(Bouma et al., 2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "Our hypothesis is that the accuracy of question-answering systems would improve if these rules are based on linguistic features located at a deeper level. Furthermore, to handle the problem of paraphrasing, the rules must be automatically learnt based on a representative corpus of questions and answers. In this paper we present our current work for developing and testing this hypothesis. Our work is being integrated in the AnswerFinder QA system, which is briefly described in Section 2. Section 3 describes the Logical Graph notation that we use to represent the logical contents of questions and answer sentences. Section 4 presents the rules based on Logical Graphs, and how they are automatically learnt from a corpus of question/answer pairs. Section 5 shows the use of these rules to find the exact answer to a question, and Section 6 shows the results of our evaluations. Sections 7 and 8 point to related research and give the final conclusions, respectively. The design of the system is functional and object-oriented. Focusing on function (instead of data) makes it easier to replace functions of the system with others. The system is implemented in C++. C++ was selected because it is a high-level language on the one hand, but can also be used in a more low-level way. 
It interfaces well with C, which allows for easy integration of many external systems. Furthermore, the resulting executable is relatively fast.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "AnswerFinder consists of two main components, the client and the server. The client can get information from the server about the algorithms and files/document collections it provides to clients. The client can also send information to the server requesting question(s) to be processed using specific algorithms and data collections. The server can be fully configured via XML. For example, if the client calls the server without any configuration information, the server replies with an XML document listing all the available services. The client can then call the server with an XML file containing all the configuration information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "The request the server receives from the client contains all information needed to process the question(s). It specifies the document collection and the algorithms that should be used. The server then runs the required services by creating algorithm objects. An algorithm defines the full question-answering process, and it may use sub-algorithms for specific phases (such as question classification, document preselection, etc). The sub-algorithms are designed so that they can be called by any algorithm. Thus, different ways of trying QA techniques can be easily implemented by defining new algorithms that call the specific sub-algorithms with specific parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "The following sub-algorithms are currently defined in AnswerFinder. They are classified by the question-answering phase in which they are used: Document Selection. Sub-algorithms in this phase are used to preselect the candidate documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "\u2022 NIST Doc Selection:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "This subalgorithm returns the documents provided by NIST for the TREC QA task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "Question Classification. Sub-algorithms in this phase are used to analyse and classify the question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "\u2022 Regexp Q Classification: This is a set of regular expressions that determines the type of the expected answer according to a simple hierarchy of types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "Sentence Selection. Sub-algorithms in this phase are used to determine what sentences are likely to contain an answer. These subalgorithms can be cascaded to provide the final ranking of sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." 
}, { "text": "\u2022 Word Overlap: Count the number of words in common between the question and the answer sentence. This sub-algorithm allows the use of a list of stop words that are not considered in the overlap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "\u2022 Grammatical Relations Overlap: Count the number of grammatical relations in common. We used a subset of the grammatical relations defined by (Carroll et al., 1998) .", "cite_spans": [ { "start": 143, "end": 165, "text": "(Carroll et al., 1998)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "\u2022 Logical Form Rules: Count the number of logical form predicates in common, after applying a set of logical form rules. The process is explained in (Moll\u00e1 and Gardiner, 2004 ).", "cite_spans": [ { "start": 149, "end": 174, "text": "(Moll\u00e1 and Gardiner, 2004", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "\u2022 Logical Graph Rules: Count the graph overlap between the question and the answer after applying graph transformation rules. This process is explained in the remainder of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "Named Entity. sub-algorithms in this phase are used to detect all named entities in the text (person and organisation names, locations, etc).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "\u2022 LingPipe: This sub-algorithm uses the Alias-i LingPipe named entity recogniser. 11", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paris, Peter's birthplace, is located in", "sec_num": "3." }, { "text": "We are developing a graph notation for the expression of the logical contents of questions and answer sentences. Our Logical Graphs are inspired on Conceptual Graphs (Sowa, 1979) , though our graphs do not attempt to encode the full semantics of a sentence. Instead, the focus of our Logical Graphs is on robustness and practicability.", "cite_spans": [ { "start": 166, "end": 178, "text": "(Sowa, 1979)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Logical Graphs", "sec_num": "3" }, { "text": "Robustness. It should be possible to automatically produce the Logical Graph of any sentence, even of those sentences that are not fully grammatical. The importance of this feature becomes obvious once one looks at the quality of the English used in typical corpora used for QA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graphs", "sec_num": "3" }, { "text": "Practicability. The Logical Graphs should be automatically constructed in relatively short run time. The operations with the graphs should be computable within relatively short time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graphs", "sec_num": "3" }, { "text": "Like Sowa's Conceptual Graphs, our Logical Graphs are directed, bipartite graphs with two types of vertices, concepts and relations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graphs", "sec_num": "3" }, { "text": "Concepts. 
Examples of concepts are objects dog, table, events and states run, love, and properties red, quick. Concepts may be arranged in a network of word relations (such as ontologies), though our method does not yet exploit this possibility in full.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graphs", "sec_num": "3" }, { "text": "Relations. Relations act as links between concepts. Traditional examples of relations are grammatical roles and prepositions. However, to facilitate the production of the Logical Graphs we have decided to use a labelling of relations which is relatively close to the syntactic level of linguistic information. For example, instead of using the usual thematic roles agent, patient, and so forth, we use syntactic roles subject, object, etc. For convenience, and to avoid entering into a debate about the possible names of the syntactic roles, we have decided to use numbers. Thus, the relation 1 indicates the link to the first argument of a verb (that is, what is usually a subject). The relation 2 indicates the link to the second argument of a verb (usually the direct object), and so forth. Figure 1 shows various examples of Logical Graphs. The first example shows the use of a relation 1 to express the subject of the go event, and two relations, to and by, that represent two prepositions. The second example shows the use of lattice structures to represent complex entities (such as the ones formed when a conjunction is used). This use of lattices is inspired from the treatment of plurals and complex events (Link, 1983; Moll\u00e1, 1997) . Finally, the third example shows the expression of clauses and control verbs. These examples only cover a few of the linguistic features but we hope they will suffice to show the expressive power of our Logical Graphs.", "cite_spans": [ { "start": 1217, "end": 1229, "text": "(Link, 1983;", "ref_id": "BIBREF6" }, { "start": 1230, "end": 1242, "text": "Moll\u00e1, 1997)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 794, "end": 802, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Logical Graphs", "sec_num": "3" }, { "text": "The Logical Graphs are constructed automatically from the output of the Conexor dependency-based parser (Tapanainen and J\u00e4rvinen, 1997) . The choice of the parser was arbitrary, and it would be easy to produce the same or similar graphs from the output of any dependency-based parser. It would be also possible to use the output of a constituency-based parser by applying well-known methods to convert from constituency structures to dependency structures like those described by Schneider (1998), or practical methods like the one described by Harabagiu et al. (2000) .", "cite_spans": [ { "start": 104, "end": 135, "text": "(Tapanainen and J\u00e4rvinen, 1997)", "ref_id": "BIBREF19" }, { "start": 545, "end": 568, "text": "Harabagiu et al. (2000)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Logical Graphs", "sec_num": "3" }, { "text": "The Logical Graph rules used by AnswerFinder are based on the concepts of graph overlap and path between two subgraphs in a graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" }, { "text": "The graph overlap between two sentences is the overlap of the Logical Graphs of the two sentences. A na\u00efve definition of the overlap between two graphs would be the graph consisting of all the common concepts and relations. 
The actual definition of an overlap, however, must be made more complicated in the light of the existence of repeated vertex labels. The third example of Figure 1 , for example, shows that the relations named 1 and 2 appear twice in the same graph. Concept labels can also be repeated in a graph if the sentence uses the same word to express two different concepts. For example, the sentence John bought a book and Mary bought a magazine describes two distinct events of buying.", "cite_spans": [], "ref_spans": [ { "start": 378, "end": 386, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" }, { "text": "Graph overlaps must therefore be defined on the basis of a correspondence relation so that each vertex (edge) of a graph corresponds to one and only one vertex (edge) in the other graph (Montes-y-G\u00f3mez et al., 2001) . Thus, there is a projection from the graph overlap to a subgraph of each of the original graphs, such that there is a correspondence from every vertex (or edge) of the graph overlap to a vertex (or edge) of the projected subgraphs. Figure 2 shows an example of two graph overlaps and their projections to subgraphs in the original graphs.", "cite_spans": [ { "start": 187, "end": 216, "text": "(Montes-y-G\u00f3mez et al., 2001)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 451, "end": 459, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" }, { "text": "There may be several overlaps between two graphs. Of these, the most useful ones are the maximal overlaps, that is, the overlaps that are not subgraphs of any other overlap. There may still be several maximal overlaps between two graphs. For example, Figure 2 shows two different maximal overlaps between the Logical Graphs of two sentences.", "cite_spans": [], "ref_spans": [ { "start": 259, "end": 267, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" }, { "text": "A path between two subgraphs in a graph G is a subgraph of G that connects the two subgraphs. As is the case with graph overlaps, there may be several paths between two subgraphs, especially when the graphs have a high density of edges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" }, { "text": "Each rule r contains three components. For the sake of completeness the components are listed here, but their use will be described in detail in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" }, { "text": "r o An overlap between a question and its answer sentence. This overlap is used to determine when the rule should trigger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" }, { "text": "r p A path between the overlap and the actual answer in the answer sentence. This path is used to find the location of the exact answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" }, { "text": "r a A graph representation of the exact answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" },
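{ "text": "To make the rule representation concrete, the following C++ fragment sketches one possible encoding of Logical Graphs and of the three rule components just listed. It is an illustration of the data structures implied by the description above, not the actual AnswerFinder implementation, and all type and member names are invented for the example.

#include <string>
#include <vector>

// A Logical Graph is a directed, bipartite graph whose vertices are either
// concepts (dog, run, red, ...) or relations (1, 2, to, by, ...).
struct Vertex {
    enum Kind { Concept, Relation } kind;
    std::string label;            // e.g. bear, birthplace, 1, genitive; an empty
                                  // label stands for a generalised (wildcard) concept
};

struct Edge {
    int from;                     // index of the source vertex
    int to;                       // index of the target vertex
};

struct LogicalGraph {
    std::vector<Vertex> vertices;
    std::vector<Edge> edges;      // every edge links a concept with a relation
};

// A graph rule with the three components described above.
struct GraphRule {
    LogicalGraph overlap;         // r o : overlap between question and answer sentence
    LogicalGraph path;            // r p : path from the overlap to the exact answer
    LogicalGraph answer;          // r a : graph representation of the exact answer
    double weight;                // W(r), estimated on the training corpus (Section 4.1)
};", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" },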
{ "text": "Figure 2: Graph overlaps of the sentences John saw a book and Mary saw a table and John saw a table.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" }, { "text": "The two overlaps are shown in thick lines. The dashed lines show the correspondence relation between the graph vertices of each overlap and the projected subgraphs in the original graphs (the correspondence relation for the edges is not shown to improve readability).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Graph Rules", "sec_num": "4" }, { "text": "With the help of a training set of questions and sentences containing the answers, a set of Logical Graph rules can be learnt. Figure 3 shows an example of a rule learnt from a question and an answer sentence. The graph notation has been simplified by replacing the relation vertices with labelled edges. The algorithm for learning rules is fairly straightforward and is shown in Figure 4 . Rules learnt with this algorithm are very specific to the question/answer pair. For example, the rule in Figure 3 would only trigger for questions about Peter and it would not trigger, say, for the question Where was Mary born?. The rule needs to be generalised. Our generalisation method is very simple: relations do not generalise (relations express syntactic or semantic relations and it is not advisable to over-generalise them), and concepts generalise to \" \" (that is, concepts that would unify with anything). The generalisation process applies to every concept except those that belong to a specific list of \"stop concepts\" (in analogy to the idea of stop words in Information Retrieval). The current list of stop concepts is:", "cite_spans": [], "ref_spans": [ { "start": 127, "end": 135, "text": "Figure 3", "ref_id": null }, { "start": 363, "end": 371, "text": "Figure 4", "ref_id": "FIGREF1" }, { "start": 471, "end": 479, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Learning of Logical Graph Rules", "sec_num": "4.1" }, { "text": "and, or, not, nor, if, otherwise, have, be, become, do, make", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning of Logical Graph Rules", "sec_num": "4.1" }, { "text": "The resulting generalised rules may then overgeneralise and therefore they must be weighted according to their ability to detect the correct answer in the training corpus. The weight W(r) of a rule r is computed following the formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning of Logical Graph Rules", "sec_num": "4.1" }, { "text": "W(r) = # correct answers found / # answers found", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning of Logical Graph Rules", "sec_num": "4.1" },
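{ "text": "The learning algorithm of Figure 4, together with the generalisation and weighting steps just described, can be rendered as the following C++ sketch. This is an illustrative rendering rather than the actual implementation: it reuses the LogicalGraph and GraphRule types sketched earlier, the functions overlaps and paths that enumerate graph overlaps and paths are only declared here, and the remaining helper names are invented for the example.

#include <set>
#include <string>
#include <vector>

// Assumed available: enumerate the maximal overlaps between two Logical Graphs,
// and the paths in g between two of its subgraphs (Section 4).
std::vector<LogicalGraph> overlaps(const LogicalGraph& a, const LogicalGraph& b);
std::vector<LogicalGraph> paths(const LogicalGraph& g,
                                const LogicalGraph& from, const LogicalGraph& to);

// One entry of the training corpus (cf. Figure 5), with the exact answer marked up.
struct TrainingPair {
    LogicalGraph question;        // graph of the question
    LogicalGraph sentence;        // graph of the answer sentence
    LogicalGraph answer;          // graph of the exact answer
};

// Stop concepts are never generalised.
const std::set<std::string> stopConcepts = {
    \"and\", \"or\", \"not\", \"nor\", \"if\", \"otherwise\",
    \"have\", \"be\", \"become\", \"do\", \"make\" };

// Generalisation: relations are kept as they are; concepts become wildcards
// (empty labels) unless they are stop concepts.
LogicalGraph generalise(LogicalGraph g) {
    for (Vertex& v : g.vertices)
        if (v.kind == Vertex::Concept && stopConcepts.count(v.label) == 0)
            v.label.clear();
    return g;
}

// Rule learning (Figure 4): one rule per overlap/path combination.
std::vector<GraphRule> learnRules(const std::vector<TrainingPair>& corpus) {
    std::vector<GraphRule> rules;
    for (const TrainingPair& pair : corpus)
        for (const LogicalGraph& o : overlaps(pair.question, pair.sentence))
            for (const LogicalGraph& p : paths(pair.sentence, o, pair.answer)) {
                GraphRule r;
                r.overlap = generalise(o);
                r.path    = generalise(p);
                r.answer  = pair.answer;  // whether r a is also generalised is left open above
                r.weight  = 0.0;          // W(r) is estimated afterwards on the training corpus
                rules.push_back(r);
            }
    return rules;
}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning of Logical Graph Rules", "sec_num": "4.1" },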
{ "text": "Given a question q with graph Q and a sentence s with graph S, the process to find the answer iterates over all the rules. A rule r triggers if the overlap component r o of the rule is a subgraph of Q (which can be easily determined by checking that ovl(r o , Q) = r o ). When that happens, the graph of the question is expanded with the rule path r p , producing a new graph Q rp . The resulting graph is more likely to produce a large overlap with an answer sentence similar to the one that generated the rule and, most importantly, the graph contains an indication of where the answer is located. Once the graph of the question has been expanded with the path, one only needs to compute the overlap between this expanded graph and that of the answer sentence, ovl(Q rp , S). If the overlap retains part of the exact answer that was marked up by the graph rule, then we have found a possible answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Question Answering", "sec_num": "5" }, { "text": "The above method will cover simple cases, but it needs to be extended to cover two special cases that arise from the fact that the question/sentence pairs that generated the rule are likely to be different from the actual question and sentence being tested. First of all, a question may trigger several rules, and each rule may extract a different answer from the answer sentence. And second, it is possible that the overlap between the expanded graph and the sentence does not contain the complete answer but only part of it. We will explain how to handle these two cases below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Question Answering", "sec_num": "5" }, { "text": "To identify the correct answer among a set of possible answers it is necessary to establish a measure of \"answerhood\" so that the correct answer has a higher score than the other candidates. The rule weight gives an indication of the quality of the answer extracted, but we also need to take into account the similarity (or otherwise) between the text that originated the rule and the text being tested. Given that the graph of the question has been expanded with the path linking the question and the exact answer determined in the training corpus, the size of the overlap between the expanded graph of the test question and the graph of the test answer sentence can be used as an estimate. Thus, the measure of answerhood A(pa) of a possible answer pa is the product of the weight of the rule r used and the size of the best overlap between the graph of the question expanded with the rule path and the graph S of the answer sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Question Answering", "sec_num": "5" }, { "text": "A(pa) = W(r) \u00d7 size(ovl(Q rp , S))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Question Answering", "sec_num": "5" }, { "text": "The size of a graph is computed as the weighted sum of all concepts and relations in the graph. The formula to determine the weight of each concept and relation is inspired by the Inverse Document Frequency (IDF) measure used in Document Retrieval. The actual formula that we use is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Question Answering", "sec_num": "5" }, { "text": "W i = (1 / log N) \u00d7 log (N / n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Question Answering", "sec_num": "5" }, { "text": "When did Jack Welch become chairman of General Electric? Jack Welch took over GE in 1981. Welch became GE's chief executive in April 1981. Welch was named chief executive in 1981.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning of Logical Graph Rules", "sec_num": "4.1" }, { "text": "How many people did he fire from GE? 
He sold off underperforming divisions and fired about 100,000 people. More than 100,000 GE jobs have been axed under Welch. The formula includes the constant factor 1/ log N to ensure that the values range between 0 and 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "35.4", "sec_num": null }, { "text": "We have conducted an initial evaluation of the use of these rules within the task of question answering. For this evaluation we created a training and testing corpus based on the first 111 questions of the question-answering track of TREC 2004 (Voorhees, 2004) . For each question, we applied Ken Litkowsky's patterns 12 to automatically extract sentences in the AQUAINT corpus containing possible answers. These sentences were checked manually, and only sentences containing the answer and a justification were selected. As a result we obtained about 560 question/answer pairs. The exact answers in the answer sentences of the training corpus were manually marked up to ensure a corpus without wrong answers. Figure 5 shows an extract of the training corpus.", "cite_spans": [ { "start": 244, "end": 260, "text": "(Voorhees, 2004)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 710, "end": 718, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "6" }, { "text": "The question/answer training corpus was split in 5 sets, and a 5-fold cross-validation was performed. Table 1 shows the results. In this table, the accuracy indicates the percentage of questions that are answered correctly. The MRR measure is as used in the TREC evaluations (Voorhees, 2001) , and it measures the mean reciprocal rank for ranks from 1 to 5. For example, if the correct answer was ranked 3 (i.e. the system ranked two wrong answers higher than the correct answer), then the reciprocal rank is 1/3. If the correct answer was ranked beyond 5, then the reciprocal rank is 0. The MRR is the mean of the reciprocal ranks across all questions. Given that the results indicate an MRR with value higher than the accuracy, we can deduce that the system sometimes finds the correct answer but it does not assign it the top rank. Figure 6 shows the distribution of weights among the rules learnt (this is the sum of all the rules produced in all the five runs). To avoid computational overhead of handling rules with low weight we decided to set a threshold of 0.5. Any rules with weight below 0.5 were discarded. The figure indicates two clear regions, one with rules of weight lower than 0.6 and one with rules of weight above 0.9. It is probably desirable to set a threshold of 0.9 for a larger training corpus to ensure that only good quality rules are used. We refrained from doing so with our small corpus to avoid ending up with too few rules.", "cite_spans": [ { "start": 275, "end": 291, "text": "(Voorhees, 2001)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 102, "end": 109, "text": "Table 1", "ref_id": null }, { "start": 835, "end": 843, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "6" }, { "text": "It is difficult to compare the above results with those of existing evaluations for various reasons. First of all, the system used in our evaluation did not have the usual modules of a full-blown QA system as described in Section 1. Second, the task is a simplification of a real QA in that the answer is known to exist in the answer candidate. 
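As a concrete point of reference for the accuracy and MRR figures reported above, the two measures can be computed as in the following small C++ sketch. This is only an illustration under the assumption, suggested by the discussion above, that accuracy counts a question as correct when its top-ranked answer is correct; the function and variable names are invented for the example.

#include <vector>

// ranks[i] is the rank (1-5) of the first correct answer for question i,
// or 0 if no correct answer was returned within the top five.
void evaluate(const std::vector<int>& ranks, double& accuracy, double& mrr) {
    if (ranks.empty()) { accuracy = 0.0; mrr = 0.0; return; }
    double correctAtTop = 0.0, reciprocalSum = 0.0;
    for (int rank : ranks) {
        if (rank == 1) correctAtTop += 1.0;              // answered correctly at rank 1
        if (rank >= 1 && rank <= 5) reciprocalSum += 1.0 / rank;
    }
    accuracy = correctAtTop / ranks.size();              // proportion answered correctly
    mrr = reciprocalSum / ranks.size();                  // mean reciprocal rank over ranks 1-5
}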
An area of further work is therefore to integrate this method into the AnswerFinder QA system and to evaluate the system in a task such as the QA track of TREC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accuracy", "sec_num": null }, { "text": "There have been several attempts to automatically learn the correspondence between a question and an answer. For example, (Echihabi et al., 2004) describes three approaches to using question/answer rules, two of which use machine learning methods. In one approach, the system uses a methodical series of web searches containing a question phrase and the answer to collect a corpus of substrings linking the question phrase with the answer. The other machine-learning method described by Echihabi et al. uses techniques based on statistical machine translation to automatically learn the \"translation\" between a question and an answer.", "cite_spans": [ { "start": 122, "end": 145, "text": "(Echihabi et al., 2004)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "A system that uses syntactic information in machine learning for QA has recently been published by Shen et al. (2005) . This system is based on the extraction of dependency chains connecting a question word with an answer. The information is combined with other statistical features and fed to a Maximum Entropy model that ranks the answer candidates. The use of dependency chains in this system is similar in principle to our use of graph paths in that it provides a way to connect a question with its answer. We have not yet had time, however, to study this system in detail.", "cite_spans": [ { "start": 100, "end": 118, "text": "Shen et al. (2005)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Another system that uses syntactic information to develop patterns is described by Bouma et al. (2005) . Their system uses the output of a dependency parser combined with a set of equivalence rules between sets of dependency relations that paraphrase each other. In contrast with our method, however, the rules were developed manually and the paper gives no indication of how such rules could be learnt automatically. A method to discover similar sets of dependencies has been described by Lin and Pantel (2001) , so in principle it is feasible to learn paraphrase rules and apply them to QA. However, the paraphrase rules described by these two systems do not attempt to connect a question with an answer, as we do.", "cite_spans": [ { "start": 83, "end": 102, "text": "Bouma et al. (2005)", "ref_id": "BIBREF0" }, { "start": 514, "end": 535, "text": "Lin and Pantel (2001)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "The only system using logical-form rules that we are aware of is AnswerFinder at the time of its participation in TREC 2004 (Moll\u00e1 and Gardiner, 2004) . The rules were based on AnswerFinder's minimal logical forms, and they were built manually. The system presented in this paper is a continuation of that research. 
Other than AnswerFinder, we are not aware of any QA system that attempts to learn rules based on logical forms.", "cite_spans": [ { "start": 120, "end": 146, "text": "(Moll\u00e1 and Gardiner, 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "There is some work on the use of conceptual graphs for information retrieval (Montesy-G\u00f3mez et al., 2000; Mishne, 2004) . However, we are not aware of any publication about the use of conceptual graphs (or any other form of graph representation) for question answering other than our own.", "cite_spans": [ { "start": 77, "end": 105, "text": "(Montesy-G\u00f3mez et al., 2000;", "ref_id": null }, { "start": 106, "end": 119, "text": "Mishne, 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We have introduced a methodology for the learning of graph patterns between questions and answers. Rules are learnt on the basis of two graph concepts: graph overlap, and paths between two subgraphs in a graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further Work", "sec_num": "8" }, { "text": "The techniques presented here use graph representations of the logical contents between questions and answer sentences. These techniques are being tested in AnswerFinder, a framework for the development of questionanswering techniques that is easily configurable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further Work", "sec_num": "8" }, { "text": "We believe that our method can generalise to any graph representation of questions and answer sentences. Further work will include the use of alternative graph representations, including the output of a dependency-based parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further Work", "sec_num": "8" }, { "text": "Finally, we plan to continue our evaluation of the method by integrating it into the An-swerFinder system and other QA systems to fully assess its potential.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further Work", "sec_num": "8" }, { "text": "http://trec.nist.gov 2 http://www.ai.mit.edu/projects/infolab/ 3 http://www.answerbus.com/index.shtml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.google.com 5 http://www-unix.mcs.anl.gov/AR/otter/ 6 http://wordnet.princeton.edu/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://trec.nist.gov 8 http://www.clef-campaign.org/ 9 http://www-nlpir.nist.gov/projects/duc/ 10 http://www.pascal-network.org/Challenges/RTE/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://alias-i.com/lingpipe/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Ken Litkowsky's patterns are available from the TREC website (http://trec.nist.gov).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research is funded by the Australian Research Council, ARC Discovery Grant no DP0450750.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Reasoning over dependency relations for qa", "authors": [ { "first": "Gosse", "middle": [], 
"last": "Bouma", "suffix": "" }, { "first": "Jori", "middle": [], "last": "Mur", "suffix": "" }, { "first": "Gertjan", "middle": [], "last": "Van Noord", "suffix": "" } ], "year": 2005, "venue": "Proc. IJCAI-05 Workshop on Knowledge and Reasoning for Answering Questions", "volume": "", "issue": "", "pages": "15--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gosse Bouma, Jori Mur, and Gertjan van No- ord. 2005. Reasoning over dependency rela- tions for qa. In Proc. IJCAI-05 Workshop on Knowledge and Reasoning for Answering Questions, pages 15-20.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Parser evaluation: a survey and a new proposal", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Sanfilippo", "suffix": "" } ], "year": 1998, "venue": "Proc. LREC98", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Carroll, Ted Briscoe, and Antonio Sanfil- ippo. 1998. Parser evaluation: a survey and a new proposal. In Proc. LREC98.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "How to select an answer string?", "authors": [ { "first": "Abdessamad", "middle": [], "last": "Echihabi", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Melz", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Ravichandran", "suffix": "" } ], "year": 2004, "venue": "Advances in Textual Question Answering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdessamad Echihabi, Ulf Hermjakob, Eduard Hovy, Daniel Marcu, Eric Melz, and Deepak Ravichandran. 2004. How to select an an- swer string? In Tomek Strzalkowski and Sanda Harabagiu, editors, Advances in Tex- tual Question Answering. Kluwer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Falcon: Boosting knowledge for answer engines", "authors": [ { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Pa\u015fca", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "R\u0203zvan", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "Roxana", "middle": [], "last": "G\u00eerju", "suffix": "" }, { "first": "Vasile", "middle": [], "last": "Rus", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Mor\u0203rescu", "suffix": "" } ], "year": 2000, "venue": "NIST Special Publication", "volume": "", "issue": "", "pages": "479--488", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanda Harabagiu, Dan Moldovan, Marius Pa\u015fca, Rada Mihalcea, Mihai Surdeanu, R\u0203zvan Bunescu, Roxana G\u00eerju, Vasile Rus, and Paul Mor\u0203rescu. 2000. Falcon: Boosting knowledge for answer engines. In Ellen M. Voorhees and Donna K. Harman, editors, Proc. TREC-9, number 500-249 in NIST Spe- cial Publication, pages 479-488. 
NIST.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Natural language question answering: The view from here", "authors": [ { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Gaizauskas", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "4", "pages": "275--300", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lynette Hirschman and Rob Gaizauskas. 2001. Natural language question answering: The view from here. Natural Language Engineer- ing, 7(4):275-300.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Discovery of inference rules for question-answering", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "4", "pages": "343--360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2001. Discov- ery of inference rules for question-answering. Natural Language Engineering, 7(4):343-360.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The logical analysis of plurals and mass terms: a lattice-theoretical approach", "authors": [ { "first": "Godehard", "middle": [], "last": "Link", "suffix": "" } ], "year": 1983, "venue": "Meaning, Use and Interpretation of Language", "volume": "", "issue": "", "pages": "250--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "Godehard Link. 1983. The logical analysis of plurals and mass terms: a lattice-theoretical approach. In Rainer Bauerle, Christoph Schwarze, and Arnim von Stechov, editors, Meaning, Use and Interpretation of Lan- guage, pages 250-209. de Gruyter, Berlin.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Source code retrieval using conceptual graphs", "authors": [ { "first": "Gilad", "middle": [], "last": "Mishne", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gilad Mishne. 2004. Source code retrieval us- ing conceptual graphs. Master's thesis, Uni- versity of Amsterdam.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Performance issues and error analysis in an open-domain question answering system", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Pas Ca", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" } ], "year": 2003, "venue": "ACM Transactions on Information Systems", "volume": "21", "issue": "2", "pages": "133--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Moldovan, Marius Pas ca, Sanda Harabagiu, and Mihai Surdeanu. 2003. Performance issues and error analysis in an open-domain question answering system. ACM Transactions on Information Systems, 21(2):133-154.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Answerfinder -question answering by combining lexical, syntactic and semantic information", "authors": [ { "first": "Diego", "middle": [], "last": "Moll\u00e1", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Gardiner", "suffix": "" } ], "year": 2004, "venue": "Proc. ALTW 2004", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Moll\u00e1 and Mary Gardiner. 2004. 
An- swerfinder -question answering by combining lexical, syntactic and semantic information. In Ash Asudeh, C\u00e9cile Paris, and Stephen Wan, editors, Proc. ALTW 2004, pages 9-16, Sydney, Australia. Macquarie University.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Aspectual Composition and Sentence Interpretation: a formal approach", "authors": [ { "first": "Diego", "middle": [], "last": "Moll\u00e1", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Moll\u00e1. 1997. Aspectual Composition and Sentence Interpretation: a formal approach. Ph.D. thesis, University of Edinburgh.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Information retrieval with conceptual graph matching", "authors": [ { "first": "Manuel", "middle": [], "last": "Montes-Y-G\u00f3mez", "suffix": "" }, { "first": "Aurelio", "middle": [], "last": "L\u00f3pez-L\u00f3pez", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gelbukh", "suffix": "" } ], "year": 2000, "venue": "Proc. DEXA-2000, number 1873 in Lecture Notes in Computer Science", "volume": "", "issue": "", "pages": "312--321", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Montes-y-G\u00f3mez, Aurelio L\u00f3pez-L\u00f3pez, and Alexander Gelbukh. 2000. Information retrieval with conceptual graph matching. In Proc. DEXA-2000, number 1873 in Lecture Notes in Computer Science, pages 312-321. Springer-Verlag.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Flexible comparison of conceptual graphs", "authors": [ { "first": "Manuel", "middle": [], "last": "Montes-Y-G\u00f3mez", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gelbukh", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Baeza-Yates", "suffix": "" } ], "year": 2001, "venue": "Proc. DEXA-2001, number 2113 in Lecture Notes in Computer Science", "volume": "", "issue": "", "pages": "102--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Montes-y-G\u00f3mez, Alexander Gelbukh, and Ricardo Baeza-Yates. 2001. Flexi- ble comparison of conceptual graphs. In Proc. DEXA-2001, number 2113 in Lecture Notes in Computer Science, pages 102-111. Springer-Verlag.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Exploiting paraphrases in a question answering system", "authors": [ { "first": "Fabio", "middle": [], "last": "Rinaldi", "suffix": "" }, { "first": "James", "middle": [], "last": "Dowdall", "suffix": "" }, { "first": "Kaarel", "middle": [], "last": "Kaljurand", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Hess", "suffix": "" }, { "first": "Diego", "middle": [], "last": "Moll\u00e1", "suffix": "" } ], "year": 2003, "venue": "Proc. Workshop in Paraphrasing at ACL2003", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabio Rinaldi, James Dowdall, Kaarel Kalju- rand, Michael Hess, and Diego Moll\u00e1. 2003. Exploiting paraphrases in a question answer- ing system. In Proc. Workshop in Paraphras- ing at ACL2003, Sapporo, Japan.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A linguistic comparison of constituency, dependency and link grammar", "authors": [ { "first": "Gerold", "middle": [], "last": "Schneider", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerold Schneider. 1998. 
A linguistic compar- ison of constituency, dependency and link grammar. Master's thesis, University of Zurich. Unpublished.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Exploring syntactic relation patterns for question answering", "authors": [ { "first": "Dan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "M", "middle": [], "last": "Geert-Jan", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Kruijff", "suffix": "" }, { "first": "", "middle": [], "last": "Klakow", "suffix": "" } ], "year": 2005, "venue": "Natural Language Processing IJCNLP 2005: Second International Joint Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Shen, Geert-Jan M. Kruijff, and Dietrich Klakow. 2005. Exploring syntactic relation patterns for question answering. In Robert Dale, Kam-Fai Wong, Jian Su, and Oi Yee Kwong, editors, Natural Language Processing IJCNLP 2005: Second International Joint Conference, Jeju Island, Korea, October 11- 13, 2005. Proceedings. Springer-Verlag.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Patterns of potential answer expression as clues to the right answers", "authors": [ { "first": "M", "middle": [ "M" ], "last": "Soubbotin", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. M. Soubbotin. 2001. Patterns of potential answer expression as clues to the right an- swers. In Ellen M. Voorhees and Donna K.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Proc. TREC 2001, number 500-250 in NIST Special Publication. NIST", "authors": [ { "first": "", "middle": [], "last": "Harman", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harman, editors, Proc. TREC 2001, number 500-250 in NIST Special Publication. NIST.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semantics of conceptual graphs", "authors": [ { "first": "John", "middle": [ "F" ], "last": "Sowa", "suffix": "" } ], "year": 1979, "venue": "Proc. ACL 1979", "volume": "", "issue": "", "pages": "39--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "John F. Sowa. 1979. Semantics of conceptual graphs. In Proc. ACL 1979, pages 39-44.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A non-projective dependency parser", "authors": [ { "first": "Pasi", "middle": [], "last": "Tapanainen", "suffix": "" }, { "first": "Timo", "middle": [], "last": "J\u00e4rvinen", "suffix": "" } ], "year": 1997, "venue": "Proc. ANLP-97. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pasi Tapanainen and Timo J\u00e4rvinen. 1997. A non-projective dependency parser. In Proc. ANLP-97. ACL.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The TREC question answering track", "authors": [ { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "4", "pages": "361--378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen M. Voorhees. 2001. The TREC question answering track. Natural Language Engineer- ing, 7(4):361-378.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Overview of the trec 2004 question answering track", "authors": [ { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 2004, "venue": "Proc. 
TREC 2004, number 500-261 in NIST Special Publication. NIST", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen M. Voorhees. 2004. Overview of the trec 2004 question answering track. In Ellen M. Voorhees and Lori P. Buckland, editors, Proc. TREC 2004, number 500-261 in NIST Special Publication. NIST.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Figure 1: Examples of logical graphs", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "Learning of graph rules rule in", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "Extract of the training corpus n = total number of sentences using the concept (or relation) i N = total number of sentences", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "Figure 6: Distribution of weights among the rules learnt", "num": null, "uris": null }, "TABREF2": { "type_str": "table", "num": null, "text": "= the graph of the question G s = the graph of the answer sentence G a = the graph of the exact answer FOR every overlap O between G q and G s FOR every path P between O and G a Build a rule R of the formR o = O R", "html": null, "content": "
FOR every question/answerSentence pair
G q
Q: Where was Peter born?
A: Peter's birthplace was Paris
The Rule (r o in regular lines, r p in dashed lines, r a in thick lines)
Figure 3: A logical graph rule
" } } } }