{ "paper_id": "J05-3007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:50:51.934888Z" }, "title": "New Directions in Question Answering", "authors": [ { "first": "Mark", "middle": [ "T" ], "last": "Maybury", "suffix": "", "affiliation": { "laboratory": "", "institution": "Marius Pa\u015fca Google Inc", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "J05-3007", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "With goals as intuitive and desirable as they are challenging, the field of automated question answering has generated growing interest in the past few years. The increased momentum is apparent in the spread of research groups working on the topic, the number of relevant Ph.D. theses, papers that are now a common occurrence in the proceedings of top-rated conferences related to the topic, the scale of commercial endeavors, and the steady financial commitments of various agencies to governmentsponsored evaluations and research programs. The last of these constitutes the main justification of New Directions in Question Answering, a relatively recent contribution to the field. As shown in Table 1 , the book suggests new milestones and directions of research which concurrently increase: r the requirements imposed on the system (e.g., time-sensitive search and detection of obscure relations) (Chapter 2); r scope (e.g., open or restricted application domains) (Chapter 6); r complexity (e.g., fact-seeking vs. exploratory questions with opinion answers) (Chapter 7); r the granularity of information sources (e.g., databases) (Chapter 9) versus unstructured text (Chapter 17) versus full-fledged knowledge bases (Chapter 19); r generally, the expectations of the users of the system (e.g., regular users versus experts), including intelligence analysts.", "cite_spans": [], "ref_spans": [ { "start": 695, "end": 702, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Even if considered in isolation rather than combined together, the majority of these goals are already very far reaching. Most readers will have an immediate intuition as to how difficult it would be in practice to answer, with reliable consistency, questions of seemingly unbounded complexity such as Has there been any change in the official opinion from China toward the 2001 annual U.S. report on human rights since its release? (page 83), How has pollution in the Black Sea affected the fishing industry, and what are the sources of this pollution? (page 134), or What part did ITT (International Telephone and Telegraph) and Anaconda Copper play in the Chilean 1970 election? (page 210) . Even if the question complexity is limited to the factual type, current fact-seeking question-answering technology has only a moderate impact on global-scale information-seeking environments such as Web search. The fact that quite a few chapters of the book are motivated as departures from fact-seeking question answering should ensure that the book will attract the attention of intrigued and avid readers.", "cite_spans": [ { "start": 583, "end": 626, "text": "ITT (International Telephone and Telegraph)", "ref_id": null }, { "start": 631, "end": 692, "text": "Anaconda Copper play in the Chilean 1970 election? 
], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Most of the 21 chapters of the book are extended versions of papers from a 2003 AAAI symposium on question answering (Maybury 2003). As a strength, the chapters provide a multitude of refreshing, often orthogonal perspectives on question answering. The chapters are mostly independent units, with few cross-references. It is therefore possible to focus first on the chapters that seem most relevant and read the other chapters later. In fact, out-of-order perusal may reduce the occasional overlap in the sections that motivate various chapters individually, as those sections often share a common theme: the desire to move beyond fact-seeking question answering.", "cite_spans": [ { "start": 117, "end": 131, "text": "(Maybury 2003)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It is impractical to describe all chapters individually. For an audience interested in more-practical methods that are accompanied by thorough evaluations, I recommend Chapters 5, 17, and 18. In particular, Chapter 5 (Ralph Weischedel, Jinxi Xu, and Ana Licuanan) takes a look at answering biographical questions. Since such questions often take very simple forms (e.g., Who is Edgar Froese?), the challenge relative to other classes of questions does not lie in question analysis. Instead, the inherent difficulty of biographical questions, and of the related class of definitional questions (e.g., What is a metronome?), lies in selecting and assembling an answer from text fragments that are inherently scattered across multiple documents. The chapter combines clear explanations of the method and system architecture with a nice walk-through example that shows the information flow among different stages of processing of a sample question. Most notably, the evaluation section more than makes up for the relatively small size of the test set (fewer than 30 questions) by analyzing the system output from different angles and pursuing both a subjective and an automated evaluation of the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Chapter 17 (Harris Wu, Dragomir Radev, and Weiguo Fan) is among the few that consider fact-seeking questions. Following a pragmatic approach, the chapter introduces a series of experiments and comparative evaluations of a task that is more relaxed than those that motivate other chapters and yet has high potential impact in practice. More precisely, the authors effectively address the problem of extracting more-accurate document snippets or summaries in response to users' questions, in addition to the output returned by Web search engines. Chapter 18 (John Prager, Jennifer Chu-Carroll, and Krzysztof Czuba) is one of my favorites. It is a very effective and enlightening tour through a collection of different answer extraction techniques embedded in a single system, which could also be seen as a meta-question-answering system.
The various extraction techniques (statistical, pattern-based, definitional, dossier-based) are often radically different, yet complement one another well, resulting in one of the most modular architectures for question answering described to date.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If the main reason to approach the book is an interest in exploratory work and attempts to formalize various problems related to question answering, without a focus on complete evaluation, Chapters 2, 7, 8, and 13 offer an enjoyable read. Chapter 2 (Eric Nyberg et al.) aims to capture the requirements of advanced question answering and their impact on system design. The challenges facing the push toward question-answering systems of increased complexity, especially challenges of practicability and scalability, become apparent to the attentive reader after reading the chapter. Indeed, such issues become important for any system that would actually attempt to perform AI-style planning in a broad domain (Section 2.3.2). Similarly, it may be very challenging to find common linguistic representations (Section 2.3.1) to use across highly modular systems for encoding internal information, as the information sources themselves can vary widely, from unstructured text at one end of the spectrum to full-blown knowledge bases at the other. Chapter 7 (Claire Cardie et al.) describes a representation scheme for annotating opinions, as opposed to facts, as they occur in textual documents. The intuition is that identifying and extracting opinions automatically from text would be essential in any attempt to answer questions with multiple possible perspectives, such as What is the general opinion from the press on the recent U2 world tour? The proposed annotation scheme and its motivations are described in detail, illustrated with a few examples, and shown to provide promising interannotator agreement scores despite the difficulty of the task. Chapter 8 (James Pustejovsky et al.) provides an overview of an annotation standard for representing events and temporal expressions as they occur in text. The representation language is the product of significant research efforts and numerous iterative improvements. The current version of the language was fueled by a series of government-sponsored workshops dedicated to the topic of temporal awareness in the context of question answering (Pustejovsky 2002). In Chapter 13 (Marc Light et al.), an empirical analysis of a corpus of questions enables the authors to identify examples of reuse scenarios, in which future questions could be answered better by using information previously available to the system (e.g., in the form of previously submitted questions or answers already returned to users). The authors acknowledge that some of the proposed categories of reuse are very difficult to implement in working system modules.", "cite_spans": [ { "start": 2105, "end": 2123, "text": "(Pustejovsky 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Chapters 19 (Richard Waldinger et al.), 20 (Farah Benamara and Patrick Saint-Dizier), and 21 (Deborah McGuinness and Paulo Pinheiro da Silva) delve into knowledge-based question answering and supporting inferential processes for verifying candidate answers and providing justifications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "
For the audience looking for an insider's view into existing question-answering systems, Chapters 14 (Noriko Tomuro and Steven Lytinen) and 16 (Boris Katz et al.) will be particularly interesting. Rather than proposing any new directions, they share some of the practical lessons learned while providing users with answers from FAQ files (Chapter 14) and from the Web viewed as a virtual database of facts (Chapter 16).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Some of the advanced methods proposed in the book are likely to become relatively more language-dependent, as they require larger and more-complex resources of various kinds. Given the variety of topics already covered by its chapters, the book limits the scope of the discussion to questions and answer sources in English. Many lessons learned through experiments on question answering in different languages can be found in Chen and Lin (2002), Kando and Ishikawa (2004), and Peters and Borri (2004).", "cite_spans": [ { "start": 426, "end": 445, "text": "Chen and Lin (2002)", "ref_id": "BIBREF0" }, { "start": 448, "end": 473, "text": "Kando and Ishikawa (2004)", "ref_id": null }, { "start": 480, "end": 503, "text": "Peters and Borri (2004)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Question answering is situated at the confluence of a large number of related areas: information retrieval (Gaizauskas, Hepple, and Greenwood 2004), natural language processing (Ravin, Prager, and Harabagiu 2001; de Rijke and Webber 2003), information extraction, and knowledge representation and reasoning (Harabagiu and Chaudhri 2002), to name only a few. Overall, the book represents a useful reference for readers interested in possible future developments in the field. There are a few scattered typos and very rare inconsistencies, including a paragraph on page 173 that somehow slipped into the final manuscript while still marked as To review. It will be interesting to compare the present and near-future progress in question answering against the aggressive milestones from the proposal in the first chapter of the book.", "cite_spans": [ { "start": 177, "end": 212, "text": "(Ravin, Prager, and Harabagiu 2001;", "ref_id": "BIBREF3" }, { "start": 213, "end": 238, "text": "de Rijke and Webber 2003)", "ref_id": null }, { "start": 308, "end": 337, "text": "(Harabagiu and Chaudhri 2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "de Rijke, Maarten and Bonnie Webber, editors. 2003. Proceedings of the Workshop on Natural Language Processing for Question Answering at EACL-03, Budapest, Hungary, April. Gaizauskas, Robert, Mark Hepple, and Mark A. Greenwood, editors. 2004. Proceedings of the SIGIR Workshop on Information Retrieval for Question Answering (IR4QA), Sheffield, UK, July. Kando, Noriko and Haruko Ishikawa, editors. 2004. Working Notes of the Fourth NTCIR Workshop Meeting (NTCIR-4), Tokyo, Japan, June. National Institute of Informatics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "References", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Proceedings of the Workshop on Multilingual Summarization and Question Answering at COLING-02", "authors": [ { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Hsin-Hsi and Chin-Yew Lin, editors. 2002.
Proceedings of the Workshop on Multilingual Summarization and Question Answering at COLING-02, Taipei, Taiwan, August.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Proceedings of the AAAI Spring Symposium on New Directions in Question Answering", "authors": [], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maybury, Mark, editor. 2003. Proceedings of the AAAI Spring Symposium on New Directions in Question Answering, Stanford, CA, March.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Working Notes of the Fifth Cross-Language Evaluation Forum (CLEF-04)", "authors": [ { "first": "Carol", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Borri", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peters, Carol and Francesca Borri, editors. 2004. Working Notes of the Fifth Cross-Language Evaluation Forum (CLEF-04), Bath, UK, September.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Proceedings of the Workshop on Open-Domain Question Answering at ACL-01", "authors": [ { "first": "Yael", "middle": [], "last": "Ravin", "suffix": "" }, { "first": "John", "middle": [], "last": "Prager", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ravin, Yael, John Prager, and Sanda Harabagiu, editors. 2001. Proceedings of the Workshop on Open-Domain Question Answering at ACL-01, Toulouse, France, July.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Marius Pa\u015fca is a senior research scientist at Google Inc. He earned a Ph.D. in computer science from Southern Methodist University in 2001, with a thesis on open-domain question answering from large text collections. His current research interests include natural language processing and text mining for information retrieval", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marius Pa\u015fca is a senior research scientist at Google Inc. He earned a Ph.D. in computer science from Southern Methodist University in 2001, with a thesis on open-domain question answering from large text collections. His current research interests include natural language processing and text mining for information retrieval. His address is Google Inc, 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Final Report of the Workshop on TERQAS: Time and Event Recognition in Question Answering Systems", "authors": [ { "first": "James", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pustejovsky, James, editor. 2002. Final Report of the Workshop on TERQAS: Time and Event Recognition in Question Answering Systems, Bedford, MA, January-July. ARDA.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "num": null, "content": "
1 Question Answering: An Introduction (Mark Maybury)
2 Software Architectures for Advanced QA (Eric Nyberg, John Burger, Scott Mardis, and David Ferrucci)
3 Bringing Commercial Question Answering to the Web (Brian Ulicny)
4 Answering Definitional Questions: A Hybrid Approach (Sasha Blair-Goldensohn, Kathleen R. McKeown, and Andrew Hazen Schlaikjer)
5 A Hybrid Approach to Answering Biographical Questions (Ralph Weischedel, Jinxi Xu, and Ana Licuanan)
6 Question Answering in Terminology-Rich Technical Domains (Fabio Rinaldi, Michael Hess, James Dowdall, Diego Moll\u00e1, and Rolf Schwitter)
7 Low-Level Annotations and Summary Representations of Opinions for Multiperspective QA (Claire Cardie, Janyce M. Wiebe, Theresa Wilson, and Diane J. Litman)
...
14 ... (Noriko Tomuro and Steven Lytinen)
15 Holistic Query Expansion Using Graphical Models (Daniel Mahler)
16 Viewing the Web as a Virtual Database for Question Answering (Boris Katz, Sue Felshin, Jimmy Lin, and Gregory Marton)
17 Toward Answer-Focused Summarization Using Search Engines (Harris Wu, Dragomir R. Radev, and Weiguo Fan)
...
", "html": null, "text": "List of chapters in New Directions in Question Answering." } } } }