{ "paper_id": "U04-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:07:55.959256Z" }, "title": "Evaluation of a Query-biased Document Summarisation Approach for the Question Ansnwering Task", "authors": [ { "first": "Mingfang", "middle": [], "last": "Wu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ross", "middle": [], "last": "Wilkinson", "suffix": "", "affiliation": {}, "email": "" }, { "first": "C\u00e9cile", "middle": [], "last": "Paris", "suffix": "", "affiliation": {}, "email": "cecile.paris@csiro.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents an approach on a querybiased document summarisation and an evaluation of the use of such an approach for question answering tasks. We observed a significant difference in the user's performance by presenting a list of documents customized to the task type, compared with a generic document summarization approach. This indicates that paying attention to the task and the searcher interaction may provide substantial improvement in task performance.", "pdf_parse": { "paper_id": "U04-1005", "_pdf_hash": "", "abstract": [ { "text": "This paper presents an approach on a querybiased document summarisation and an evaluation of the use of such an approach for question answering tasks. We observed a significant difference in the user's performance by presenting a list of documents customized to the task type, compared with a generic document summarization approach. This indicates that paying attention to the task and the searcher interaction may provide substantial improvement in task performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "People are searching information to meet their information needs from their tasks at hand. However, most search engines interact with user in a \"one size fits all\" fashion and ignore the user's preferences, search context or the task context. The burden is then placed on the user to scan, navigate, and read the retrieved documents to identify what s/he wants. We believe that paying attention to the nature of the information task and the needs of the searcher may provide benefits beyond those available through more accurate matching. As Saracevic [9] pointed out, the key to the future of information systems and searching processes lies not only in increased sophistication of technology, but also in increased understanding of human involvement with information.", "cite_spans": [ { "start": 552, "end": 555, "text": "[9]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the study presented in this paper, we examine searchers' ability to carry out a question answering task [13] . Unlike the task of the non-interactive TREC question answer track [10] , where the question answering focuses on fact-based, short answer questions such as \"Who is the first prime minister of Australia\". We looked at the type of question answering task that is more complex than the task of finding a single fact. The answer to this type of questions would not generally be available in a single document, but would require facts to be extracted from several documents. For example, an Australian cattle farmer would like an information access system that could tell s/he \"which countries are the top ten importers of Australian beef?\". 
An ideal answer should consist of a list of country names together with corresponding beef import data. This answer could be synthesized from scattered information collected from various sources, such as a news article about Japanese meat imports and an analysis report on Australian beef in the European market.", "cite_spans": [ { "start": 107, "end": 111, "text": "[13]", "ref_id": "BIBREF13" }, { "start": 180, "end": 184, "text": "[10]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The successful completion of such a task requires an answer to be obtained, citing the relevant source documents. If we assume that we do not have an advanced language engine that can understand such questions and then synthesize answers to them, a searcher will be involved in the process, beyond simply initiating a query and reading a list of answers. Some of the elements that might lead to successful answering might include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 support for query formulation (and reformulation) \u2022 effective matching and ordering of candidate documents \u2022 delivery of a useful form of the list of the candidate documents \u2022 support for extraction of answers from documents \u2022 synthesis of the answer There has been quite a bit of study on how to support query formulation, and the bulk of IR research has been devoted to better matching. Research into question answering technology (for automatic approaches) or text editing (for manual approaches) is needed for the last two activities. In this work, we have concentrated on the task of delivering a useful form of the list of the candidate documents. The research question we investigated is: given a same list of retrieved documents, will the variation in document summary/surrogate improve searcher's performance on question answering task?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Under the evaluation framework of the TREC (Text REtreieval Conference) interactive track [7] , we conducted two experiments that compared two types of candidate lists in two experimental systems. One system (the control system) uses the document title and the first N words of a document as the document's summary, while the other system (the testing system) uses the document title and the best three \"answer chunks\" extracted from the documents as the document's summary. The second confirming experiment repeated the first experiment, but with different search engine, test collection and subjects. The purpose of the second experiment is to confirm the strong results from the first experiment and to test whether the methodology could be generalized to web data.", "cite_spans": [ { "start": 90, "end": 93, "text": "[7]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows: Section 2 discusses our motivation and approach. Section 3 describes the experimental setup, including experiment design and test collections. Section 4 presents the experiments' results and provides detailed analysis. 
Section 5 provides the conclusions we have drawn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our previous studies [12] , we investigated the use of clustering and classification methods to organize the retrieved documents, we found that while subjects could use the structured delivery format to locate groups of relevant documents, the subjects often either failed to identify a relevant document from the document summary or were unable to locate the answer component present within a relevant document.", "cite_spans": [ { "start": 24, "end": 28, "text": "[12]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation and approach", "sec_num": "2" }, { "text": "We hypothesize that one of the reasons for potential gains from structured delivery not being realized is that in our previous test systems the tools that were provided to differentiate the answer containing documents from non-answer containing documents were inadequate for the task of question answering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and approach", "sec_num": "2" }, { "text": "In our previous testing systems, a retrieved document is represented by its title. While a document's title may tell what the document is about, very often an answer component exists within a small chunk of the document, and this small chunk may not be related to the main theme of the document. For example, for the question \"Which was the last dynasty of China: Qing or Ming?\", the titles of the first two documents presented to a searcher are: \"Claim Record Sale For Porcelain Ming Vase\" and \"Chinese Dish for Calligraphy Brushes Brings Record Price\". The themes of the two documents are Ming vases and Chinese dishes respectively, but there are sentences in each document that mention the time span of the Ming Dynasty and of the Qing Dynasty. By reading only the titles, searchers miss a chance to easily and quickly determine the answer, even the answer components are in the top ranked documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and approach", "sec_num": "2" }, { "text": "In this work, we still use document retrieval, but focus on the surrogate or summary of the retrieved documents. Some experiments have evaluated the suitability of taking extracted paragraphs or sentences as a document summary [2] , [6] , [8] . The produced summary by these methods is purely based on individual document and basically a condensed version of a document -it requires the user less reading time to get to know the gist of the document. There are little studies that have shown whether the use of these summaries is suitable for the interactive question answering task.", "cite_spans": [ { "start": 227, "end": 230, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 233, "end": 236, "text": "[6]", "ref_id": "BIBREF6" }, { "start": 239, "end": 242, "text": "[8]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation and approach", "sec_num": "2" }, { "text": "In our approach, a document is summarized and represented by its title and the three best answerindicative sentences (AIS). The three best AIS are dynamically generated after each query search, based on the following criteria:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and approach", "sec_num": "2" }, { "text": "\u2022 An AIS should contain at least one query word. 
\u2022 The AIS are first ranked according to the number of unique query words contained in each AIS. If two AIS have the same number of unique query words, they will be ranked according to their order of appearance in the document. \u2022 Only the top three AIS are selected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and approach", "sec_num": "2" }, { "text": "Our hypothesis is that the title and answerindicative sentences should provide a better indication of whether a document might help answer a given question. This is because documents can easily be completely off the topic of interest to the searcher, but still be relevant because they contain a part of the answer to the question. Therefore, our experiment focused on the comparison and evaluation of two systems using different summaries. The control system First20 uses the title and the first twenty words as the document summary, and the test system AIS3 uses the title and best three answer-indicative sentences as the document summary. Performance will be evaluated in terms of searchers' abilities to locate answer components, searchers' subjective perceptions of the systems, and the efforts required by searchers to determine answers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and approach", "sec_num": "2" }, { "text": "The experimental design concerns three major factors: system, question, and searcher, with focus on the comparison of two experimental systems. Thus, we adopted a factorial, Latin-square experiment design. In this design, each searcher uses each system to search a block of questions; questions are rotated completely within two blocks. For an experiment involving two systems and eight questions, a block of sixteen searchers is needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental design", "sec_num": null }, { "text": "In each experiment, the two experimental systems use the same underlying search engine. The Experiment I used the MG [11] search engine, while the Experiment II used the Padre search engine [4] .", "cite_spans": [ { "start": 117, "end": 121, "text": "[11]", "ref_id": "BIBREF11" }, { "start": 190, "end": 193, "text": "[4]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "System description", "sec_num": null }, { "text": "In each experiment, the two experimental systems provide natural language querying only. For each query, both systems present a searcher with the summary of the top 100 retrieved documents in five consecutive pages, with each page containing 20 documents. Each system has a main window for showing these summary pages. A document reading window will pop up when a document title is clicked. If a searcher finds an answer component from the document reading window, s/he can click the \"Save Answer\" button in this window and a save window will pop up for the searcher to record the newly found answer component and modify previously saved answer components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System description", "sec_num": null }, { "text": "The difference between the two systems is the form and content of the result presented in the main windows. The main window of the control system (First20) is shown in Figure 1 . 
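Before turning to the AIS3 interface, it may help to make the selection of answer-indicative sentences concrete. The following minimal Python sketch implements the three criteria listed in Section 2 (keep sentences containing at least one query word, rank them by the number of unique query words with ties broken by order of appearance in the document, and take the top three). The function and variable names are ours and purely illustrative; this is a sketch under those assumptions, not the authors' implementation, and it presumes the document has already been segmented into sentences.

def select_ais(sentences, query_words, top_n=3):
    # Criterion 1: keep only sentences that contain at least one query word.
    query_words = {w.lower() for w in query_words}
    candidates = []
    for position, sentence in enumerate(sentences):
        unique_matches = len({t.lower() for t in sentence.split()} & query_words)
        if unique_matches > 0:
            candidates.append((unique_matches, position, sentence))
    # Criterion 2: rank by number of unique query words, ties broken by order of appearance.
    candidates.sort(key=lambda c: (-c[0], c[1]))
    # Criterion 3: select only the top three AIS.
    return [sentence for _, _, sentence in candidates[:top_n]]

In both experimental systems such sentences would be recomputed after each query, so the summary is biased towards the current query rather than being fixed per document.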
The main windows of the test systems (AIS3) are shown in Figure 2 and Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 176, "text": "Figure 1", "ref_id": null }, { "start": 236, "end": 244, "text": "Figure 2", "ref_id": null }, { "start": 249, "end": 257, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "System description", "sec_num": null }, { "text": "The AIS3 windows in each experiment are slightly different. In Experiment I (Figure 2 ), each answer-indicative sentence is linked to the content of the document and the sentence in the document is highlighted and brought to the top of the window. In Experiment II (Figure 3 ), we remove these links to make the interface closer to the interface of First20 and the three AIS truly the summary. There is a save icon beside each AIS (in Figure 2 ) or each document title (in Figure3) in AIS3, this icon has the same function as the Save Answer button in the document reading window. If a searcher finds a fact from the following three answer-indicative sentences, s/he can save the fact directly from this (summary) page by clicking the icon.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 85, "text": "(Figure 2", "ref_id": null }, { "start": 265, "end": 274, "text": "(Figure 3", "ref_id": null }, { "start": 435, "end": 443, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "System description", "sec_num": null }, { "text": "The document collection used by Experiment I contains all newswire articles. Experiment II used a partial collection from the main web track (W10G) [1] . This collection is a snapshot of the WWW; all documents in the collection are web pages. To concentrate on document summaries instead of browsing, we removed all links and images inside a web page -for the purpose of this experiment; each web page was treated as a standalone document. ", "cite_spans": [ { "start": 148, "end": 151, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Document collection", "sec_num": null }, { "text": "There are two types of questions in the experiments. The Type 1 questions are of the form <>; for example, \"Name four films in which Orson Welles appeared.\". The Type 2 questions are of the form <>; for example, \"Which was the last dynasty of China: Qing or Ming?\". For the Type 1 questions (question 1-4) , a complete answer consists of n answer components, plus a list of supporting documents. For the Type 2 questions (question 5-8), two facts are usually needed to make the comparison, plus supporting documents. Experiment I used a set of eight questions developed by TREC9i participants. To prepare a set of questions for Experiment II, we started with the eight questions from TREC9i. We then removed those questions that could not be fully answered from the document collection used in Experiment II. Additional questions were added either by modifying questions from the main web track, or were developed by an independent volunteer.", "cite_spans": [], "ref_spans": [ { "start": 331, "end": 345, "text": "(question 1-4)", "ref_id": null } ], "eq_spans": [], "section": "Questions", "sec_num": null }, { "text": "A searcher's performance is evaluated in terms of the success rate. For each search question, the saved answers and their supporting documents were collected for judging. 
There are two levels of judgement: one is whether the searcher finds the required number of answer components (for questions of Type 1) or whether the found facts are enough to infer the answer (for questions of Type 2); another is whether the submitted facts (or answers) are supported by the saved documents. For the success rate, a search session is given a score between 0 and 1: each correctly identified fact supported by a saved document contributes a score of 1/n to the search score, where n is the number of required answer components (or facts) for the question", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": null }, { "text": "Generally, we followed the procedure recommended by the TREC interactive track [7] . During the experiments, the subjects performed the following tasks:", "cite_spans": [ { "start": 79, "end": 82, "text": "[7]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental procedure", "sec_num": null }, { "text": "\u2022 Pre-search preparation: consisting of introduction to the experiment, answering a pre-search questionnaire, demonstration of the main functions of each experimental system, and hands-on practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental procedure", "sec_num": null }, { "text": "\u2022 Searching session: each subject attempts four questions on each of the systems, answering a pre-search questionnaire and a post-search questionnaire per question, and a post-system questionnaire per system. Subjects have a maximum of five minutes per question search. \u2022 Answering an exit questionnaire.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental procedure", "sec_num": null }, { "text": "All searchers were recruited via an internal university newsgroup: all were students from the department of computer science. The average age of searchers was 23, with 4.7 years of online search experience.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjects", "sec_num": null }, { "text": "Subjects were asked about their familiarity about each question. Overall, subjects claimed low familiarity with all questions (all under 3 on a 5point Likert scale). In experiment I, the average familiarity of questions from each system is 1.5 (AIS) and 1.58 (First20). In experiment II, the scores are 2.1 (AIS) and 2.0 (First20). No significant correlations are found between familiarity and success", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjects", "sec_num": null }, { "text": "To determine the success of a system at supporting a user performing an information task, it is important to know how well the task is done, how much effort is required, and whether an information system is perceived as helpful. We use independent assessment for performance, system logging for effort, and questionnaires for perception.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXPERIMENTAL RESULTS", "sec_num": "4" }, { "text": "We aimed to determine whether searchers could answer questions more successfully with the First20 system or the AIS3 system. Our results show that searchers using AIS3 had a higher success rate than those using First20 for all questions except for Question 5. 
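To make the scoring just described concrete, the success rate from the Evaluation subsection can be sketched in Python as follows; the fact objects and their is_correct and is_supported fields are hypothetical stand-ins for the assessors' two-level judgement, and this is an illustration rather than the scoring script actually used in the study.

def success_rate(saved_facts, n_required):
    # Each correctly identified fact that is supported by a saved document
    # contributes 1/n to the session score, where n is the number of required
    # answer components for the question; the score is capped at 1.
    supported_correct = sum(1 for fact in saved_facts
                            if fact.is_correct and fact.is_supported)
    return min(supported_correct, n_required) / n_required

For example, on a Type 1 question asking for four answer components, a searcher who saves two correct, document-supported facts would score 0.5.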
Overall, by using AIS3, searchers' performance is improved by 38%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment I", "sec_num": null }, { "text": "In this experiment, the three variables to consider are the question, the searcher, and the system. Although the Latin-square design should minimize the effect of question and searcher, it is possible that question or searcher effects may still occur. An ANalysis Of Variance (ANOVA) model was used to test the significance of individual factor and the interactions between the factors. Here, the success rate is the dependent variable, and system, question, and searcher are three independent variables. A major advantage of using the ANOVA model is that the effect of each independent variable as well as their interactions are analyzed, whereas for the t-test, we can compare only one independent variable under different treatments. Table 1 shows the result of the three-way ANOVA test on success rates. It tells us that the system effect and question effect are significant, but that the searcher effect and the interaction effects are not. ", "cite_spans": [], "ref_spans": [ { "start": 737, "end": 744, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiment I", "sec_num": null }, { "text": "Experiment II was aimed to confirm the strong result from the experiment I. We planned to repeat the above experiment with a quite different document collection, another set of questions, and different searchers. However, we found that the technique for selecting AIS used in Experiment I could not be applied directly to web documents. Unlike news articles that have coherent text with a well-define discourse structure, web pages are often a chaotic jumble of phrases, links, graphics, and formatting commands. On the other hand, compared with news articles, web documents have more structural information. Although their markup is more for presentation effect than to indicate their logical structure, some information between two tags (for example:
  • \u2026
  • ) can be regarded as a semantically coherent unit and treated as a sentence. Therefore, in addition to the techniques used in Experiment I to segment documents into sentences, we also used some document mark-up as \"sentence\" indicators. Table 2 shows the ANOVA test on the experiment II data. The table shows results similar to those in Table 1 : only the system and the question have significant effect on the success rate. Overall, AIS3 leads to a performance improvement of 34% over First20.", "cite_spans": [], "ref_spans": [ { "start": 1000, "end": 1007, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1100, "end": 1107, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiment II", "sec_num": null }, { "text": "Based on the searchers' performance in both experiments, our hypothesis that the AIS is a better form of document summary than the first N words for the question answering task is supported. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment II", "sec_num": null }, { "text": "The effort of a searcher in determining answers to a question can be measured by the number of queries sent, the number of summary pages viewed, and the number of documents read.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searcher effort", "sec_num": "4.2" }, { "text": "On average, searchers sent fewer queries, viewed fewer summary pages, and read fewer documents from AIS3 than from First20 in both experiments (refer to Table 3 ).", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 160, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Searcher effort", "sec_num": "4.2" }, { "text": "We note that searchers generally did not use more than one summary page per query, nor did they need to read many documents to carry out the task. Considering the summary page of AIS3 displays more text than that in First20, we may tentatively conclude that searchers read similar amount of text, but AIS3 provides higher quality information than the First20 does, since we know searcher performance is better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searcher effort", "sec_num": "4.2" }, { "text": "The perception of searchers of the systems is captured by three questions in exit questionnaire. The three questions are", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searcher preference", "sec_num": null }, { "text": "\u2022 Q1: Which of the two systems did you find easier to learn to use? \u2022 Q2: Which of the two systems did you find easier to use? \u2022 Q3: Which of the two systems did you like the best overall?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searcher preference", "sec_num": null }, { "text": "The distribution of the searchers' choices is shown in Table 4 . Combining the results from the two experiments' questionnaires, for question 1, 15% of subjects selected First20, while 56% of subjects selected AIS3; for question 2, 19% of subjects selected First20, while 71% of subjects selected AIS3; for question 3, 22% of subjects preferred First20, while 75% preferred AIS3.", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 62, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Searcher preference", "sec_num": null }, { "text": "In this paper, we report two user studies on interactive question answering task. 
By constructing a delivery interface that takes into account the nature of the task, we saw that searchers:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 issued fewer queries \u2022 read fewer documents \u2022 found more answers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We conducted two experiments that would allow us to determine searcher performance, searcher effort and searcher preference. Our results show that searchers' performance when using an AIS3 system is improved over using a First20 system, based on objective assessment; this result is consistent in both experiments. The performance difference between two experimental systems is statistically significant. The data suggests that searchers using AIS3 require less effort, although cognitive load experiments are required to confirm this. Finally, AIS3 is preferred by most searchers. Thus, the experiments support our hypothesis that AIS3 is a better indication of document suitability than First20, for the question answering task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Different search tasks may require different delivery methods. For example: the clustering of retrieved documents can be used for the task of finding relevant documents [5] , and the classification of retrieved documents can be used for the purposing of browsing. However, for the task of question answering, we found that none of these delivery methods performed better than a ranked list [12] . The experiments presented in this paper indicate that a relatively simple document summary can significantly improve the searcher's performance in question answering task.", "cite_spans": [ { "start": 169, "end": 172, "text": "[5]", "ref_id": "BIBREF5" }, { "start": 390, "end": 394, "text": "[12]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Engineering a Multi-purpose Test Collection for Web Retrieval Experiments. Information Processing and Management", "authors": [ { "first": "P", "middle": [], "last": "Bailey", "suffix": "" }, { "first": "N", "middle": [], "last": "Craswell", "suffix": "" }, { "first": "D", "middle": [], "last": "Hawking", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bailey P., Craswell N. and Hawking D. Engineering a Multi-purpose Test Collection for Web Retrieval Experiments. Information Processing and Management. 2001.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "OCELOT: A System for summarizing web pages", "authors": [ { "first": "A", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "V", "middle": [ "O" ], "last": "Mittal", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 23 rd ACM SIGIR Conference", "volume": "", "issue": "", "pages": "144--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berger A. L. and Mittal V. O. OCELOT: A System for summarizing web pages. In Proceedings of the 23 rd ACM SIGIR Conference. July 24-28, 2000, Athens, Greece (pp. 
144-151).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Proceedings of the Ninth Text Retrieval Conference (TREC-9", "authors": [], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "366--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Experiments. In Proceedings of the Ninth Text Retrieval Conference (TREC-9) (pp. 366-379). November 2000, Gaithersberg, MD, USA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "ACSys TREC-8 Experiments", "authors": [ { "first": "D", "middle": [], "last": "Hawking", "suffix": "" }, { "first": "N", "middle": [], "last": "Craswell", "suffix": "" }, { "first": "P", "middle": [], "last": "Thistlewaite", "suffix": "" } ], "year": 1999, "venue": "Proceeding of Seventh Text Retrieval Conference (TREC-8)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hawking D., Craswell N. and Thistlewaite P. ACSys TREC-8 Experiments. In Proceeding of Seventh Text Retrieval Conference (TREC-8), November 1999, Gaithersburg, MD, USA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Reexamining the cluster hypothesis: Scatter/gather on retrieval results", "authors": [ { "first": "M", "middle": [ "A" ], "last": "Hearst", "suffix": "" }, { "first": "J", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 19 th ACM SIGIR conference", "volume": "", "issue": "", "pages": "76--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hearst M. A. and Pedersen J. O. Reexamining the cluster hypothesis: Scatter/gather on retrieval results. In Proceedings of the 19 th ACM SIGIR conference, August 1996, Zurich, Switzerland (pp. 76-84).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Trainable Document Summarizer", "authors": [ { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" }, { "first": "J", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "F", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the ACM SIGIR conference", "volume": "", "issue": "", "pages": "68--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kupiec J., Pedersen J. and Chen F. A Trainable Document Summarizer. In Proceedings of the ACM SIGIR conference, July 1995, New York, USA (pp. 68- 73).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "TREC-9 Interactive Track -Basics", "authors": [ { "first": "P", "middle": [], "last": "Over", "suffix": "" } ], "year": 2000, "venue": "Note papers of TREC-9", "volume": "", "issue": "", "pages": "721--728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Over P. TREC-9 Interactive Track -Basics. In Note papers of TREC-9. November 2000, Gaithersberg, MD, USA (pp 721-728).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic Text Structure and Summarization", "authors": [ { "first": "G", "middle": [], "last": "Salton", "suffix": "" }, { "first": "A", "middle": [], "last": "Singhal", "suffix": "" }, { "first": "M", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "C", "middle": [], "last": "Buckley", "suffix": "" } ], "year": 1997, "venue": "Information Processing and Management", "volume": "33", "issue": "2", "pages": "193--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salton G., Singhal A., Mitra M. and Buckley C. Automatic Text Structure and Summarization. Information Processing and Management. Vol. 
33 (2) 193-207, 1997.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Study of Information Seeking and Retrieving: I. Background and Methodology", "authors": [ { "first": "T", "middle": [], "last": "Saracevic", "suffix": "" }, { "first": "P", "middle": [], "last": "Kentor", "suffix": "" } ], "year": 1988, "venue": "Journal of the American Society for Information Science", "volume": "39", "issue": "3", "pages": "161--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saracevic T. and Kentor P. A Study of Information Seeking and Retrieving: I. Background and Methodology. Journal of the American Society for Information Science. Vol. 39(3), 161-176. 1988.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The TREC-8 Question Answering Track Report", "authors": [ { "first": "E", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 1999, "venue": "proceedings of the Nipth Text Retrieval Conference (TREC-8). Novemeber", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Voorhees E. M. The TREC-8 Question Answering Track Report. In proceedings of the Nipth Text Retrieval Conference (TREC-8). Novemeber 1999, Gaithersberg, MD, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Managing Gigabytes: Compressing and indexing documents and images. Van Nostrand Reinhold", "authors": [ { "first": "I", "middle": [], "last": "Witten", "suffix": "" }, { "first": "A", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "T", "middle": [], "last": "Bell", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Witten I., Moffat A. and Bell T. Managing Gigabytes: Compressing and indexing documents and images. Van Nostrand Reinhold. 1994.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Using clustering and classification approaches in interactive retrieval", "authors": [ { "first": "M", "middle": [], "last": "Wu", "suffix": "" }, { "first": "M", "middle": [], "last": "Fuller", "suffix": "" }, { "first": "R", "middle": [], "last": "Wilkinson", "suffix": "" } ], "year": 2001, "venue": "Information Processing & Management", "volume": "37", "issue": "", "pages": "459--484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu M., Fuller M. and Wilkinson R. Using clustering and classification approaches in interactive retrieval. Information Processing & Management, 37 (2001) 459- 484.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Searcher performance in question answering", "authors": [ { "first": "M", "middle": [], "last": "Wu", "suffix": "" }, { "first": "M", "middle": [], "last": "Fuller", "suffix": "" }, { "first": "R", "middle": [], "last": "Wilkinson", "suffix": "" } ], "year": 2001, "venue": "Proceedings of 24th Annual ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "375--381", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, M., Fuller, M. and Wilkinson, R. Searcher performance in question answering. In Proceedings of 24th Annual ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 
375-381, 2001.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "The interface of the First20 The interface of the AIS3 system in Experiment I The interface of the AIS3 system in Experiment II", "type_str": "figure" }, "TABREF0": { "text": "", "html": null, "type_str": "table", "content": "
Source | p-value
System | 0.041
Question | 0.000
Searcher | 0.195
System * Question | 0.414
Question * Searcher | 0.691
System * Searcher | 0.050
    ", "num": null }, "TABREF1": { "text": "", "html": null, "type_str": "table", "content": "
Source | p-value
System | 0.020
Question | 0.018
Searcher | 0.547
System * Question | 0.248
Question * Searcher | 0.808
System * Searcher | 0.525
    ", "num": null }, "TABREF2": { "text": "", "html": null, "type_str": "table", "content": "
Measure (Mean (SD)) | Experiment I: First20 | Experiment I: AIS3 | Experiment II: First20 | Experiment II: AIS3
No. of unique queries sent | 2.14 (0.56) | 1.73 (0.57) | 2.0 (1.2) | 1.7 (1.0)
No. of surrogate pages viewed | 2.80 (1.64) | 1.98 (0.97) | 2.4 (1.4) | 2.0 (1.3)
No. of documents read | 3.42 (1.22) | 2.66 (0.77) | 4.2 (2.8) | 3.2 (2.7)
    ", "num": null }, "TABREF3": { "text": "", "html": null, "type_str": "table", "content": "
Preference | Experiment I: Q1 | Experiment I: Q2 | Experiment I: Q3 | Experiment II: Q1 | Experiment II: Q2 | Experiment II: Q3
First20 | 3 | 4 | 5 | 2 | 2 | 2
AIS3 | 8 | 11 | 11 | 10 | 12 | 13
No difference | 5 | 1 | 0 | 4 | 2 | 1
    ", "num": null } } } }