{ "paper_id": "U07-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:08:55.731686Z" }, "title": "Exploring Abbreviation Expansion for Genomic Information Retrieval *", "authors": [ { "first": "Nicola", "middle": [], "last": "Stokes", "suffix": "", "affiliation": { "laboratory": "Victoria Research Laboratory", "institution": "The University of Melbourne", "location": { "postCode": "3010", "settlement": "Victoria", "country": "Australia" } }, "email": "nstokes@csse.unimelb.edu.au" }, { "first": "Yi", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Victoria Research Laboratory", "institution": "The University of Melbourne", "location": { "postCode": "3010", "settlement": "Victoria", "country": "Australia" } }, "email": "" }, { "first": "Lawrence", "middle": [], "last": "Cavedon", "suffix": "", "affiliation": { "laboratory": "Victoria Research Laboratory", "institution": "The University of Melbourne", "location": { "postCode": "3010", "settlement": "Victoria", "country": "Australia" } }, "email": "lcavedon@csse.unimelb.edu.au" }, { "first": "Justin", "middle": [], "last": "Zobel", "suffix": "", "affiliation": { "laboratory": "Victoria Research Laboratory", "institution": "The University of Melbourne", "location": { "postCode": "3010", "settlement": "Victoria", "country": "Australia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Abbreviations are commonly found instances of synonymy in Biomedical journal papers. Information retrieval systems that index paragraphs rather than full-text articles are more susceptible to term variation of this kind, since abbreviations are typically only defined once at the beginning of the text. One solution to this problem is to expand the user query automatically with all possible abbreviation instances for each query term. In this paper, we compare the effectiveness of two abbreviation expansion techniques on the TREC 2006 Genomics Track queries and collection. Our results show that for highly ambiguous abbreviations the query collocation effect isn't strong enough to deter the retrieval of erroneous passages. We conclude that full-text abbreviation resolution prior to passage indexing is the most appropriate approach to this problem. * National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology, and the Arts and the Australian Research Council through Backing Australia's Ability and the ICT Research Centre of Excellence programs.", "pdf_parse": { "paper_id": "U07-1015", "_pdf_hash": "", "abstract": [ { "text": "Abbreviations are commonly found instances of synonymy in Biomedical journal papers. Information retrieval systems that index paragraphs rather than full-text articles are more susceptible to term variation of this kind, since abbreviations are typically only defined once at the beginning of the text. One solution to this problem is to expand the user query automatically with all possible abbreviation instances for each query term. In this paper, we compare the effectiveness of two abbreviation expansion techniques on the TREC 2006 Genomics Track queries and collection. Our results show that for highly ambiguous abbreviations the query collocation effect isn't strong enough to deter the retrieval of erroneous passages. We conclude that full-text abbreviation resolution prior to passage indexing is the most appropriate approach to this problem. 
* National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology, and the Arts and the Australian Research Council through Backing Australia's Ability and the ICT Research Centre of Excellence programs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Query expansion is a well-known technique used in Information Retrieval (IR) to address the problem of lexical variation between the query and semantically related terms in relevant documents (Efthimiadis, 1996). While query expansion methods such as relevance feedback (Ruthven and Lalmas, 2003) have, on average, been shown to improve retrieval performance, there are many examples where they significantly degrade retrieval effectiveness. However, in terminology-rich domains where word sense distributions are heavily skewed, query expansion has been shown to have a more consistently positive effect on retrieval performance. This trend is particularly evident in the passage retrieval task investigated at the TREC (Text REtrieval Conference) Genomics Track (Hersh et al., 2006).", "cite_spans": [ { "start": 192, "end": 211, "text": "(Efthimiadis, 1996)", "ref_id": "BIBREF3" }, { "start": 283, "end": 309, "text": "(Ruthven and Lalmas, 2003)", "ref_id": "BIBREF9" }, { "start": 770, "end": 790, "text": "(Hersh et al., 2006)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we investigate the impact of various expansion term types on passage retrieval effectiveness in the biomedical domain. Our results show that expanding with ontologically related words (synonyms, hypernyms, hyponyms) significantly improves performance; however, abbreviation expansion produces inconsistent results, similar to those seen in general-domain expansion experiments. One would expect the performance of IR systems that index paragraphs rather than full-text articles to benefit greatly from this sort of expansion, since abbreviations are typically defined only once in an entire document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We report the results of our investigation on the TREC 2006 Genomics retrieval task. We compare two abbreviation expansion techniques: the first adds abbreviations found in the ADAM database of abbreviations (Zhou et al., 2006a); the second uses a pseudo-relevance feedback strategy to identify query term abbreviations in the full-text documents of an initial set of retrieved passages. Despite the benefit of mutual disambiguation across query terms, referred to as the query term collocation effect (Krovetz and Croft, 1992), both approaches reduce retrieval effectiveness, leading to the conclusion that abbreviation resolution in the document collection is more appropriate than expansion.", "cite_spans": [ { "start": 207, "end": 227, "text": "(Zhou et al., 2006a)", "ref_id": "BIBREF13" }, { "start": 503, "end": 528, "text": "(Krovetz and Croft, 1992)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another contribution of this paper is our novel concept-based IR ranking method. This ranking method is an adaptation of the Okapi method, enhanced to deal with multi-concept queries derived from natural language questions. 
Our method ensures that passages containing at least one occurrence of all the query concepts outrank passages that contain many occurrences of only one of the concepts. We also describe a paragraph reduction strategy that increases the TREC-defined answer extraction accuracy score of our system. Finally, we discuss our plans for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Biomedical text retrieval is a very active area of research, driven by the biomedical community's need for high-precision systems that answer specific biological questions not captured in the plethora of database resources (of varying quality) containing different types of biological information. Two distinct user information needs have recently been investigated by the IR community: clinical text retrieval (which supports patient-centred clinical research or care) and functional genomic text retrieval (which supports researchers involved in laboratory experiments). In this paper, we focus on genomic retrieval. An interesting overview of evidence-based medical retrieval in the clinical domain can be found in (Lin and Demner-Fushman, 2006). Functional Genomics is the study of gene and protein function and interaction at a molecular level, and the effects of this interaction on the biological processes that result in phenotypic outcomes (such as disease) in organisms. An important yet very time-consuming part of the functional genomics pipeline involves arriving at biologically motivated explanations for the output of bioinformatics-based clustering techniques such as gene expression profiling. Since a single experiment can involve thousands of genes, even a competent biologist needs to turn to a search engine to determine whether the functional dependencies found in these clusters make sense.", "cite_spans": [ { "start": 718, "end": 748, "text": "(Lin and Demner-Fushman, 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval for Functional Genomics", "sec_num": "2" }, { "text": "The TREC Genomics Track was established in 2003 with the aim of supporting the evaluation of information retrieval systems capable of answering the types of questions typically posed by genomicists, such as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval for Functional Genomics", "sec_num": "2" }, { "text": "\u2022 What is the role of gene A in disease B?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval for Functional Genomics", "sec_num": "2" }, { "text": "\u2022 What effect does gene A have on a particular biological process?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval for Functional Genomics", "sec_num": "2" }, { "text": "\u2022 How do genes A and B interact in the function of a specific organ?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval for Functional Genomics", "sec_num": "2" }, { "text": "\u2022 How do mutations in gene A influence a particular biological process?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval for Functional Genomics", "sec_num": "2" }, { "text": "Each of these four query templates was investigated at the 2006 Genomics Track. 
In all, 28 queries were evaluated on a collection of full-text journal papers, where the task was to retrieve relevant answer passages rather than full-text documents. In the following section we describe our novel genomic retrieval system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval for Functional Genomics", "sec_num": "2" }, { "text": "In this section, we describe the different components of our Genomic IR architecture. Our IR system is a version of the Zettair engine 1 that we have specifically modified for passage retrieval and biomedical query term expansion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "3" }, { "text": "The TREC collection consists of full-text journal articles obtained by crawling the Highwire site 2 . The full collection contains 162,259 documents and is about 12.3 GB in size when uncompressed. After preprocessing, the whole collection is reduced to 7.9 GB. The collection is pre-processed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collection Preprocessing", "sec_num": null }, { "text": "Paragraph Segmentation: for evaluation purposes the Genomics Track requires that ranked answer passages fall within specified paragraph boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collection Preprocessing", "sec_num": null }, { "text": "Sentence Segmentation: all sentences within paragraphs are segmented using an open source tool. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collection Preprocessing", "sec_num": null }, { "text": "Character Replacement: Greek characters represented by gifs are replaced by textual encodings; accented characters such as \"\u00c0\" or \"\u00c1\" are replaced by \"A\"; Roman numerals are replaced by Arabic numerals. These replacements are very important for capturing variations in gene names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collection Preprocessing", "sec_num": null }, { "text": "Removal: all HTML tags, very short sentences, paragraphs with the heading Abbreviations, figures, tables and some special characters such as hyphens, slashes and asterisks are removed; (Trieschnigg et al., 2006) have shown that small changes in the tokenisation strategy such as these improve the performance of biomedical IR.", "cite_spans": [ { "start": 185, "end": 211, "text": "(Trieschnigg et al., 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Collection Preprocessing", "sec_num": null }, 
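{ "text": "To make the character replacement step concrete, the following minimal sketch (our illustration, not the system's actual code; the replacement tables are assumed examples only) normalises a string in the way just described:

GREEK = {'\u03b1': 'alpha', '\u03b2': 'beta'}        # Greek characters -> textual encodings
ACCENTS = {'\u00c0': 'A', '\u00c1': 'A'}             # accented characters -> plain ASCII
ROMAN = {'I': '1', 'II': '2', 'III': '3', 'IV': '4'}  # Roman -> Arabic numerals

def normalise(text):
    for table in (GREEK, ACCENTS):
        for src, dst in table.items():
            text = text.replace(src, dst)
    # Token-level replacement of Roman numerals.
    return ' '.join(ROMAN.get(tok, tok) for tok in text.split(' '))

print(normalise('factor II \u00c0lpha'))  # -> 'factor 2 Alpha'
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collection Preprocessing", "sec_num": null }, 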
{ "text": "Once the collection has been indexed, querying can begin. In the 2006 Genomics Track, each query or topic contains at least two biological concepts or entities, which could be a gene (\"NM23\"), a protein (\"p53\"), a disease (\"ovarian cancer\") or a biological process (\"ethanol metabolism\"). TREC simplifies the query preprocessing task by ensuring that all topics conform to the query templates discussed in Section 2. The following is a sample query, Topic 173 from the 2006 track, which contains two concepts: \"PrnP\" (a gene) and \"mad cow disease\" (a disease):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Expansion", "sec_num": null }, { "text": "What is the role of PrnP in mad cow disease?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Expansion", "sec_num": null }, { "text": "Our query expansion process proceeds as follows. First, each gene or protein in the query is expanded with entries from the Entrez Gene database. 4 Since the same gene may occur in many different species, and many of their synonyms differ only with respect to capitalisation, we choose the first entry retrieved that belongs to the species Homo sapiens. Then, terms in the gene's Official Symbol, Name, Other Aliases and Other Designations fields are added to the query.", "cite_spans": [ { "start": 146, "end": 147, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Query Expansion", "sec_num": null }, { "text": "For all disease and biological process mentions in the query, we use the MeSH 5 taxonomy of medical terms to find their synonyms (using the Entry Terms and See Also fields). The terms' hyponyms (descendants) and hypernyms (ancestors) in the MeSH tree structure are also used as expansion terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Expansion", "sec_num": null }, { "text": "As well as expanding with synonyms, we use a \"gene variant\" generation tool to generate all the possible variants of both original query terms and expanded terms. Our segmentation rules are similar to those used by (Buttcher et al., 2004). The rules are as follows:", "cite_spans": [ { "start": 216, "end": 239, "text": "(Buttcher et al., 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Gene Variant Generation", "sec_num": null }, { "text": "Any position in a gene name at which there is a hyphen or other punctuation, a change from lower case to upper case, a change from a letter to a digit (or vice versa), or a Greek character (e.g. \"alpha\") is called a split point. A word is split at all of its split points, and all variants are generated by concatenating the resulting parts, optionally with a space inserted at each split point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gene Variant Generation", "sec_num": null }, { "text": "Greek characters are also mapped to English variants, e.g. \"alpha\" is mapped to \"a\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gene Variant Generation", "sec_num": null }, { "text": "For example, for the query term \"Sec61alpha\", we would generate the following lexical variants, which are also commonly used forms of this term in the collection: \"Sec 61alpha\", \"Sec61 alpha\", \"Sec 61 alpha\", \"Sec 61a\", \"Sec61 a\", \"Sec 61 a\", \"Sec61a\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gene Variant Generation", "sec_num": null }, { "text": "In phrases, we replace hyphens (\"-\"), slashes (\"/\") and asterisks (\"*\") in the queries with spaces. For example, \"subunit 1 BRCA1 BRCA2 containing complex\" is a variant of \"subunit 1 BRCA1/BRCA2-containing complex\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gene Variant Generation", "sec_num": null }, 
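{ "text": "The following minimal sketch (ours, for illustration only; the split-point detection is simplified to case changes and letter/digit boundaries, and all names are hypothetical) enumerates such variants from a term's split points:

from itertools import product

GREEK = {'alpha': 'a', 'beta': 'b'}  # illustrative Greek-to-English mapping

def split_parts(name):
    # Simplified split-point detection, e.g. 'Sec61alpha' -> ['Sec', '61', 'alpha'].
    parts, current = [], name[0]
    for prev, ch in zip(name, name[1:]):
        if (prev.islower() and ch.isupper()) or (prev.isdigit() != ch.isdigit()):
            parts.append(current)
            current = ''
        current += ch
    parts.append(current)
    return parts

def variants(name):
    parts = split_parts(name)
    # Each Greek part may also appear in its single-letter English form.
    alternatives = [[p, GREEK[p.lower()]] if p.lower() in GREEK else [p] for p in parts]
    forms = set()
    for chosen in product(*alternatives):
        for joiners in product([' ', ''], repeat=len(chosen) - 1):
            form = chosen[0]
            for join, part in zip(joiners, chosen[1:]):
                form += join + part
            forms.add(form)
    return forms

print(sorted(variants('Sec61alpha')))

For \"Sec61alpha\" this produces exactly the space/no-space combinations listed above (including the original form and the Greek-to-English \"a\" variants).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gene Variant Generation", "sec_num": null }, 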
{ "text": "Our document ranking method is based on the Okapi model (Robertson et al., 1994). Many participant systems at the TREC Genomics Track use the Okapi method to rank documents with respect to their similarity to the query. However, there are two fundamental problems with using this model on TREC Genomics queries.", "cite_spans": [ { "start": 56, "end": 80, "text": "(Robertson et al., 1994)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "The first problem is that Okapi does not differentiate between concept terms and general terms in the query. For example, consider two documents, one containing the terms \"mad cow disease\" and \"PrnP\", and the other containing the terms \"role\" and \"PrnP\". Clearly the first document, which contains both biological concepts, is more relevant. The second problem occurs because TREC 2006 topics contain more than one concept term. It is possible that a short paragraph that discusses only one concept will be ranked higher than a longer paragraph that mentions two concepts. Again, this is an undesirable outcome.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "To overcome these problems, a Conceptual IR model was proposed in (Zhou et al., 2006b). In this paper we propose another method, concept-based query normalisation, which is based on the Okapi model and is similar to the method introduced in (Li, 2007; Stokes et al., 2008) for geospatial IR.", "cite_spans": [ { "start": 66, "end": 86, "text": "(Zhou et al., 2006b)", "ref_id": "BIBREF14" }, { "start": 248, "end": 258, "text": "(Li, 2007;", "ref_id": "BIBREF6" }, { "start": 259, "end": 279, "text": "Stokes et al., 2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "The first problem is solved by dividing query terms into two types: general terms $t_g$ and concept terms $t_c$. Given a query with both concept and general terms, the similarity between a query $Q$ and a document $D_d$ is measured as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "$\\mathrm{sim}(Q, D_d) = \\mathrm{gsim}(Q, D_d) + \\mathrm{csim}(Q, D_d)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "where $\\mathrm{gsim}(Q, D_d)$ is the general similarity score and $\\mathrm{csim}(Q, D_d)$ is the concept similarity score. The general similarity score is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "$\\mathrm{gsim}(Q, D_d) = \\sum_{t \\in Q_g} \\mathrm{sim}_t(Q, D_d) = \\sum_{t \\in Q_g} r_{d,t} \\cdot w_t \\cdot r_{q,t}$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "where $Q_g$ is the aggregation of all general terms/phrases in the query. The concept similarity score is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "$\\mathrm{csim}(Q, D_d) = \\sum_{C \\in Q_c} \\mathrm{sim}_C(Q, D_d) = \\sum_{C \\in Q_c} \\mathrm{Norm}\\big(\\mathrm{sim}_{t_1}(Q, D_d), \\ldots, \\mathrm{sim}_{t_N}(Q, D_d)\\big) = \\sum_{C \\in Q_c} \\Big(\\mathrm{sim}_{t_1} + \\frac{\\mathrm{sim}_{t_2}}{a} + \\cdots + \\frac{\\mathrm{sim}_{t_N}}{a^{N-1}}\\Big)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "where $Q_c$ is the aggregation of all concepts in the query, $C$ is one concept in $Q_c$, and $t_i$ is a term/phrase in the (expanded) query that belongs to the concept $C$; the $t_i$ are listed in descending order of their Okapi similarity scores $\\mathrm{sim}_{t_1}, \\ldots, \\mathrm{sim}_{t_N}$:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "$\\mathrm{sim}_t(Q, D_d) = r_{d,t} \\cdot w_t \\cdot r_{q,t}$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "$r_{d,t} = \\frac{(k_1 + 1) \\cdot f_{d,t}}{k_1 \\cdot \\big[(1 - b) + b \\cdot \\frac{W_d}{\\mathrm{avg}W_d}\\big] + f_{d,t}}, \\qquad w_t = \\log \\frac{N - \\max(f_t, f_{t_q}) + 0.5}{\\max(f_t, f_{t_q}) + 0.5} \\quad (1), \\qquad r_{q,t} = \\frac{(k_3 + 1) \\cdot f_{q,t}}{k_3 + f_{q,t}}$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "where $k_1$ and $b$ are usually set to 1.2 and 0.75 respectively, and $k_3$ can be taken to be $\\infty$. Variable $W_d$ is the length of the document $d$ in bytes; $\\mathrm{avg}W_d$ is the average document length in the entire collection; $N$ is the total number of documents in the collection; $f_t$ is the number of documents in which term $t$ occurs; and $f_{d,t}$ and $f_{q,t}$ are the frequencies of term $t$ in a document $d$ and in the query $q$ respectively. Note that (1) adjusts the calculation of the weight $w_t$ of an expansion term $t$ appearing in the query: the expansion term's own frequency $f_t$ is compared with the corresponding original query term's frequency $f_{t_q}$, and the larger value is used; this ensures that the expansion term contributes an appropriately normalised \"concept weight\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "To solve the second problem, we use the following rules to ensure that, for two passages $P_1$ and $P_2$ where one contains more unique concepts than the other, the number of concepts ConceptNum(P) overrides the Okapi score Score(P) and assigns a higher rank to the passage with more unique concepts:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, { "text": "if ConceptNum(P1) > ConceptNum(P2) then Rank(P1) > Rank(P2)
else if ConceptNum(P1) < ConceptNum(P2) then Rank(P2) > Rank(P1)
else if Score(P1) \u2265 Score(P2) then Rank(P1) > Rank(P2)
else Rank(P2) > Rank(P1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, 
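{ "text": "To illustrate the normalisation and ranking rules above, here is a minimal sketch (our illustration, not the authors' implementation; the per-term Okapi scores are given as inputs, and the decay constant a = 2 is an assumption, since its value is not reported in the paper):

A = 2.0  # decay constant 'a'; the actual value used is not given in the paper

def concept_score(term_sims):
    # Norm(...): the best-matching term of a concept counts fully, and each
    # successive term is damped by a further factor of a.
    sims = sorted(term_sims, reverse=True)
    return sum(s / (A ** i) for i, s in enumerate(sims))

def rank_key(passage):
    # Rank first by the number of unique query concepts matched,
    # then by the concept-normalised Okapi score.
    return (passage['concept_num'], passage['score'])

passages = [
    {'id': 1, 'concept_num': 2, 'score': concept_score([2.0, 1.5])},
    {'id': 2, 'concept_num': 1, 'score': concept_score([9.7])},
]
ranked = sorted(passages, key=rank_key, reverse=True)
# Passage 1 outranks passage 2 despite its lower Okapi-style score.
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept-based Query Normalisation", "sec_num": null }, 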
{ "text": "Although MeSH and Entrez Gene contain many synonyms and related terms, one important type of lexical variant, the abbreviation, has very low coverage in both databases. For example, \"AD\" is a commonly used abbreviation for \"Alzheimer's Disease\". Since the long and short forms (\"Alzheimer's Disease (AD)\") appear together only at the beginning of each journal document, many relevant passages will contain \"AD\" alone and so will appear less relevant than they should against a query containing \"Alzheimer's Disease\". Hence, expanding the given query with \"AD\" should improve retrieval effectiveness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abbreviation Finder", "sec_num": null }, { "text": "As already mentioned, there are two methods for collecting abbreviations from the literature: the first uses the static resource ADAM (Zhou et al., 2006a), while the second uses our pseudo-relevance feedback method for extracting these abbreviations at run time. The advantage of the latter approach is that it collects abbreviations dynamically and so does not suffer from the coverage and update problems of static resources like ADAM. The following is an overview of how our abbreviation feedback step contributes to the retrieval process:", "cite_spans": [ { "start": 134, "end": 154, "text": "(Zhou et al., 2006a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Abbreviation Finder", "sec_num": null }, { "text": "1. Retrieve the first 1000 documents which include at least one instance of each concept in the query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abbreviation Finder", "sec_num": null }, { "text": "2. From this subset of documents, find terms which fit the pattern \"Term (Abbr)\", where \"Term\" is a concept in the query (original or expanded) and \"Abbr\" is the abbreviation or synonym defined in the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abbreviation Finder", "sec_num": null }, { "text": "3. Among all the detected abbreviations or synonyms, remove all the multi-word terms, terms that do not have any overlapping characters with the original term, and terms which occur fewer than three times.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abbreviation Finder", "sec_num": null }, { "text": "4. For all remaining abbreviations or synonyms, use the above generation tool to formulate all their lexical variants, and add them to the query. The expanded query is then re-submitted to the retrieval engine, and the passage extraction step, described below, is applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abbreviation Finder", "sec_num": null }, 
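{ "text": "Steps 2 and 3 can be sketched as follows (our illustration, not the system's code; matching is simplified to the literal pattern and to character-level overlap):

from collections import Counter

def candidate_abbrevs(docs, concept):
    # Step 2: collect short forms defined by the pattern <concept> (<Abbr>).
    counts = Counter()
    marker = concept.lower() + ' ('
    for doc in docs:
        low = doc.lower()
        start = low.find(marker)
        while start != -1:
            end = doc.find(')', start + len(marker))
            if end != -1:
                counts[doc[start + len(marker):end].strip()] += 1
            start = low.find(marker, start + 1)
    # Step 3: filter the candidates.
    kept = []
    for abbr, n in counts.items():
        if ' ' in abbr:                                   # single word only
            continue
        if not set(abbr.lower()) & set(concept.lower()):  # must share characters
            continue
        if n < 3:                                         # at least three occurrences
            continue
        kept.append(abbr)
    return kept

docs = ['bovine spongiform encephalopathy (BSE) is a prion disease.'] * 3
print(candidate_abbrevs(docs, 'bovine spongiform encephalopathy'))  # ['BSE']
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abbreviation Finder", "sec_num": null }, 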
{ "text": "As already mentioned, the 2006 Genomics Track defined a new question answering-style task that requires short full-sentence answers to be retrieved in response to a particular query. However, before answer passages can be generated, we first retrieve the first 1000 ranked paragraphs for each topic, and use the following simple rules to reduce these paragraphs to answer spans. Two methods are examined in this paper, which are best described with an example. Given a paragraph consisting of a set of sentences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Extraction", "sec_num": null }, { "text": "{(s_1, i), (s_2, i), (s_3, r), (s_4, r), (s_5, i), (s_6, r), (s_7, i), (s_8, i), (s_9, r), (s_10, i)},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Extraction", "sec_num": null }, { "text": "where r marks a relevant sentence (that is, one that mentions at least one query term) and i an irrelevant one. Method A shortens a paragraph by removing irrelevant sentences from its start and end until a relevant sentence is detected. Hence, it would produce the following passage of sentences:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Extraction", "sec_num": null }, { "text": "{(s_3, r), (s_4, r), (s_5, i), (s_6, r), (s_7, i), (s_8, i), (s_9, r)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Extraction", "sec_num": null }, { "text": "This extraction method does not split a paragraph into multiple passages if irrelevant sentences occur within the resultant passage. Method B, on the other hand, addresses this issue by splitting a passage if there are two or more consecutive irrelevant sentences within the span. Hence, Method B would produce the following two passages for this paragraph:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Extraction", "sec_num": null }, { "text": "{(s_3, r), (s_4, r), (s_5, i), (s_6, r)} and {(s_9, r)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Extraction", "sec_num": null }, { "text": "After one of these passage extraction techniques has been applied for a particular topic, we re-rank the passages by re-indexing them and re-querying the topic against this new index, using the global statistics from the original indexed collection, i.e. the term frequency f_t and the average paragraph length avgW_d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Extraction", "sec_num": null }, { "text": "We used the TREC 2006 Genomics Track evaluation resources to determine the effectiveness of our system. The TREC 2006 collection consists of 162,259 full-text documents from 49 journals published electronically via the Highwire Press website 6 . The track also provided 28 topics expressed as natural language questions, formatted with respect to the four general topic templates. Participants were asked to submit the first 1,000 ranked passages returned by their system for each of the topics (Hersh et al., 2006). Passages in this task are defined as text sequences that cannot cross paragraph boundaries (delimited by HTML tags), and are subsets of the original paragraphs in which they occur. As is the custom at TREC, human judges were used to decide the relevance of passages in the pooled participating system results. These judges also defined exact passage boundaries, and assigned topic tags called aspects from a controlled vocabulary of MeSH terms to each relevant answer retrieved.", "cite_spans": [ { "start": 490, "end": 509, "text": "(Hersh et al., 2006", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Evaluation Metrics", "sec_num": "4.1" }, { "text": "Mean Average Precision, or MAP, is a popular IR metric for evaluating system effectiveness. The TREC Genomics Track defines three versions of the MAP score, calculated at different levels of granularity: Document, Passage and Aspect. Traditionally, the MAP score is defined as follows: first, the average of the precision values at each recall point in a topic's ranked document list is calculated; then, the mean of these per-topic average precisions is determined. 
Since the retrieval task at the Genomics Track is a question answering-style task, a metric that is sensitive to the length of the retrieved answer was also developed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Evaluation Metrics", "sec_num": "4.1" }, { "text": "Passage MAP is similar to Document MAP, except that precision at each point in the ranked list is calculated as the number of characters in the system passages that overlap with the gold standard answers, divided by the total number of characters in every passage retrieved up to that point. Hence, a system is penalised for all additional characters retrieved that are not part of the human-evaluated answer passages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Evaluation Metrics", "sec_num": "4.1" }, { "text": "The TREC organisers also wanted to measure to what extent a particular passage captured all the necessary information required in the answer. Judges were asked to assign at least one MeSH heading to each relevant passage. Aspect average precision is then measured over the number of aspects (MeSH headings) captured by all the relevant documents up to each recall point in the ranked list for a particular query. Relevant passages that did not contribute any new aspect beyond those retrieved by higher-ranked passages were removed from the ranking. Aspect MAP is defined as the mean of these per-topic average precision scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Evaluation Metrics", "sec_num": "4.1" }, 
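{ "text": "As a worked illustration of the character-based Passage MAP idea (our simplified reading of the track definition, for a single topic, with character offsets as the unit of relevance):

def passage_precisions(retrieved, gold_chars):
    # retrieved: ranked list of (start, end) character spans;
    # gold_chars: set of character offsets inside gold standard answers.
    seen, overlap, precisions = 0, 0, []
    for start, end in retrieved:
        span = set(range(start, end))
        seen += len(span)
        overlap += len(span & gold_chars)
        precisions.append(overlap / seen)
    return precisions

gold = set(range(100, 200))
print(passage_precisions([(100, 200), (300, 400)], gold))  # [1.0, 0.5]
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Evaluation Metrics", "sec_num": "4.1" }, 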
{ "text": "In this section, we examine the increased effectiveness obtained when different expansion information is added to the original query. We also evaluate the effect of our proposed abbreviation feedback technique, and of our novel answer extraction module, on system performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "As explained in Section 3, our system uses Entrez Gene to expand genes with their synonymous instances. In addition, lexical variants are generated for both original and expanded terms as described in Section 3, while other biological entities in the query (e.g., diseases) are expanded using MeSH. Table 1 presents the MAP scores for the following system runs:", "cite_spans": [], "ref_spans": [ { "start": 292, "end": 299, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "\u2022 Baseline: Zettair system with no expansion", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "\u2022 SYN: query expansion using Entrez Gene and MeSH expansion (Synonym and See Also entries in MeSH) of query terms", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "\u2022 SYN+HYPO: query expansion using Entrez Gene and MeSH expansion, including Hyponyms (i.e., specialisations)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "\u2022 SYN+HYPER: query expansion using Entrez Gene and MeSH expansion, including Hypernyms (i.e., generalisations)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "\u2022 SYN+HYPER+VAR: query expansion using Entrez Gene, Gene Variant Generation, and MeSH expansion, including Hypernyms. All expansion run MAP scores show a statistically significant 7 improvement over the baseline MAP. The only expansion experiment that does not incrementally improve the results is the addition of hyponym terms (i.e. specialisations) from MeSH. On the other hand, hypernyms (i.e. generalisations) improve the performance of the SYN run by nearly 5%. This result may be explained by the fact that, at a passage level, generalised expressions are commonly used to refer to query terms that have been discussed earlier in the document. For example, the following sentence is clearly relevant to the mad cow disease query presented in Section 3: \"These prion diseases are characterised by the accumulation of an abnormal (aberrantly folded) isoform of a cellular host protein PrPC\". However, it would only be ranked highly if the generalisation relationship from mad cow disease to prion disease had been established. Expanding a query term with its immediate parent terms in the different MeSH hierarchies usually results in a few focussed terms being added to the query. In contrast, adding specialisations may result in a much larger number of term additions, depending on the generality of the query term. For example, the term neurons has 18 unique subcategories one level below its position in the MeSH hierarchy and many more beyond this level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "Our best system run (SYN+HYPER+VAR) used ontological and gene variant expansion, and achieved a 97.6% increase in Passage MAP over the baseline run. Similarly large increases in Aspect and Document MAP were also observed. A detailed analysis showed that many passages had been either missed or ranked lower than expected by our system due to the occurrence of query term abbreviations in the relevant passage. These abbreviations were not captured in either of our ontological resources. Table 2 compares the performance of the two abbreviation expansion strategies described in Section 3. Ontological expansion using the ADAM abbreviation database reduces our best Passage MAP score by 36%. 
Our abbreviation feedback loop performs better, producing a small increase in Document MAP over the baseline, but slightly lower Aspect and Passage MAPs. In some respects, this feedback result is disappointing, as a manual analysis of the added abbreviations shows that many useful synonyms were added to the query, which should, in theory, help to retrieve additional passages and boost the rankings of other relevant passages.", "cite_spans": [], "ref_spans": [ { "start": 488, "end": 495, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "However, there is one big drawback to abbreviation expansion that is not characteristic of the other types of expansion we have explored: abbreviations are much more ambiguous. For example, the abbreviation \"AD\" is a very commonly used reference to \"Alzheimer's disease\"; however, according to ADAM, \"AD\" has 35 unique long forms defined in MEDLINE abstracts, and can also refer to the phrases \"after discharge\", \"autosomal dominant\", \"autistic disorder\", and other unrelated concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "IR researchers have found that query-term ambiguity is less of a problem than one might expect because of the query term collocation effect (Krovetz and Croft, 1992): query terms mutually disambiguate each other because their intended senses tend to co-occur in relevant documents in the collection. For example, for the query term \"cell\", adding the term \"blood\" to the query ensures that documents using the biological sense are ranked higher. Hence, one would expect that, despite abbreviation ambiguity, large gains in IR effectiveness would be possible using expansion. However, when the total number of possible unabbreviated forms is factored into the expansion process, it is clear that an excessive amount of ambiguity is introduced.", "cite_spans": [ { "start": 140, "end": 165, "text": "(Krovetz and Croft, 1992)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "A manual analysis of the results backs up this observation: although new relevant passages containing abbreviations are retrieved, paragraph ranking is affected to such an extent that previously retrieved passages \"drop out\" of the top 1000 items in the ranked list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "However, our results also show that dynamic abbreviation expansion does not degrade performance as dramatically as expansion with ADAM. The feedback process ensures that only abbreviations that occur in the documents of highly ranked passages, which mention all query concepts, are added to the query. Thus, these abbreviations have the highest potential for providing a positive impact on retrieval effectiveness. The general conclusion from these abbreviation expansion experiments is clear: knowledge of these synonymous instances is obviously beneficial, but a method that reduces the impact of their high ambiguity is necessary. 
We discuss our proposed solution to this problem in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "Our final experiment (see Table 3 ) shows that the TREC Passage MAP score can be increased by capturing the exact answer span in each relevant paragraph. Section 3 proposed two methods for achieving this: Method A finds the longest text span in a paragraph that contains all query terms; Method B splits the span and removes sentences if there is a distance of one or more sentences between consecutive mentions of any of the query terms. Both reduction methods show improvements in Passage MAP, but at the expense of the other two metrics. This is to be expected, especially in the case of Method B, since splitting paragraphs means some relevant passages may receive a lower rank or even drop out of the top 1000 passages. Table 4 shows how our best run (Best+B) performs with respect to systems that participated in the official TREC 2006 Genomics Track. TREC MEDIAN is the median value for each MAP score reported at TREC. UIC TREC 8 was the top performing system submitted by the University of Illinois at Chicago, and UIC SIGIR is the best post-submission Passage MAP score, which was also published by the same group (Zhou et al., 2007). If our system had participated in the TREC track, it would have ranked 6th for Passage MAP, 3rd for Aspect MAP and 4th for Document MAP out of 92 submitted runs. 8 The official name for this run was UICGenRun3. ", "cite_spans": [ { "start": 1117, "end": 1135, "text": "(Zhou et al., 2007", "ref_id": "BIBREF15" }, { "start": 1296, "end": 1297, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 26, "end": 33, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 720, "end": 727, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.2" }, { "text": "The most successful systems at the TREC Genomics Track 2006 used a combination of expansion techniques from external resources, such as publicly available and hand-crafted thesauri, in addition to lexical variant generation techniques similar to the one described in this paper. One of the principal contributions of this paper is our detailed analysis of which types of ontologically related terms (synonyms, hyponyms, hypernyms, lexical variants, abbreviations) provide the most impact when used as expansion terms. In particular, we have focussed on abbreviation expansion, which has high potential for impact when passages rather than full documents are being retrieved. However, our experiments show that the high ambiguity of abbreviations can in some cases reduce retrieval effectiveness. There are two possible solutions to the abbreviation ambiguity problem. In the first, all abbreviations in the collection are identified in advance of indexing, and a unique identifier is assigned to each long-form and its corresponding abbreviated short-form. Hence, when the query is expanded, the unique identifier rather than the lexical form of the abbreviation is added to the query. Similarly, all abbreviations in the collection are replaced by their identifier before passage indexing occurs. Another possible approach would be to explicitly add the long-forms of abbreviations in a passage to its index entry. This is a document expansion rather than a query expansion strategy. 
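As a minimal sketch of the first approach (ours and speculative: the identifiers and the mapping are hypothetical, and matching is token-level only for brevity), abbreviation resolution prior to indexing might look like:

ABBREV_IDS = {'ad': 'CONCEPT_0042', 'alzheimer': 'CONCEPT_0042'}  # hypothetical mapping

def resolve(tokens):
    # Map each known short or long form to its shared concept identifier,
    # so that both surface forms index to the same posting list.
    return [ABBREV_IDS.get(tok.lower(), tok) for tok in tokens]

print(resolve(['AD', 'patients']))  # ['CONCEPT_0042', 'patients']

The same resolve step would be applied to expanded queries, so that matching happens on identifiers rather than on ambiguous surface forms. 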
We plan to investigate both of these methods in our future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusions", "sec_num": "5" }, { "text": "Another area for potential improvement that we wish to investigate further is paragraph reduction. Passage MAP is severely affected by long answer text spans. Paragraph reduction is similar to answer extraction in factoid-based Question-Answering tasks. However, researchers have only recently begun to investigate answer extraction for more complex question types, such as Why or How questions, in an ad hoc retrieval setting (Allan, 2005). The Document Understanding Conference (DUC), which focusses on summarisation tasks, is also looking at complex questions; however, answers are typically generated by collating information from multiple documents (Dang, 2006).", "cite_spans": [ { "start": 424, "end": 437, "text": "(Allan, 2005)", "ref_id": "BIBREF0" }, { "start": 652, "end": 664, "text": "(Dang, 2006)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusions", "sec_num": "5" } ], "back_matter": [ { "text": "http://www.seg.rmit.edu.au/zettair/ 2 http://www.highwire.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://l2r.cs.uiuc.edu/~cogcomp/atool.php?tkey=SS 4 http://www.ncbi.nlm.nih.gov/sites/entrez?db=gene", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.nlm.nih.gov/mesh", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "More information on the TREC dataset can be found at: http://ir.ohsu.edu/genomics/2006data.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use a paired Wilcoxon signed-rank test at the 0.05 significance level to determine significance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Hard track overview in TREC 2005: High accuracy retrieval from documents", "authors": [ { "first": "J", "middle": [], "last": "Allan", "suffix": "" } ], "year": 2005, "venue": "The Fourteenth Text REtrieval Conference (TREC 2005) Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Allan. 2005. Hard track overview in TREC 2005: High accuracy retrieval from documents. In The Fourteenth Text REtrieval Conference (TREC 2005) Proceedings.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Domain-specific synonym expansion and validation for biomedical information retrieval", "authors": [ { "first": "S", "middle": [], "last": "Buttcher", "suffix": "" }, { "first": "C", "middle": [ "L A" ], "last": "Clarke", "suffix": "" }, { "first": "G", "middle": [ "V" ], "last": "Cormack", "suffix": "" } ], "year": 2004, "venue": "The Thirteenth Text REtrieval Conference (TREC 2004) Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Buttcher, C.L.A. Clarke, and G.V. Cormack. 2004. Domain-specific synonym expansion and validation for biomedical information retrieval. 
In The Thirteenth Text REtrieval Conference (TREC 2004) Proceedings.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Overview of DUC 2006", "authors": [ { "first": "H", "middle": [ "T" ], "last": "Dang", "suffix": "" } ], "year": 2006, "venue": "The Document Understanding Conference Workshop Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H.T. Dang. 2006. Overview of DUC 2006. In The Document Understanding Conference Workshop Proceedings.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Query expansion", "authors": [ { "first": "E", "middle": [ "N" ], "last": "Efthimiadis", "suffix": "" } ], "year": 1996, "venue": "Annual Review of Information Science and Technology", "volume": "31", "issue": "", "pages": "121--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. N. Efthimiadis. 1996. Query expansion. Annual Review of Information Science and Technology, 31:121-187.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "TREC 2006 Genomics Track overview", "authors": [ { "first": "W", "middle": [], "last": "Hersh", "suffix": "" }, { "first": "A", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "P", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "H", "middle": [], "last": "Rekapalli", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Hersh, A. Cohen, P. Roberts, and H. Rekapalli. 2006. TREC 2006 Genomics Track overview. (Voorhees and Buckland, 2006).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Lexical ambiguity and information retrieval", "authors": [ { "first": "Robert", "middle": [], "last": "Krovetz", "suffix": "" }, { "first": "W. Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 1992, "venue": "Information Systems", "volume": "10", "issue": "2", "pages": "115--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Krovetz and W. Bruce Croft. 1992. Lexical ambiguity and information retrieval. Information Systems, 10(2):115-141.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Probabilistic toponym resolution and geographic indexing and querying", "authors": [ { "first": "Yi", "middle": [], "last": "Li", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Li. 2007. Probabilistic toponym resolution and geographic indexing and querying. Master's thesis, The University of Melbourne.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The role of knowledge in conceptual retrieval: a study in the domain of clinical medicine", "authors": [ { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Dina", "middle": [], "last": "Demner-Fushman", "suffix": "" } ], "year": 2006, "venue": "SIGIR '06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "99--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jimmy Lin and Dina Demner-Fushman. 2006. The role of knowledge in conceptual retrieval: a study in the domain of clinical medicine. 
In SIGIR '06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 99-106.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Okapi at TREC-3", "authors": [ { "first": "S", "middle": [ "E" ], "last": "Robertson", "suffix": "" }, { "first": "S", "middle": [], "last": "Walker", "suffix": "" }, { "first": "S", "middle": [], "last": "Jones", "suffix": "" }, { "first": "M", "middle": [], "last": "Hancock-Beaulieu", "suffix": "" }, { "first": "M", "middle": [], "last": "Gatford", "suffix": "" } ], "year": 1994, "venue": "The Third Text Retrieval Conference (TREC 3) Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. E. Robertson, S. Walker, S. Jones, M. Hancock-Beaulieu, and M. Gatford. 1994. Okapi at TREC-3. In The Third Text Retrieval Conference (TREC 3) Proceedings, Gaithersburg, Maryland, November.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A survey on the use of relevance feedback for information access systems", "authors": [ { "first": "I", "middle": [], "last": "Ruthven", "suffix": "" }, { "first": "M", "middle": [], "last": "Lalmas", "suffix": "" } ], "year": 2003, "venue": "Knowledge Engineering Review", "volume": "18", "issue": "2", "pages": "95--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Ruthven and M. Lalmas. 2003. A survey on the use of relevance feedback for information access systems. Knowledge Engineering Review, 18(2):95-145.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An empirical study of the effects of NLP components on Geographic IR performance", "authors": [ { "first": "Nicola", "middle": [], "last": "Stokes", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Jiawen", "middle": [], "last": "Rong", "suffix": "" } ], "year": 2008, "venue": "International Journal of Geographical Information Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicola Stokes, Yi Li, Alistair Moffat, and Jiawen Rong. 2008. An empirical study of the effects of NLP components on Geographic IR performance. International Journal of Geographical Information Science. To appear.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The influence of basic tokenization on biomedical document retrieval", "authors": [ { "first": "D", "middle": [], "last": "Trieschnigg", "suffix": "" }, { "first": "W", "middle": [], "last": "Kraaij", "suffix": "" }, { "first": "F", "middle": [], "last": "De Jong", "suffix": "" } ], "year": 2006, "venue": "SIGIR 2007 Proceedings, Amsterdam", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Trieschnigg, W. Kraaij, and F. de Jong. 2006. The influence of basic tokenization on biomedical document retrieval. In SIGIR 2007 Proceedings, Amsterdam, The Netherlands, July.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Fifteenth Text REtrieval Conference (TREC 2006) Proceedings. NIST", "authors": [ { "first": "E", "middle": [ "M" ], "last": "Voorhees", "suffix": "" }, { "first": "Lori", "middle": [ "P" ], "last": "Buckland", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. M. Voorhees and Lori P. Buckland. 2006. 
The Fifteenth Text REtrieval Conference (TREC 2006) Proceedings. NIST, Gaithersburg, Maryland.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "ADAM: another database of abbreviations in MEDLINE", "authors": [ { "first": "W", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "V", "middle": [ "I" ], "last": "Torvik", "suffix": "" }, { "first": "N", "middle": [ "R" ], "last": "Smalheiser", "suffix": "" } ], "year": 2006, "venue": "Bioinformatics", "volume": "22", "issue": "22", "pages": "2813--2818", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Zhou, V. I. Torvik, and N. R. Smalheiser. 2006a. ADAM: another database of abbreviations in MEDLINE. Bioinformatics, 22(22):2813-2818.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A concept-based framework for passage retrieval in genomics", "authors": [ { "first": "W", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "C", "middle": [], "last": "Yu", "suffix": "" }, { "first": "V", "middle": [], "last": "Tovik", "suffix": "" }, { "first": "N", "middle": [], "last": "Smalheiser", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Zhou, C. Yu, V. Tovik, and N. Smalheiser. 2006b. A concept-based framework for passage retrieval in genomics. (Voorhees and Buckland, 2006).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Knowledge-intensive conceptual retrieval and passage extraction of biomedical literature", "authors": [ { "first": "Wei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Neil", "middle": [], "last": "Smalheiser", "suffix": "" }, { "first": "Vetle", "middle": [], "last": "Torvik", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Hong", "suffix": "" } ], "year": 2007, "venue": "SIGIR '07: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "655--662", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Zhou, Clement Yu, Neil Smalheiser, Vetle Torvik, and Jie Hong. 2007. Knowledge-intensive conceptual retrieval and passage extraction of biomedical literature. In SIGIR '07: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, pages 655-662, New York, NY, USA. ACM Press.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
Run | Passage MAP | Aspect MAP | Document MAP
Baseline | 0.0480 | 0.1838 | 0.3355
SYN | 0.0888\u2020 (+85.0%, P = 0.005) | 0.3499\u2020 (+90.3%, P < 0.001) | 0.4711\u2020 (+40.4%, P = 0.008)
SYN+HYPO | 0.0878\u2020 (+83.0%, P = 0.007) | 0.3417\u2020 (+85.9%, P = 0.001) | 0.4632\u2020 (+38.1%, P = 0.02)
SYN+HYPER | 0.0933\u2020 (+94.4%, P < 0.001) | 0.3695\u2020 (+101%, P < 0.001) | 0.4843\u2020 (+44.3%, P = 0.002)
SYN+HYPER+VAR | 0.0949\u2020 (+97.6%, P < 0.001) | 0.3827\u2020 (+108%, P < 0.001) | 0.5080\u2020 (+51.4%, P < 0.001)
", "type_str": "table", "html": null, "text": "Table showing improvement in MAP score obtained over baseline MAP when the query is expanded with various combinations of related terms: synonyms (SYN), hyponyms (HYPO), hypernyms (HYPER) and gene lexical variants (VAR)", "num": null }, "TABREF1": { "content": "
Run | Passage MAP | Aspect MAP | Document MAP
SYN+HYPER+VAR | 0.0949 | 0.3827 | 0.5080
SYN+HYPER+VAR+Adam | 0.0600\u2020 (\u221236.8%, P < 0.001) | 0.2387\u2020 (\u221237.6%, P < 0.001) | 0.4105\u2020 (\u221219.2%, P = 0.001)
SYN+HYPER+VAR+Abbr | 0.0920 (\u22123.06%, P = 0.3) | 0.3784 (\u22121.12%, P = 0.4) | 0.5171 (+1.79%, P = 0.3)
", "type_str": "table", "html": null, "text": "Tableshowing effect on system performance when additional expansion terms are added from the ADAM abbreviation (+Adam) database and our system Abbreviation feedback loop (+Abbr).", "num": null }, "TABREF2": { "content": "
Run | Passage MAP | Aspect MAP | Document MAP
Best | 0.0920 | 0.3784 | 0.5171
Best+A | 0.1100\u2020 (+19.6%, P < 0.001) | 0.3673 (\u22122.93%, P = 0.3) | 0.5123 (\u22120.93%, P = 0.3)
Best+B | 0.1175\u2020 (+27.7%, P < 0.001) | 0.3518\u2020 (\u22127.03%, P = 0.004) | 0.5021 (\u22122.90%, P = 0.08)
", "type_str": "table", "html": null, "text": "Table showing effect of two passage extraction strategies A and B on system performance", "num": null }, "TABREF3": { "content": "
Run | Passage MAP | Aspect MAP | Document MAP
UIC SIGIR | 0.1823 | 0.3811 | 0.5391
UIC TREC | 0.1479 | 0.3492 | 0.5320
Best+B | 0.1175 | 0.3518 | 0.5021
TREC MEDIAN | 0.0345 | 0.1581 | 0.3083
", "type_str": "table", "html": null, "text": "Table showing performance of our best Passage MAP scoring run Best+B with the top performing TREC systems on the Genomics Track", "num": null } } } }