{ "paper_id": "X96-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:05:19.407224Z" }, "title": "NATURAL LANGUAGE INFORMATION RETRIEVAL: TIPSTER-2 FINAL REPORT", "authors": [ { "first": "Tomek", "middle": [], "last": "Strzalkowski", "suffix": "", "affiliation": { "laboratory": "", "institution": "GE Corporate Research & Development Schenectady", "location": { "postCode": "12301", "region": "NY" } }, "email": "strzalkowski@crd.ge.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We report on the joint GE/NYU natural language information retrieval project as related to the Tipster Phase 2 research conducted initially at NYU and subsequently at GE R&D Center and NYU. The evaluation results discussed here were obtained in connection with the 3rd and 4th Text Retrieval Conferences (TREC-3 and TREC-4). The main thrust of this project is to use natural language processing techniques to enhance the effectiveness of full-text document retrieval. During the course of the four TREC conferences, we have built a prototype IR system designed around a statistical full-text indexing and search backbone provided by NIST's Prise engine. The original Prise has been modified to allow handling of multi-word phrases, differential term weighting schemes, automatic query expansion, index partitioning and rank merging, as well as dealing with complex documents. Natural language processing is used to preprocess the documents in order to extract content-carrying terms, discover inter-term dependencies and build a conceptual hierarchy specific to the database domain, and to process users' natural language requests into effective search queries.", "pdf_parse": { "paper_id": "X96-1030", "_pdf_hash": "", "abstract": [ { "text": "We report on the joint GE/NYU natural language information retrieval project as related to the Tipster Phase 2 research conducted initially at NYU and subsequently at GE R&D Center and NYU. 
The evaluation results discussed here were obtained in connection with the 3rd and 4th Text Retrieval Conferences (TREC-3 and TREC-4). The main thrust of this project is to use natural language processing techniques to enhance the effectiveness of full-text document retrieval. During the course of the four TREC conferences, we have built a prototype IR system designed around a statistical full-text indexing and search backbone provided by NIST's Prise engine. The original Prise has been modified to allow handling of multi-word phrases, differential term weighting schemes, automatic query expansion, index partitioning and rank merging, as well as dealing with complex documents. Natural language processing is used to preprocess the documents in order to extract content-carrying terms, discover inter-term dependencies and build a conceptual hierarchy specific to the database domain, and to process users' natural language requests into effective search queries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The overall architecture of the system is essentially the same for both years, as our efforts were directed at optimizing the performance of all components. A notable exception is the new massive query expansion module used in routing experiments, which replaces a prototype extension used in the TREC-3 system. On the other hand, it has to be noted that the character and the level of difficulty of TREC queries have changed quite significantly since last year's evaluation. The new TREC-4 ad-hoc queries are far shorter, less focused, and they have the flavor of information requests (\"What is the prognosis of ...\") rather than of the search directives typical of earlier TRECs (\"The relevant document will contain ...\"). This makes building good search queries a more sensitive task than before. 
We thus decided to introduce only a minimum number of changes to our indexing and search processes, and even to roll back some of the TREC-3 extensions which dealt with longer and somewhat redundant queries. Overall, our system performed quite well, as our position with respect to the best systems has improved steadily since the beginning of TREC. We participated in both main evaluation categories: category A ad-hoc and routing, working with approx. 3.3 GBytes of text. We submitted 4 official runs in automatic ad-hoc, manual ad-hoc, and automatic routing (2), and were ranked 6th or 7th in each category (out of 38 participating teams). It should be noted that the most significant gain in performance seems to have occurred in precision near the top of the ranking, at 5, 10, 15 and 20 documents. Indeed, our unofficial manual runs performed after the TREC-4 conference show superior results in these categories, topping by a large margin the best manual scores of any system in the official evaluation.", "cite_spans": [ { "start": 1342, "end": 1345, "text": "(2)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In general, we can note a substantial improvement in performance when phrasal terms are used, especially in ad-hoc runs. Looking back at TREC-2 and TREC-3, one may observe that these improvements appear to be tied to the length and specificity of the query: the longer the query, the more improvement from linguistic processes. This can be seen by comparing the improvement over baseline for automatic ad-hoc runs (very short queries), for manual runs (longer queries), and for semi-interactive runs (yet longer queries). 
In addition, our TREC-3 results (with long and detailed queries) showed a 20-25% improvement in precision attributed to NLP, as compared to 10-16% in TREC-4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A typical (full-text) information retrieval (IR) task is to select documents from a database in response to a user's query, and rank these documents according to relevance. This has usually been accomplished using statistical methods (often coupled with manual encoding) that (a) select terms (words, phrases, and other units) from documents that are deemed to best represent their content, and (b) create an inverted index file (or files) that provides easy access to documents containing these terms. A subsequent search process will attempt to match preprocessed user queries against term-based representations of documents, in each case determining a degree of relevance between the two that depends upon the number and types of matching terms. Although many sophisticated search and matching methods are available, the crucial problem remains that of an adequate representation of content for both the documents and the queries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERVIEW", "sec_num": null }, { "text": "In a term-based representation, a document (as well as a query) is transformed into a collection of weighted terms, derived directly from the document text or indirectly through thesauri or domain maps. The representation is anchored on these terms, and thus their careful selection is critical. Since each unique term can be thought of as adding a new dimension to the representation, it is equally critical to weigh the terms properly against one another so that the document is placed at the correct position in the N-dimensional term space. Our goal here is to have documents on the same topic placed close together, while those on different topics are placed sufficiently apart. 
Unfortunately, we often do not know how to compute term weights. The statistical weighting formulas, based on term distribution within the database, such as tf.idf, are far from optimal, and the routinely made assumptions of term independence are false in most cases. The situation is even worse when single-word terms are intermixed with phrasal terms, as term independence becomes still harder to justify.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERVIEW", "sec_num": null }, { "text": "The simplest word-based representations of content, while relatively better understood, are usually inadequate, since single words are rarely specific enough for accurate discrimination and their grouping is often accidental. A better method is to identify groups of words that create meaningful phrases, especially if these phrases denote important concepts in the database domain. For example, joint venture is an important term in the Wall Street Journal (WSJ henceforth) database, while neither joint nor venture is important by itself. In the retrieval experiments with the training TREC database, we noticed that both joint and venture were dropped from the list of terms by the system because their idf (inverted document frequency) weights were too low. In large databases, such as TIPSTER, the use of phrasal terms is not just desirable; it becomes necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERVIEW", "sec_num": null }, { "text": "The challenge is to obtain \"semantic\" phrases, or \"concepts\", which would capture underlying semantic uniformity across various surface forms of expression. 
Syntactic structures are often reasonable indicators of content, certainly better than 'statistical phrases', where words are grouped solely on the basis of physical proximity (e.g., \"college junior\" is not the same as \"junior college\"). However, the creation of compound terms makes the term matching process more complex, since in addition to the usual problems of lexical meaning, one must deal with structure (e.g., \"college junior\" is the same as \"junior in college\"). In order to deal with structure, the parser's output needs to be \"normalized\" or \"regularized\" so that complex terms with the same or closely related meanings would indeed receive matching representations. One way to regularize syntactic structures is to transform them into operator-argument form, or at least head-modifier form, as will be further explained in this paper. In effect, therefore, we aim at obtaining a semantic representation. This result has been achieved to a certain extent in our work thus far.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERVIEW", "sec_num": null }, { "text": "Do we really need to parse? Our recent results indicate that some of the critical semantic dependencies can in fact be obtained without the intermediate step of syntactic analysis, directly from a lexical-level representation of text. We have applied our noun phrase disambiguation method directly to word sequences generated using part-of-speech information, and the results were most promising. At this time we have no data on how these results compare to those obtained via parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERVIEW", "sec_num": null }, { "text": "No matter how we eventually arrive at the compound terms, we hope they will let us capture the semantic content of a document more accurately. 
It is certainly true that compound terms such as South Africa or advanced document processing, when found in a document, give us a better idea about the content of such a document than isolated word matches do. What happens, however, if we do not find them in a document? This situation may arise for several reasons: (1) the term/concept is not there, (2) the concept is there but our system is unable to identify it, or (3) the concept is not explicitly there, but its presence can be inferred using general or domain-specific knowledge. This is certainly a serious problem, since we now attach more weight to concept matching than to isolated word matching, and missing a concept can reflect more dramatically on the system's recall. The inverse is also true: finding a concept where it really isn't present makes an irrelevant document more likely to be highly ranked than with a single-word-based representation. Thus, while the rewards may be greater, the risks are greater as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERVIEW", "sec_num": null }, { "text": "One way to deal with this problem is to allow the system to fall back on partial matches and single-word matches when concepts are not available, and to use query expansion techniques to supply missing terms. Unfortunately, thesaurus-based query expansion is usually quite ineffective, unless the subject domain is sufficiently narrow and the thesaurus sufficiently domain-specific. For example, the term natural language may be considered to subsume a term denoting a specific human language, e.g.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERVIEW", "sec_num": null }, { "text": "English. Therefore, a query containing the former may be expected to retrieve documents containing the latter. The same can be said about language and English, unless language is in fact a part of the compound term programming language, in which case the association language - Fortran is appropriate. 
This is a problem because (a) it is a standard practice to include both simple and compound terms in document representation, and (b) term associations have thus far been computed primarily at word level (including fixed phrases) and therefore care must be taken when such associations are used in term matching. This may prove particularly troublesome for systems that attempt term clustering in order to create \"meta-terms\" to be used in document representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERVIEW", "sec_num": null }, { "text": "In the remainder of this paper we discuss particulars of the present system and some of the observations made while processing TREC-4 data. While this description is meant to be self-contained, the reader may want to refer to previous TREC papers by this group for more information about the system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERVIEW", "sec_num": null }, { "text": "Our information retrieval system consists of a traditional statistical backbone (NIST's PRISE system [2] ) augmented with various natural language processing components that assist the system in database processing (stemming, indexing, word and phrase clustering, selectional restrictions), and translate a user's information request into an effective query. This design is a careful compromise between purely statistical non-linguistic approaches and those requiring rather accomplished (and expensive) semantic analysis of data, often referred to as 'conceptual retrieval'.", "cite_spans": [ { "start": 101, "end": 104, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "OVERALL DESIGN", "sec_num": null }, { "text": "In our system the database text is first processed with a fast syntactic parser. Subsequently certain types of phrases are extracted from the parse trees and used as compound indexing terms in addition to single-word terms. 
The extracted phrases are statistically analyzed as syntactic contexts in order to discover a variety of similarity links between smaller subphrases and words occurring in them. A further filtering process maps these similarity links onto semantic relations (generalization, specialization, synonymy, etc.), after which they are used to transform a user's request into a search query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERALL DESIGN", "sec_num": null }, { "text": "The user's natural language request is also parsed, and all indexing terms occurring in it are identified. Certain highly ambiguous, usually single-word terms may be dropped, provided that they also occur as elements in some compound terms. For example, \"natural\" is deleted from a query already containing \"natural language\" because \"natural\" occurs in many unrelated contexts: \"natural number\", \"natural logarithm\", \"natural approach\", etc. At the same time, other terms may be added, namely those which are linked to some query term through admissible similarity relations. For example, \"unlawful activity\" is added to a query (TREC topic 055) containing the compound term \"illegal activity\" via a synonymy link between \"illegal\" and \"unlawful\". After the final query is constructed, the database search follows, and a ranked list of documents is returned. In TREC-4, automatic query expansion was limited to routing runs, where we refined our version of massive expansion using relevance information with respect to the training database. Query expansion via an automatically generated domain map was not used in official ad-hoc runs. 
Full details of the TTP parser have been described in the TREC-1 report [8] , as well as in other works [6, 7] , [9, 10, 11, 12] .", "cite_spans": [ { "start": 1203, "end": 1206, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 1235, "end": 1238, "text": "[6,", "ref_id": "BIBREF5" }, { "start": 1239, "end": 1241, "text": "7]", "ref_id": "BIBREF6" }, { "start": 1244, "end": 1247, "text": "[9,", "ref_id": "BIBREF8" }, { "start": 1248, "end": 1251, "text": "10,", "ref_id": "BIBREF9" }, { "start": 1252, "end": 1255, "text": "11,", "ref_id": "BIBREF10" }, { "start": 1256, "end": 1259, "text": "12]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "OVERALL DESIGN", "sec_num": null }, { "text": "As in TREC-3, we used a randomized index splitting mechanism which creates not one but several balanced sub-indexes. These sub-indexes can be searched independently and the results can be merged meaningfully into a single ranking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OVERALL DESIGN", "sec_num": null }, { "text": "Syntactic phrases extracted from TTP parse trees are head-modifier pairs. The head in such a pair is a central element of a phrase (main verb, main noun, etc.), while the modifier is one of the adjunct arguments of the head. In the TREC experiments reported here, we extracted head-modifier word and fixed-phrase pairs only. The following types of pairs are considered: (1) a head noun and its left adjective or noun adjunct, (2) a head noun and the head of its right adjunct, (3) the main verb of a clause and the head of its object phrase, and (4) the head of the subject phrase and the main verb. These types of pairs account for most of the syntactic variants [5] for relating two words (or simple phrases) into pairs carrying compatible semantic content. 
For example, the pair retrieve+information will be extracted from any of the following fragments: information retrieval system; retrieval of information from databases; and information that can be retrieved by a user-controlled interactive search process.", "cite_spans": [ { "start": 663, "end": 666, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "LINGUISTIC TERMS", "sec_num": null }, { "text": "The notorious ambiguity of nominal compounds remains a serious difficulty in obtaining head-modifier pairs of the highest accuracy. In order to cope with this, the pair extractor looks at the distribution statistics of the compound terms to decide whether the association between any two words (nouns and adjectives) in a noun phrase is both syntactically valid and semantically significant. For example, we may accept language+natural and processing+language from natural language processing as correct; however, case+trading would make a mediocre term when extracted from insider trading case. On the other hand, it is important to extract trading+insider to be able to match documents containing phrases insider trading sanctions act or insider trading activity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LINGUISTIC TERMS", "sec_num": null }, { "text": "Proper names, of people, places, events, organizations, etc., are often critical in deciding the relevance of a document. Since names are traditionally capitalized in English text, spotting them is relatively easy, most of the time. It is important that all names recognized in text, including those made up of multiple words, e.g., South Africa or Social Security, are represented as tokens, and not broken into single words, e.g., South and Africa, which may turn out to be different names altogether by themselves. On the other hand, we need to make sure that variants of the same name are indeed recognized as such, e.g., U.S. 
President Bill Clinton and President Clinton, with a degree of confidence. One simple method, which we use in our system, is to represent a compound name dually: as a compound token and as a set of single-word terms. This way, if a corresponding full name variant cannot be found in a document, its component-word matches can still add to the document score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LINGUISTIC TERMS", "sec_num": null }, { "text": "Finding a proper term weighting scheme is critical in term-based retrieval, since the rank of a document is determined by the weights of the terms it shares with the query. One popular term weighting scheme, known as tf.idf, weights terms proportionately to their inverted document frequency scores and to their in-document frequencies (tf). The in-document frequency factor is usually normalized by the document length; that is, it is more significant for a term to occur 5 times in a short 20-word document than to occur 10 times in a 1000-word article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TERM WEIGHTING ISSUES", "sec_num": null }, { "text": "In our post-TREC-2 experiments we changed the weighting scheme so that the phrases (but not the names, which we did not distinguish in TREC-2) were more heavily weighted by their idf scores, while the in-document frequency scores were replaced by logarithms multiplied by sufficiently large constants. In addition, the top N highest-idf matching terms (simple or compound) were counted more toward the document score than the remaining terms. 
This 'hotspot' retrieval option is discussed in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TERM WEIGHTING ISSUES", "sec_num": null }, { "text": "Schematically, these new weights for phrasal and highly specific terms are obtained using the following formula, while weights for most of the single-word terms remain unchanged:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TERM WEIGHTING ISSUES", "sec_num": null }, { "text": "weight(T_i) = ( C1 * log(tf) + C2 * α(N,i) ) * idf", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TERM WEIGHTING ISSUES", "sec_num": null }, { "text": "In the above, α(N,i) is 1 when term i is among the top N highest-idf matching terms, and 0 otherwise.

Routing runs (total number of relevant docs over all queries: 6576):

Run          base     xbase    nyuge1   nyuge2
RelRet       3641     4967     5078     5112
%chg         -        +36.0    +39.0    +40.0
Avg prec     0.1697   0.2715   0.2838   0.2913
%chg         -        +60.0    +67.0    +72.0
Prec at 5    0.3760   0.5480   0.5560   0.5680
Prec at 10   0.3680   0.4840   0.5000   0.5220
Prec at 15   0.3427   0.4680   0.4880   0.4933

Ad-hoc runs (total number of relevant docs over all queries: 6501):

Run          abase    aloc     mbase    mloc     iloc
RelRet       2458     2498     3410     3545     3723
%chg         -        +1.6     +39.0    +44.0    +51.0
Avg prec     0.1394   0.1592   0.2082   0.2424   0.2767
%chg         -        +14.0    +49.0    +74.0    +98.0
Prec at 5    0.3755   0.4571   0.5020   0.5592   0.6694
Prec at 10   0.3408   0.3939   0.4510   0.4816   0.6082
Prec at 15   0.3088   0.3687   0.4082   0.4490   0.5633" } } } }