{ "paper_id": "H92-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:28:54.990420Z" }, "title": "INFORMATION RETRIEVAL USING ROBUST NATURAL LANGUAGE PROCESSING", "authors": [ { "first": "Tomek", "middle": [], "last": "Strzalkowski", "suffix": "", "affiliation": { "laboratory": "", "institution": "Courant Institute of Mathematical Sciences New York University", "location": { "addrLine": "715 Broadway, rm. 704", "postCode": "10003", "settlement": "New York", "region": "NY" } }, "email": "tomek@cs.nyu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We developed a fully automated Information Retrieval System which uses advanced natural language processing techniques to enhance the effectiveness of traditional keyword-based document retrieval. In early experiments with the standard CACM-3204 collection of abstracts, the augmented system has displayed capabilities that made it clearly superior to the purely statistical base system. 1 These include CACM-3204, MUC-3, and a selection of nearly 6,000 technical articles extracted from the Computer Library database (a Ziff Communications Inc. CD-ROM). 2 A complete description can be found in (Strzalkowski, 1991).", "pdf_parse": { "paper_id": "H92-1040", "_pdf_hash": "", "abstract": [ { "text": "We developed a fully automated Information Retrieval System which uses advanced natural language processing techniques to enhance the effectiveness of traditional keyword-based document retrieval. In early experiments with the standard CACM-3204 collection of abstracts, the augmented system has displayed capabilities that made it clearly superior to the purely statistical base system. 1 These include CACM-3204, MUC-3, and a selection of nearly 6,000 technical articles extracted from the Computer Library database (a Ziff Communications Inc. CD-ROM). 
2 A complete description can be found in (Strzalkowski, 1991).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "1. OVERALL DESIGN Our information retrieval system consists of a traditional statistical backbone (Harman and Candela, 1989) augmented with various natural language processing components that assist the system in database processing (stemming, indexing, word and phrase clustering, selectional restrictions) and in translating a user's information request into an effective query. This design is a careful compromise between purely statistical non-linguistic approaches and those requiring rather accomplished (and expensive) semantic analysis of data, often referred to as 'conceptual retrieval'. Conceptual retrieval systems, though quite effective, are not yet mature enough to be considered in serious information retrieval applications, the major problems being their extreme inefficiency and the need for manual encoding of domain knowledge (Mauldin, 1991).", "cite_spans": [ { "start": 98, "end": 124, "text": "(Harman and Candela, 1989)", "ref_id": "BIBREF0" }, { "start": 847, "end": 861, "text": "(Mauldin, 1991", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our system the database text is first processed with a fast syntactic parser. Subsequently certain types of phrases are extracted from the parse trees and used as compound indexing terms in addition to single-word terms. The extracted phrases are statistically analyzed as syntactic contexts in order to discover a variety of similarity links between smaller subphrases and words occurring in them. A further filtering process maps these similarity links onto semantic relations (generalization, specialization, synonymy, etc.) 
after which they are used to transform the user's request into a search query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The user's natural language request is also parsed, and all indexing terms occurring in it are identified. Next, certain highly ambiguous (usually single-word) terms are dropped, provided that they also occur as elements in some compound terms. For example, \"natural\" is deleted from a query already containing \"natural language\" because", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\"natural\" occurs in many unrelated contexts: \"natural number\", \"natural logarithm\", \"natural approach\", etc. At the same time, other terms may be added, namely those which are linked to some query term through admissible similarity relations. For example, \"fortran\" is added to a query containing the compound term \"program language\" via a specification link. After the final query is constructed, the database search follows, and a ranked list of documents is returned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It should be noted that all the processing steps, those performed by the backbone system and those performed by the natural language processing components, are fully automated, and no human intervention or manual encoding is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "TTP (Tagged Text Parser) is based on the Linguistic String Grammar developed by Sager (1981) . Written in Quintus Prolog, the parser currently encompasses more than 400 grammar productions. It produces regularized parse tree representations for each sentence that reflect the sentence's logical structure. 
The parser is equipped with a powerful skip-and-fit recovery mechanism that allows it to operate effectively in the face of ill-formed input or under severe time pressure. In recent experiments with approximately 6 million words of English texts, 1 the parser's speed averaged between 0.45 and 0.5 seconds per sentence, or up to 2600 words per minute, on a 21 MIPS SparcStation ELC. Some details of the parser are discussed below. 2 TTP is a full grammar parser, and initially, it attempts to generate a complete analysis for each sentence. However, unlike an ordinary parser, it has a built-in timer which regulates the amount of time allowed for parsing any one sentence. If a parse is not returned before the allotted time elapses, the parser enters the skip-and-fit mode in which it will try to \"fit\" the parse. While in the skip-and-fit mode, the parser will attempt to forcibly reduce incomplete constituents, possibly skipping portions of input in order to restart processing at the next unattempted constituent. In other words, the parser will favor reduction over backtracking while in the skip-and-fit mode. The result of this strategy is an approximate parse, partially fitted using top-down predictions. The fragments skipped in the first pass are not thrown out; instead, they are analyzed by a simple phrasal parser that looks for noun phrases and relative clauses and then attaches the recovered material to the main parse structure.", "cite_spans": [ { "start": 80, "end": 92, "text": "Sager (1981)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "FAST PARSING WITH TTP", "sec_num": "2." }, { "text": "As an illustration, consider the following sentence taken from the CACM-3204 corpus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FAST PARSING WITH TTP", "sec_num": "2." 
}, { "text": "The method is illustrated by the automatic construction of both recursive and iterative programs operating on natural numbers, lists, and trees; in order to construct a program satisfying certain specifications, a theorem induced by those specifications is proved, and the desired program is extracted from the proof.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FAST PARSING WITH TTP", "sec_num": "2." }, { "text": "The italicized fragment is likely to cause additional complications in parsing this lengthy string, and the parser may be better off ignoring this fragment altogether. To do so successfully, the parser must close the currently open constituent (i.e., reduce a program satisfying certain specifications to NP), and possibly a few of its parent constituents, removing corresponding productions from further consideration, until an appropriate production is reactivated. In this case, TTP may force the following reductions: SI --> to V NP; SA --> SI; S --> NP V NP SA, until the production S --> S and S is reached. Next, the parser skips input to find and, and resumes normal processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FAST PARSING WITH TTP", "sec_num": "2." }, { "text": "As may be expected, the skip-and-fit strategy will only be effective if the input skipping can be performed with a degree of determinism. This means that most of the lexical-level ambiguity must be removed from the input text prior to parsing. We achieve this using a stochastic part-of-speech tagger 3 to preprocess the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FAST PARSING WITH TTP", "sec_num": "2." }, { "text": "Word stemming has been an effective way of improving document recall since it reduces words to their common morphological root, thus allowing more successful matches. 
On the other hand, stemming tends to decrease retrieval precision if care is not taken to prevent situations where otherwise unrelated words are reduced to the same stem. In our system we replaced a traditional morphological stemmer with a conservative dictionary-assisted suffix trimmer. 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WORD SUFFIX TRIMMER", "sec_num": "3." }, { "text": "The suffix trimmer performs essentially two tasks: (1) it reduces inflected word forms to their root forms as specified in the dictionary, and (2) it converts nominalized verb forms (e.g., \"implementation\", \"storage\") to the root forms of corresponding verbs (i.e., \"implement\", \"store\"). This is accomplished by removing a standard suffix, e.g., \"stor+age\", replacing it with a standard root ending (\"+e\"), and checking the newly created word against the dictionary, i.e., we check whether the original root (\"storage\") is defined using the new root (\"store\"). This allows reducing \"diversion\" to \"diverse\" while preventing \"version\" from being replaced by \"verse\". Experiments with the CACM-3204 collection show an improvement in retrieval precision by 6% to 8% over the base system equipped with a standard morphological stemmer (the SMART stemmer).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WORD SUFFIX TRIMMER", "sec_num": "3." }, { "text": "Syntactic phrases extracted from TTP parse trees are head-modifier pairs: from simple word pairs to complex nested structures. The head in such a pair is a central element of a phrase (verb, main noun, etc.), while the modifier is one of the adjunct arguments of the head. 5 For example, the phrase fast algorithm for parsing context-free languages yields the following pairs: algorithm+fast, algorithm+parse, parse+language, language+context_free. 
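The pair extraction just described can be sketched in a few lines; the nested-tuple tree encoding below is a hypothetical stand-in for TTP's regularized parse trees, used only to show how head+modifier pairs fall out of a nested structure:

```python
# Minimal sketch of head+modifier pair extraction. Each node is a
# (head, [modifier subtrees]) tuple; real TTP output is richer than this.
def extract_pairs(node):
    head, modifiers = node
    pairs = []
    for mod in modifiers:
        pairs.append(head + '+' + mod[0])   # pair the head with each modifier head
        pairs.extend(extract_pairs(mod))    # recurse into nested modifiers
    return pairs

# 'fast algorithm for parsing context-free languages'
tree = ('algorithm', [('fast', []),
                      ('parse', [('language', [('context_free', [])])])])
assert extract_pairs(tree) == ['algorithm+fast', 'algorithm+parse',
                               'parse+language', 'language+context_free']
```

The assertion reproduces exactly the four pairs listed above for that phrase.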
The following types of pairs were considered: (1) a head noun and its left adjective or noun adjunct, (2) a head noun and the head of its right adjunct, (3) the main verb of a clause and the head of its object phrase, and (4) the head of the subject phrase and the main verb. These types of pairs account for most of the syntactic variants for relating two words (or simple phrases) into pairs carrying compatible semantic content. For example, the pair [retrieve,information] is extracted from any of the following fragments: information retrieval system; retrieval of information from databases; and information that can be retrieved by a user-controlled interactive search process. 6 An example is shown in the appendix. 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HEAD-MODIFIER STRUCTURES", "sec_num": "4." }, { "text": "Head-modifier pairs form compound terms used in database indexing. They also serve as occurrence contexts for smaller terms, including single-word terms. In order to determine whether such pairs signify any important association between terms, we calculate the value of the Informational Contribution (IC) function for each element in a pair. Higher values indicate stronger association, and the element having the largest value is considered semantically dominant. The IC function is a derivative of Fano's mutual information formula recently used by Church and Hanks (1990) to compute word co-occurrence patterns in a 44-million-word corpus of Associated Press news stories. They noted that while generally satisfactory, the mutual information formula often produces counterintuitive results for low-frequency data. This is particularly worrisome for relatively small IR collections since many important indexing terms would be eliminated from consideration. Therefore, following suggestions in Wilks et al. (1990) , we adopted a revised formula that displays a more stable behavior even on very low counts. 
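A minimal sketch of the revised IC score, assuming the form IC(x,[x,y]) = f_xy / (n_x + d_x - 1) reconstructed from the figure caption; the pair counts below are invented for illustration, not taken from CACM-3204:

```python
# Hedged sketch of the revised IC (Informational Contribution) score:
# f_xy = corpus frequency of the pair [x, y]; n_x = number of pairs in
# which x occurs at the same position; d_x = number of distinct words
# that x is paired with (the dispersion parameter).
from collections import Counter

def ic(x, y, pair_counts):
    f_xy = pair_counts.get((x, y), 0)
    n_x = sum(c for (h, m), c in pair_counts.items() if h == x)
    d_x = len({m for (h, m) in pair_counts if h == x})
    if n_x == 0:
        return 0.0
    return f_xy / (n_x + d_x - 1)

pairs = Counter({('algorithm', 'fast'): 4, ('algorithm', 'parse'): 2,
                 ('language', 'natural'): 9})
assert ic('algorithm', 'fast', pairs) > ic('algorithm', 'parse', pairs)
assert ic('algorithm', 'missing', pairs) == 0.0
```

Note the boundary behavior matches the caption: when x occurs only with y, f_xy = n_x and d_x = 1, so IC = 1.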
Selected examples generated from the CACM-3204 corpus are given in Table 2 at the end of the paper. IC values for terms become the basis for calculating term-to-term similarity coefficients. If two terms tend to be modified with a number of common modifiers and otherwise appear in few distinct contexts, we assign them a similarity coefficient, a real number between 0 and 1. The similarity is determined by comparing distribution characteristics for both terms within the corpus: how much information content do they carry, does their information contribution vary greatly across contexts, and are the common contexts in which these terms occur specific enough? In general we will credit high-content terms appearing in identical contexts, especially if these contexts are not too commonplace. 8 The relative similarity between two words x1 and x2 is obtained using the following formula (α is a large constant):", "cite_spans": [ { "start": 548, "end": 571, "text": "Church and Hanks (1990)", "ref_id": "BIBREF5" }, { "start": 995, "end": 1014, "text": "Wilks et al. (1990)", "ref_id": null } ], "ref_spans": [ { "start": 1193, "end": 1200, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "TERM CORRELATIONS FROM TEXT", "sec_num": "5." }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SIM(x1,x2) = log(α Σy simy(x1,x2))", "sec_num": null }, { "text": "The similarity function is further normalized with respect to SIM(x1,x1). 8 It would not be appropriate to predict similarity between language and logarithm on the basis of their co-occurrence with natural. It may be worth pointing out that the similarities are calculated using term co-occurrences in syntactic rather than in document-size contexts, the latter being the usual practice in non-linguistic clustering (e.g., Sparck Jones and Barber, 1971; Crouch, 1988; Lewis and Croft, 1990) . 
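The similarity computation above can be sketched as follows, assuming the IC scores have already been computed; the dictionaries, the value of ALPHA, and all scores below are illustrative assumptions, not system output:

```python
# Hedged sketch of the sim_y / SIM computation. ic_head[(x, y)] stands
# for IC(x,[x,y]) and ic_ctx[(x, y)] for IC(y,[x,y]); ALPHA plays the
# role of the large constant inside the log.
import math

ALPHA = 1000.0  # assumed value for illustration

def sim_y(ic_head, ic_ctx, x1, x2, y):
    # credit a shared context y only as much as its weaker member
    return (min(ic_head.get((x1, y), 0.0), ic_head.get((x2, y), 0.0)) *
            min(ic_ctx.get((x1, y), 0.0), ic_ctx.get((x2, y), 0.0)))

def sim(ic_head, ic_ctx, x1, x2, contexts):
    total = sum(sim_y(ic_head, ic_ctx, x1, x2, y) for y in contexts)
    return math.log(ALPHA * total) if total > 0 else 0.0

ic_head = {('fortran', 'program'): 0.5, ('algol', 'program'): 0.4}
ic_ctx = {('fortran', 'program'): 0.3, ('algol', 'program'): 0.6}
assert sim(ic_head, ic_ctx, 'fortran', 'algol', ['program']) > 0.0
assert sim(ic_head, ic_ctx, 'fortran', 'cobol', ['program']) == 0.0
```

The full system additionally normalizes SIM(x1,x2) by SIM(x1,x1), which this sketch omits.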
Although the two methods of term clustering may be considered mutually complementary in certain situations, we believe that more and stronger associations can be obtained through syntactic-context clustering, given a sufficient amount of data and a reasonably accurate syntactic parser. 9", "cite_spans": [ { "start": 195, "end": 205, "text": "SIM(x1,x1)", "ref_id": null }, { "start": 467, "end": 489, "text": "Lewis and Croft, 1990)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "simy(x1,x2) = MIN(IC(x1,[x1,y]), IC(x2,[x2,y])) * MIN(IC(y,[x1,y]), IC(y,[x2,y]))", "sec_num": null }, { "text": "Similarity relations are used to expand user queries with new terms, in an attempt to make the final search query more comprehensive (adding synonyms) and/or more pointed (adding specializations). 10 It follows that not all similarity relations will be equally useful in query expansion; for instance, complementary relations like the one between algol and fortran may actually harm the system's performance, since we may end up retrieving many irrelevant documents. Similarly, the effectiveness of a query containing fortran is likely to diminish if we add a similar but far more general term such as language. On the other hand, database search is likely to miss relevant documents if we overlook the fact that fortran is a programming language, or that interpolate is a specification of approximate. We noted that an average set of similarities generated from a text corpus contains about as many \"good\" relations (synonymy, specialization) as \"bad\" relations (antonymy, complementation, generalization), as seen from the query expansion viewpoint. Therefore any attempt to separate these two classes and to increase the proportion of \"good\" relations should result in improved retrieval. 
This has indeed been confirmed in our experiments, where a relatively crude filter has visibly increased retrieval precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QUERY EXPANSION", "sec_num": "6." }, { "text": "In order to create an appropriate filter, we expanded the IC function into a global specificity measure called the cumulative informational contribution function (ICW). ICW is calculated for each term across all contexts in which it occurs. The general philosophy here is that a more specific word/phrase would have a more limited use, i.e., would appear in fewer distinct contexts. ICW is similar to the standard inverted document frequency (idf) measure except that term frequency is measured over syntactic units rather than document-size units. 9 Non-syntactic contexts cross sentence boundaries with no fuss, which is helpful with short, succinct documents (such as CACM abstracts), but less so with longer texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QUERY EXPANSION", "sec_num": "6." }, { "text": "10 Query expansion (in the sense considered here, though not quite in the same way) has been used in information retrieval research before (e.g.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QUERY EXPANSION", "sec_num": "6." }, { "text": "Sparck Jones and Tait, 1984; Harman, 1988) , usually with mixed results. An alternative is to use term clusters to create new terms, \"metaterms\", and use them to index the database instead (e.g. Crouch, 1988; Lewis and Croft, 1990) . We found that the query expansion approach gives the system more flexibility, for instance, by making room for hypertext-style topic exploration via user feedback. Terms with higher ICW values are generally considered more specific, but the specificity comparison is only meaningful for terms which are already known to be similar. 
The new function is calculated according to the following formula: 12", "cite_spans": [ { "start": 7, "end": 28, "text": "Jones and Tait, 1984;", "ref_id": "BIBREF10" }, { "start": 29, "end": 42, "text": "Harman, 1988)", "ref_id": null }, { "start": 194, "end": 207, "text": "Crouch, 1988;", "ref_id": "BIBREF9" }, { "start": 208, "end": 230, "text": "Lewis and Croft, 1990)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "QUERY EXPANSION", "sec_num": "6." }, { "text": "where (with nw, dw > 0): ICL(w) = IC([w,_]) = nw / (dw(nw + dw - 1)), and analogously for ICR(w).", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 36, "text": "nw / (dw(nw + dw - 1))", "ref_id": null } ], "eq_spans": [], "section": "ICW(w) = ICL(w) * ICR(w)", "sec_num": null }, { "text": "For any two terms w 1 and w 2, and a constant δ > 1, if ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ICW(w) = ICL(w) * ICR(w)", "sec_num": null }, { "text": "The preliminary series of experiments with the CACM-3204 collection of computer science abstracts showed a consistent improvement in performance: the average precision increased from 32.8% to 37.1% (a 13% increase), while the normalized recall went from 74.3% to 84.5% (a 14% increase), in comparison with the statistics of the base system. This improvement is a combined effect of the new stemmer, compound terms, term selection in queries, and query expansion using filtered similarity relations. The choice of similarity relation filter has been found critical in improving retrieval precision through query expansion. It should also be pointed out that only about 1.5% of all 11 We believe that measuring term specificity over document-size contexts (e.g., Sparck Jones, 1972) may not be appropriate in this case. 
In particular, syntax-based contexts allow for processing texts without any internal document structure. 12 Slightly simplified here. 13 The filter was most effective at σ = 0.57. similarity relations originally generated from CACM-3204 were found admissible after filtering, contributing only 1.2 expansions per query on average. It is quite evident that significantly larger corpora are required to produce more dramatic results. 14 15 A detailed summary is given in Table 1 below.", "cite_spans": [ { "start": 948, "end": 950, "text": "13", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 1278, "end": 1286, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "SUMMARY OF RESULTS", "sec_num": "7." }, { "text": "These results, while modest by IR standards, are significant for another reason as well. They were obtained without any manual intervention into the database or queries, and without using any other information about the database except for the text of the documents (i.e., not even the hand-generated keyword fields enclosed with most documents were used). Lewis and Croft (1990) , and Croft et al. (1991) report results similar to ours, but they take advantage of Computer Reviews categories manually assigned to some documents. The purpose of this research is to explore the potential of automated NLP in dealing with large-scale IR problems, and not necessarily to obtain the best possible results on any particular data collection. One of our goals is to point out a feasible direction for integrating NLP into the traditional IR ", "cite_spans": [ { "start": 357, "end": 379, "text": "Lewis and Croft (1990)", "ref_id": "BIBREF4" }, { "start": 386, "end": 405, "text": "Croft et al. (1991)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "SUMMARY OF RESULTS", "sec_num": "7." 
}, { "text": "3 Courtesy of Bolt Beranek and Newman. 4 We use the Oxford Advanced Learner's Dictionary (OALD) MRD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "5 In the experiments reported here we extracted head-modifier word pairs only. The CACM collection is too small to warrant generation of larger compounds, because of their low frequencies. 6 To deal with nominal compounds we use frequency information about the pairs generated from the entire corpus to form preferences in ambiguous situations, such as natural language processing vs. dynamic information processing. 7 Note that working with the parsed text ensures a high degree of precision in capturing the meaningful phrases, which is especially evident when compared with the results usually obtained from either unprocessed or only partially processed text (Lewis and Croft, 1990).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Donna Harman of NIST for making her IR system available to us. We would also like to thank Ralph Weischedel and Marie Meteer of BBN for providing and assisting in the use of the part-of-speech tagger. KL Kwok has offered many helpful comments on an earlier draft of this paper. In addition, ACM has generously provided us with text data from the Computer Library database distributed by Ziff Communications Inc. This paper is based upon work supported by the Defense Advanced Research Projects Agency under Contract N00014-90-J-1851 from the Office of Naval Research, and the National Science Foundation under Grant IRI-89-02304.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACKNOWLEDGEMENTS", "sec_num": null }, { "text": "and Strzalkowski, 1991 Table 3 . 
Filtered word similarities (* indicates the more specific term).", "cite_spans": [ { "start": 4, "end": 22, "text": "Strzalkowski, 1991", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 23, "end": 30, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Retrieving Records from a Gigabyte of Text on a Minicomputer Using Statistical Ranking", "authors": [ { "first": "Donna", "middle": [], "last": "Harman", "suffix": "" }, { "first": "Gerald", "middle": [], "last": "Candela", "suffix": "" } ], "year": 1989, "venue": "Journal of the American Society for Information Science", "volume": "41", "issue": "8", "pages": "581--589", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harman, Donna and Gerald Candela. 1989. \"Retrieving Records from a Gigabyte of Text on a Minicomputer Using Statistical Ranking.\" Journal of the American Society for Information Science, 41(8), pp. 581-589.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Retrieval Performance in Ferret: A Conceptual Information Retrieval System", "authors": [ { "first": "Michael", "middle": [], "last": "Mauldin", "suffix": "" } ], "year": 1991, "venue": "Proceedings of ACM SIGIR-91", "volume": "", "issue": "", "pages": "347--355", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mauldin, Michael. 1991. \"Retrieval Performance in Ferret: A Conceptual Information Retrieval System.\" Proceedings of ACM SIGIR-91, pp. 347-355.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Natural Language Information Processing", "authors": [ { "first": "Naomi", "middle": [], "last": "Sager", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sager, Naomi. 1981. Natural Language Information Processing. 
Addison-Wesley.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "TTP: A Fast and Robust Parser for Natural Language", "authors": [ { "first": "Tomek", "middle": [], "last": "Strzalkowski", "suffix": "" } ], "year": 1991, "venue": "Proteus Project Memo #", "volume": "43", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strzalkowski, Tomek. 1991. \"TTP: A Fast and Robust Parser for Natural Language.\" Proteus Project Memo #43, Courant Institute of Mathematical Sciences, New York University.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Term Clustering of Syntactic Phrases", "authors": [ { "first": "David", "middle": [ "D" ], "last": "Lewis", "suffix": "" }, { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" } ], "year": 1990, "venue": "Proceedings of ACM SIGIR-90", "volume": "", "issue": "", "pages": "385--405", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lewis, David D. and W. Bruce Croft. 1990. \"Term Clustering of Syntactic Phrases.\" Proceedings of ACM SIGIR-90, pp. 385-405.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Word association norms, mutual information, and lexicography", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "1", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, Kenneth Ward and Hanks, Patrick. 1990. \"Word association norms, mutual information, and lexicography.\" Computational Linguistics, 16(1), MIT Press, pp. 
22-29.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Providing machine tractable dictionary tools", "authors": [ { "first": "Yorick", "middle": [], "last": "Wilks", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Fass", "suffix": "" }, { "first": "Cheng-Ming", "middle": [], "last": "Guo", "suffix": "" }, { "first": "James", "middle": [ "E" ], "last": "McDonald", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Plate", "suffix": "" }, { "first": "Brian", "middle": [ "M" ], "last": "Slator", "suffix": "" } ], "year": 1990, "venue": "Machine Translation", "volume": "5", "issue": "", "pages": "99--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilks, Yorick, Dan Fass, Cheng-Ming Guo, James E. McDonald, Tony Plate, and Brian M. Slator. 1990. \"Providing machine tractable dictionary tools.\" Machine Translation, 5, pp. 99-154.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "What makes automatic keyword classification effective?", "authors": [ { "first": "Sparck", "middle": [], "last": "Jones", "suffix": "" }, { "first": "K", "middle": [], "last": "", "suffix": "" }, { "first": "E", "middle": [ "O" ], "last": "Barber", "suffix": "" } ], "year": 1971, "venue": "Journal of the American Society for Information Science", "volume": "", "issue": "", "pages": "166--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sparck Jones, K. and E. O. Barber. 1971. \"What makes automatic keyword classification effective?\" Journal of the American Society for Information Science, May-June, pp. 166-175.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A cluster-based approach to thesaurus construction", "authors": [ { "first": "Carolyn", "middle": [ "J" ], "last": "Crouch", "suffix": "" } ], "year": 1988, "venue": "Proceedings of ACM SIGIR-88", "volume": "", "issue": "", "pages": "309--320", "other_ids": {}, "num": null, "urls": [], "raw_text": "Crouch, Carolyn J. 1988. \"A cluster-based approach to thesaurus construction.\" Proceedings of ACM SIGIR-88, pp. 
309-320.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic search term variant generation", "authors": [ { "first": "Sparck", "middle": [], "last": "Jones", "suffix": "" }, { "first": "K", "middle": [], "last": "", "suffix": "" }, { "first": "J", "middle": [ "I" ], "last": "Tait", "suffix": "" } ], "year": 1984, "venue": "Journal of Documentation", "volume": "40", "issue": "1", "pages": "50--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sparck Jones, K. and J. I. Tait. 1984. \"Automatic search term variant generation.\" Journal of Documentation, 40(1), pp. 50-66.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Towards interactive query expansion", "authors": [ { "first": "Donna", "middle": [], "last": "Harman", "suffix": "" } ], "year": 1988, "venue": "Proceedings of ACM SIGIR-88", "volume": "", "issue": "", "pages": "321--331", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harman, Donna. 1988. \"Towards interactive query expansion.\" Proceedings of ACM SIGIR-88, pp. 321-331.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Statistical interpretation of term specificity and its application in retrieval", "authors": [ { "first": "Sparck", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Karen", "middle": [], "last": "", "suffix": "" } ], "year": 1972, "venue": "Journal of Documentation", "volume": "28", "issue": "1", "pages": "11--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sparck Jones, Karen. 1972. \"Statistical interpretation of term specificity and its application in retrieval.\" Journal of Documentation, 28(1), pp. 
11-20.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Use of Phrases and Structured Queries in Information Retrieval", "authors": [ { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" }, { "first": "Howard", "middle": [ "R" ], "last": "Turtle", "suffix": "" }, { "first": "David", "middle": [ "D" ], "last": "Lewis", "suffix": "" } ], "year": 1991, "venue": "Proceedings of ACM SIGIR-91", "volume": "", "issue": "", "pages": "32--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Croft, W. Bruce, Howard R. Turtle, and David D. Lewis. 1991. \"The Use of Phrases and Structured Queries in Information Retrieval.\" Proceedings of ACM SIGIR-91, pp. 32-45.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Fast Text Processing for Information Retrieval", "authors": [ { "first": "Tomek", "middle": [], "last": "Strzalkowski", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Vauthey", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 4th DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "346--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strzalkowski, Tomek and Barbara Vauthey. 1991. \"Fast Text Processing for Information Retrieval.\" Proceedings of the 4th DARPA Speech and Natural Language Workshop, Morgan Kaufmann, pp. 346-351.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Natural Language Processing in Automated Information Retrieval", "authors": [ { "first": "Tomek", "middle": [], "last": "Strzalkowski", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Vauthey", "suffix": "" } ], "year": 1991, "venue": "Proteus Project Memo #", "volume": "42", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strzalkowski, Tomek and Barbara Vauthey. 1991. 
\"Natural Language Processing in Automated Information Retrieval.\" Proteus Project Memo #42, Courant Institute of Mathematical Sciences, New York University.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "This new formula IC(x,[x,y]) is based on (an estimate of) the conditional probability of seeing a word y to the right of the word x, modified with a dispersion parameter for x: IC(x,[x,y]) = f_{x,y} / (n_x + d_x - 1), where f_{x,y} is the frequency of [x,y] in the corpus, n_x is the number of pairs in which x occurs at the same position as in [x,y], and d_x is the dispersion parameter understood as the number of distinct words with which x is paired. When IC(x,[x,y]) = 0, x and y never occur together (i.e., f_{x,y} = 0); when IC(x,[x,y]) = 1, x occurs only with y (i.e.," }, "TABREF0": { "text": "Therefore interpolate can be used to specialize approximate, while language cannot be used to expand algol. Note that if δ is well chosen (we used δ = 10), then the above filter will also help to reject antonymous and complementary relations, such as SIM_norm(pl_i,cobol) = 0.685 with ICW(pl_i) = 0.0175 and ICW(cobol) = 0.0289. We continue working to develop more effective filters. Examples of filtered similarity relations obtained from the CACM-3204 corpus are given in Table 3.", "num": null, "type_str": "table", "html": null, "content": "
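A minimal sketch of the dispersion-modified IC score defined in the figure caption above, IC(x,[x,y]) = f_{x,y} / (n_x + d_x - 1). This is illustrative code written for this note, not the authors' implementation; the toy pair list is invented.

```python
# Sketch of the dispersion-modified Informational Contribution score.
# f_{x,y}: corpus frequency of the pair [x,y]; n_x: number of pair
# tokens with x in the left position; d_x: number of distinct words
# that appear to the right of x (the dispersion of x).
from collections import Counter

def ic_scores(pairs):
    """pairs: list of (x, y) syntactic pairs extracted from a corpus."""
    pair_freq = Counter(pairs)                    # f_{x,y}
    left_freq = Counter(x for x, _ in pairs)      # n_x
    dispersion = Counter()                        # d_x
    for x, _ in pair_freq:                        # one count per distinct pair
        dispersion[x] += 1
    return {
        (x, y): f / (left_freq[x] + dispersion[x] - 1)
        for (x, y), f in pair_freq.items()
    }

# Invented toy corpus of extracted pairs:
pairs = [("natural", "language")] * 3 + [("natural", "number")]
scores = ic_scores(pairs)
# The score approaches 1 when x occurs only with y, and would be 0
# if x and y never co-occurred.
```

Note the boundary behaviour matches the caption: if x pairs exclusively with y, then f_{x,y} = n_x and d_x = 1, so the score is exactly 1.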
ICW(w2) ≥ δ · ICW(w1), then w2 is considered more
specific than w1. In addition, if SIM_norm(w1,w2) = σ > θ,
where θ is an empirically established threshold, then w2 can
be added to the query containing term w1 with weight σ.
In the CACM-3204 collection:
ICW(algol) = 0.0020923
ICW(language) = 0.0000145
ICW(approximate) = 0.0000218
ICW(interpolate) = 0.0042410
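The specificity filter described above can be sketched as follows. This is hypothetical code: δ = 10 and the ICW values come from the text, while θ and the σ values are invented for illustration.

```python
# ICW values for CACM-3204 as reported in the text.
ICW = {
    "algol": 0.0020923, "language": 0.0000145,
    "approximate": 0.0000218, "interpolate": 0.0042410,
}

def can_specialize(w1, w2, sim, delta=10.0, theta=0.2):
    """Return the weight with which w2 may be added to a query
    containing w1, or None if the filter rejects the pair.
    delta = 10 is from the text; theta = 0.2 is an assumed threshold."""
    if ICW[w2] >= delta * ICW[w1] and sim > theta:
        return sim
    return None

# interpolate can specialize approximate; language cannot expand algol
# (sim = 0.3 is an invented similarity value).
assert can_specialize("approximate", "interpolate", sim=0.3) == 0.3
assert can_specialize("algol", "language", sim=0.3) is None
```

The filter is deliberately asymmetric: only the more specific (higher-ICW) term of a similar pair is allowed to expand a query, which also helps reject antonymous pairs such as pl_i/cobol.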
" }, "TABREF1": { "text": "Recall/precision statistics for CACM-3204. [14] K. L. Kwok (private communication) has suggested that the low percentage of admissible relations might be similar to the phenomenon of 'tight clusters', which, while meaningful, are so few that their impact is small. [15] A sufficiently large text corpus is 20 million words or more. This has been partially confirmed by experiments performed at the University of Massachusetts (B. Croft, private communication). [16] Grishman, Ralph and Tomek Strzalkowski. 1991. \"Information Retrieval and Natural Language Processing.\" Position paper at the workshop on Future Directions in Natural Language Processing in Information Retrieval, Chicago. TEXT: An algorithm to compute the gamma function and log gamma function of a complex variable is presented. The standard algorithm is modified in several respects to insure the continuity of the function value and to reduce accumulation of round-off errors. In addition to computation of function values, this algorithm includes an object-time estimation of round-off errors. Experimental data with regard to the effectiveness of this error control are presented. A Fortran program for the algorithm appears in the algorithms section of this issue.", "num": null, "type_str": "table", "html": null, "content": "
" } } } }