{ "paper_id": "J10-3010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:49:07.725173Z" }, "title": "Query Rewriting Using Monolingual Statistical Machine Translation", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "", "affiliation": {}, "email": "riezler@gmail.com." }, { "first": "Yi", "middle": [], "last": "Google", "suffix": "", "affiliation": {}, "email": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "", "affiliation": {}, "email": "yliu@google.com" }, { "first": "", "middle": [], "last": "Google", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Long queries often suffer from low recall in Web search due to conjunctive term matching. The chances of matching words in relevant documents can be increased by rewriting query terms into new terms with similar statistical properties. We present a comparison of approaches that deploy user query logs to learn rewrites of query terms into terms from the document space. We show that the best results are achieved by adopting the perspective of bridging the \"lexical chasm\" between queries and documents by translating from a source language of user queries into a target language of Web documents. We train a state-of-the-art statistical machine translation model on query-snippet pairs from user query logs, and extract expansion terms from the query rewrites produced by the monolingual translation system. We show in an extrinsic evaluation in a real-world Web search task that the combination of a query-to-snippet translation model with a query language model achieves improved contextual query expansion compared to a state-ofthe-art query expansion model that is trained on the same query log data.", "pdf_parse": { "paper_id": "J10-3010", "_pdf_hash": "", "abstract": [ { "text": "Long queries often suffer from low recall in Web search due to conjunctive term matching. The chances of matching words in relevant documents can be increased by rewriting query terms into new terms with similar statistical properties. We present a comparison of approaches that deploy user query logs to learn rewrites of query terms into terms from the document space. We show that the best results are achieved by adopting the perspective of bridging the \"lexical chasm\" between queries and documents by translating from a source language of user queries into a target language of Web documents. We train a state-of-the-art statistical machine translation model on query-snippet pairs from user query logs, and extract expansion terms from the query rewrites produced by the monolingual translation system. We show in an extrinsic evaluation in a real-world Web search task that the combination of a query-to-snippet translation model with a query language model achieves improved contextual query expansion compared to a state-ofthe-art query expansion model that is trained on the same query log data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Information Retrieval (IR) applications have been notoriously resistant to improvement attempts by Natural Language Processing (NLP). With a few exceptions for specialized tasks, 1 the contribution of part-of-speech taggers, syntactic parsers, or ontologies of nouns or verbs has been inconclusive. In this article, instead of deploying NLP tools or ontologies, we apply NLP ideas to IR problems. 
In particular, we take a viewpoint that looks at the problem of the word mismatch between queries and documents in Web search as a problem of translating from a source language of user queries into a target language of Web documents. We concentrate on the task of query expansion by query rewriting. This task consists of adding expansion terms with similar statistical properties to the original query in order to increase the chances of matching words in relevant documents, and also to decrease the ambiguity of the query that is inherent in natural language. We focus on a comparison of models that learn to generate query rewrites from large amounts of user query logs, and use query expansion in Web search for an extrinsic evaluation of the produced rewrites. The experimental query expansion setup used in this article is simple and direct: For a given set of randomly selected queries, n-best rewrites are produced. From the changes introduced by the rewrites, expansion terms are extracted and added as alternative terms to the query, leaving the ranking function untouched. Figure 1 shows expansions of the queries herbs for chronic constipation and herbs for mexican cooking using AND and OR operators. Conjunctive matching of all query terms is the default, and indicated by the AND operator. Expansion terms are added using the OR operator. The example in Figure 1 illustrates the key requirements to successful query expansion, namely, to find appropriate expansions in the context of the query. While remedies, medicine, or supplement are appropriate expansions in the context of the first query, they would cause a severe query drift if used in the second query. In the context of the second query, spices is an appropriate expansion for herbs, whereas this expansion would again not work for the first query.", "cite_spans": [], "ref_spans": [ { "start": 1482, "end": 1490, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1767, "end": 1775, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The central idea behind our approach is to combine the orthogonal information sources of the translation model and the language model to expand query terms in context. The translation model proposes expansion candidates, and the query language model performs a selection in the context of the surrounding query terms. Thus, in combination, the incessant problems of term ambiguity and query drift can be solved. One of the goals of this article is to show that existing SMT technology is readily applicable to this task. We apply SMT to large parallel data of queries on the source side, and snippets of clicked search results on the target side. Snippets are short text fragments that represent the parts of the result pages that are most relevant to the queries, for example, in terms of query term matches. Although the use of snippets instead of the full documents makes our approach efficient, it introduces noise because text fragments are used instead of full sentences. However, we show that state-of-theart statistical machine translation (SMT) technology is in fact robust and flexible enough to capture the peculiarities of the language pair of user queries and result snippets. We evaluate our system in a comparative, extrinsic evaluation in a real-world Web search task. We compare our approach to the expansion system of Cui et al. 
(2002) that is trained on the same user logs data and has been shown to produce significant improvements over the local feedback technique of Xu and Croft (1996) in a standard evaluation on TREC data. Our extrinsic evaluation is done by embedding the expansion systems into a real-world search engine, and comparing the two systems based on the search results that are triggered by the respective query expansions. Our results show that the combination of translation and language model of a state-of-the-art SMT model produces high-quality rewrites and outperforms the expansion model of Cui et al. (2002) .", "cite_spans": [ { "start": 1336, "end": 1353, "text": "Cui et al. (2002)", "ref_id": "BIBREF10" }, { "start": 1489, "end": 1508, "text": "Xu and Croft (1996)", "ref_id": "BIBREF22" }, { "start": 1936, "end": 1953, "text": "Cui et al. (2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In the following, we will discuss related work (Section 2) and quickly sketch Cui et al. (2002) 's approach (Section 3). Then we will recapitulate the essentials of state-of-the-art SMT and describe how to adapt an SMT system to the query expansion task (Section 4). Results of the extrinsic experimental evaluation are presented in Section 5. The presented results are based on earlier results presented in Riezler, Liu, and Vasserman (2008) , and extended by deeper analyses and further experiments.", "cite_spans": [ { "start": 78, "end": 95, "text": "Cui et al. (2002)", "ref_id": "BIBREF10" }, { "start": 408, "end": 442, "text": "Riezler, Liu, and Vasserman (2008)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Standard query expansion techniques such as local feedback, or pseudo-relevance feedback, extract expansion terms from the topmost documents retrieved in an initial retrieval round (Xu and Croft 1996) . The local feedback approach is costly and can lead to query drift caused by irrelevant results in the initial retrieval round. Most importantly, though, local feedback models do not learn from data, in contrast to the approaches described in this article.", "cite_spans": [ { "start": 181, "end": 200, "text": "(Xu and Croft 1996)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Recent research in the IR community has increasingly focused on deploying user query logs for query reformulations (Huang, Chien, and Oyang 2003; Fonseca et al. 2005; Jones et al. 2006) , query clustering (Beeferman and Berger 2000; Wen, Nie, and Zhang 2002; Baeza-Yates and Tiberi 2007) , or query similarity (Raghavan and Sever 1995; Fitzpatrick and Dent 1997; Sahami and Heilman 2006) . The advantage of these approaches is that user feedback is readily available in user query logs and can efficiently be precomputed. Similarly to this recent work, our approach uses data from user query logs, but as input to a monolingual SMT model for learning query rewrites.", "cite_spans": [ { "start": 115, "end": 145, "text": "(Huang, Chien, and Oyang 2003;", "ref_id": null }, { "start": 146, "end": 166, "text": "Fonseca et al. 2005;", "ref_id": null }, { "start": 167, "end": 185, "text": "Jones et al. 
2006)", "ref_id": null }, { "start": 205, "end": 232, "text": "(Beeferman and Berger 2000;", "ref_id": "BIBREF2" }, { "start": 233, "end": 258, "text": "Wen, Nie, and Zhang 2002;", "ref_id": "BIBREF21" }, { "start": 259, "end": 287, "text": "Baeza-Yates and Tiberi 2007)", "ref_id": "BIBREF0" }, { "start": 336, "end": 362, "text": "Fitzpatrick and Dent 1997;", "ref_id": null }, { "start": 363, "end": 387, "text": "Sahami and Heilman 2006)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "The SMT viewpoint has been introduced to the field of IR by Berger and Lafferty (1999) and Berger et al. (2000) , who proposed to bridge the \"lexical chasm\" by a retrieval model based on IBM Model 1 (Brown et al. 1993 ). Since then, ranking models based on monolingual SMT have seen various applications, especially in areas like Question Answering where a large lexical gap between questions and answers has to be bridged (Berger et al. 2000; Echihabi and Marcu 2003; Soricut and Brill 2006; Riezler et al. 2007; Surdeanu, Ciaramita, and Zaragoza 2008; Xue, Jeon, and Croft 2008) . Whereas most applications of SMT ideas to IR problems used translation system scores for (re)ranking purposes, only a few approaches use SMT to generate actual query rewrites (Riezler, Liu, and Vasserman 2008) . Similarly to Riezler, Liu, and Vasserman (2008) , we use SMT to produce actual rewrites rather than for (re)ranking, and evaluate the rewrites in a query expansion task that leaves the ranking model of the search engine untouched.", "cite_spans": [ { "start": 60, "end": 86, "text": "Berger and Lafferty (1999)", "ref_id": "BIBREF4" }, { "start": 91, "end": 111, "text": "Berger et al. (2000)", "ref_id": "BIBREF5" }, { "start": 199, "end": 217, "text": "(Brown et al. 1993", "ref_id": null }, { "start": 423, "end": 443, "text": "(Berger et al. 2000;", "ref_id": "BIBREF5" }, { "start": 444, "end": 468, "text": "Echihabi and Marcu 2003;", "ref_id": "BIBREF11" }, { "start": 469, "end": 492, "text": "Soricut and Brill 2006;", "ref_id": "BIBREF19" }, { "start": 493, "end": 513, "text": "Riezler et al. 2007;", "ref_id": "BIBREF15" }, { "start": 514, "end": 553, "text": "Surdeanu, Ciaramita, and Zaragoza 2008;", "ref_id": "BIBREF20" }, { "start": 554, "end": 580, "text": "Xue, Jeon, and Croft 2008)", "ref_id": "BIBREF23" }, { "start": 758, "end": 792, "text": "(Riezler, Liu, and Vasserman 2008)", "ref_id": "BIBREF14" }, { "start": 808, "end": 842, "text": "Riezler, Liu, and Vasserman (2008)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Lastly, monolingual SMT has been established in the NLP community as a useful expedient for paraphrasing, that is, the task of reformulating phrases or sentences into semantically similar strings (Quirk, Brockett, and Dolan 2004; Bannard and Callison-Burch 2005) . Although the use of the SMT in paraphrasing goes beyond pure ranking to actual rewriting, SMT-based paraphrasing has to our knowledge not yet been applied to IR tasks.", "cite_spans": [ { "start": 230, "end": 262, "text": "Bannard and Callison-Burch 2005)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "The query expansion model of Cui et al. (2002) is based on the principle that if queries containing one term often lead to the selection of documents containing another term, then a strong relationship between the two terms can be assumed. 
Query terms and document terms are linked via sessions in which users click on documents in the retrieval result for the query. Cui et al. define a session as follows:", "cite_spans": [ { "start": 29, "end": 46, "text": "Cui et al. (2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Query Expansion by Query-Document Term Correlations", "sec_num": "3." }, { "text": "session := <query text> [clicked document]*", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Expansion by Query-Document Term Correlations", "sec_num": "3." }, { "text": "According to this definition, a link is established if at least one user clicks on a document in the retrieval results for a query. Because query logs contain sessions from different users, an aggregation of clicks over sessions will reflect the preferences of multiple users. Cui et al. (2002) compute the following probability distribution of document words w_d given query words w_q from counts over clicked documents D aggregated over sessions:", "cite_spans": [ { "start": 277, "end": 294, "text": "Cui et al. (2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Query Expansion by Query-Document Term Correlations", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w_d|w_q) = \\sum_D P(w_d|D) P(D|w_q)", "eq_num": "(1)" } ], "section": "Query Expansion by Query-Document Term Correlations", "sec_num": "3." }, { "text": "The first term in Equation (1) is a normalized tfidf weight of the document term in the clicked document, and the second term is the relative co-occurrence of the clicked document and query term. Because Equation (1) calculates expansion probabilities for each term separately, Cui et al. (2002) introduce the following cohesion formula that respects the whole query Q by aggregating the expansion probabilities for each query term:", "cite_spans": [ { "start": 271, "end": 288, "text": "Cui et al. (2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Query Expansion by Query-Document Term Correlations", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "CoWeight_Q(w_d) = \\ln ( \\sum_{w_q \\in Q} P(w_d|w_q) + 1 )", "eq_num": "(2)" } ], "section": "Query Expansion by Query-Document Term Correlations", "sec_num": "3." }, { "text": "In contrast to local feedback techniques (Xu and Croft 1996) , Cui et al. (2002) 's algorithm allows us to precompute term correlations off-line by collecting counts from query logs. This reliance on pure frequency counting is both a blessing and a curse: On the one hand it allows for efficient non-iterative estimation, but on the other hand it makes the implicit assumption that data sparsity will be overcome by counting from huge data sets. The only attempt at smoothing that is made in this approach is shifting the burden to words in the query context, using Equation (2), when Equation (1) assigns zero probability to unseen pairs.
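As an illustration of Equations (1) and (2), the following sketch computes the correlation-based cohesion weight for one candidate expansion term; the click statistics are hypothetical toy values, not counts from the actual query logs:

```python
import math

# Hypothetical toy statistics standing in for the query-log counts:
# p_wd_given_D plays the role of the normalized tfidf weight P(w_d|D),
# p_D_given_wq the relative co-occurrence P(D|w_q) of clicked document D
# with query term w_q, aggregated over sessions.
p_wd_given_D = {("remedies", "doc1"): 0.4, ("treatment", "doc1"): 0.3,
                ("spices", "doc2"): 0.5}
p_D_given_wq = {("doc1", "constipation"): 0.7, ("doc2", "cooking"): 0.6}
docs = ["doc1", "doc2"]

def p_wd_given_wq(w_d, w_q):
    # Equation (1): marginalize over clicked documents D.
    return sum(p_wd_given_D.get((w_d, d), 0.0) * p_D_given_wq.get((d, w_q), 0.0)
               for d in docs)

def coweight(query_terms, w_d):
    # Equation (2): cohesion weight of candidate term w_d for the whole query Q.
    return math.log(sum(p_wd_given_wq(w_d, w_q) for w_q in query_terms) + 1.0)

print(coweight(["herbs", "chronic", "constipation"], "remedies"))  # ~0.25
```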
(2002)", "ref_id": "BIBREF10" }, { "start": 738, "end": 757, "text": "Xu and Croft (1996)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Query Expansion by Query-Document Term Correlations", "sec_num": "3." }, { "text": "The job of a translation system is defined in Och and Ney (2004) as finding the English string\u00ea that is a translation of a foreign string f using a linear combination of feature functions h m (e, f) and weights \u03bb m as follows:", "cite_spans": [ { "start": 46, "end": 64, "text": "Och and Ney (2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Linear Models for SMT", "sec_num": "4.1" }, { "text": "e = arg max e M m=1 \u03bb m h m (e, f)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Models for SMT", "sec_num": "4.1" }, { "text": "As is now standard in SMT, several complex features such as lexical translation models and phrase translation models, trained in source-target and target-source directions, are combined with language models and simple features such as phrase and word counts. In the linear model formulation, SMT can be thought of as a general tool for computing string similarities or for string rewriting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Models for SMT", "sec_num": "4.1" }, { "text": "The relationship of translation model and alignment model for source language string f = f J 1 and target string e = e I 1 is via a hidden variable describing an alignment mapping from source position j to target position a j :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "4.2" }, { "text": "P( f J 1 |e I 1 ) = a J 1 P( f J 1 , a J 1 |e I 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "4.2" }, { "text": "The alignment a J 1 contains so-called null-word alignments a j = 0 that align source words to the empty word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "4.2" }, { "text": "In our approach, \"sentence aligned\" parallel training data are prepared by pairing user queries with snippets of search results clicked for the respective queries. The translation models used are based on a sequence of word alignment models, whereas in our case three Model-1 iterations and three HMM iterations were performed. Another important adjustment in our approach is the setting of the null-word alignment probability to 0.9 in order to account for the difference in sentence length between queries and snippets. This setting improves alignment precision by filtering out noisy alignments and instead concentrating on alignments with high support in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "4.2" }, { "text": "Statistical estimation of alignment models is done by maximum-likelihood estimation of sentence-aligned strings {(f s , e s ) : s = 1, . . . , S}. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "4.2" }, { "text": "Statistical estimation of alignment models is done by maximum-likelihood estimation of sentence-aligned strings {(f_s, e_s) : s = 1, ..., S}. Because each sentence pair is linked by a hidden alignment variable a = a_1^J, the optimal \\hat{\\theta} is found using unlabeled-data log-likelihood estimation techniques such as the EM algorithm:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Extraction", "sec_num": "4.3" }, { "text": "\\hat{\\theta} = \\arg\\max_{\\theta} \\prod_{s=1}^{S} \\sum_{a} p_{\\theta}(f_s, a|e_s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Extraction", "sec_num": "4.3" }, { "text": "The (Viterbi) alignment \\hat{a}_1^J that has the highest probability under a model is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Extraction", "sec_num": "4.3" }, { "text": "\\hat{a}_1^J = \\arg\\max_{a_1^J} p_{\\hat{\\theta}}(f_1^J, a_1^J|e_1^I)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Extraction", "sec_num": "4.3" }, { "text": "Because a source-target alignment does not allow a source word to be aligned with two or more target words, source-target and target-source alignments can be combined via various heuristics to improve both recall and precision of alignments. In our application, it is crucial to remove noise in the alignments of queries to snippets. In order to achieve this, we symmetrize Viterbi alignments for source-target and target-source directions by intersection only. That is, given two Viterbi alignments A_1 = {(a_j, j) | a_j > 0} and A_2 = {(i, b_i) | b_i > 0}, the alignments in the intersection are defined as A = A_1 \u2229 A_2. Phrases are extracted as larger blocks of aligned words from the alignments in the intersection, as described in Och and Ney (2004) .", "cite_spans": [ { "start": 738, "end": 756, "text": "Och and Ney (2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Phrase Extraction", "sec_num": "4.3" }, { "text": "Language modeling in our approach deploys an n-gram language model that assigns the following probability to a string w_1^L of words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4.4" }, { "text": "P(w_1^L) = \\prod_{i=1}^{L} P(w_i|w_1^{i-1}) \\approx \\prod_{i=1}^{L} P(w_i|w_{i-n+1}^{i-1})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4.4" }, { "text": "Estimation of n-gram probabilities is done by counting relative frequencies of n-grams in a corpus of user queries. Remedies for sparse data problems are achieved by various smoothing techniques, as described in Brants et al. (2007) . The most important departure of our approach from standard SMT is the use of a language model trained on queries. Although this approach may seem counterintuitive from the standpoint of the noisy-channel model for SMT (Brown et al. 1993) , it fits perfectly into the linear model. Whereas in the first view a query language model would be interpreted as a language model on the source language, in the linear model directionality of translation is not essential. Furthermore, the ultimate task of a query language model in our approach is to select appropriate phrase translations in the context of the original query for query expansion. This is achieved perfectly by an SMT model that assigns the identity translation as most probable translation to each phrase. Descending the n-best list of translations, in effect the language model picks alternative non-identity translations for a phrase in the context of identity translations of the other phrases.
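The following sketch illustrates this selection effect with a toy bigram query language model; the probabilities and candidate rewrites are invented stand-ins, not values from the trained model:

```python
import math

# Invented bigram probabilities for a toy query language model.
BIGRAM = {("<s>", "herbs"): 0.10, ("<s>", "spices"): 0.08, ("<s>", "remedies"): 0.02,
          ("herbs", "for"): 0.20, ("spices", "for"): 0.15, ("remedies", "for"): 0.10,
          ("for", "mexican"): 0.05, ("mexican", "cooking"): 0.30}

def lm_logprob(tokens, floor=1e-6):
    tokens = ["<s>"] + tokens
    return sum(math.log(BIGRAM.get(bigram, floor))
               for bigram in zip(tokens, tokens[1:]))

# Candidate rewrites proposed by the translation model for one query.
candidates = ["herbs for mexican cooking",     # identity translation, ranked first
              "spices for mexican cooking",    # contextually appropriate expansion
              "remedies for mexican cooking"]  # inappropriate in this query context

for c in sorted(candidates, key=lambda s: lm_logprob(s.split()), reverse=True):
    print(round(lm_logprob(c.split()), 2), c)
```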
(2007)", "ref_id": "BIBREF6" }, { "start": 453, "end": 472, "text": "(Brown et al. 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4.4" }, { "text": "Another advantage of using identity translations and word reordering in our approach is the fact that, by preferring identity translations or word reorderings over non-identity translations of source phrases, the SMT model can effectively abstain from generating any expansion terms. This will happen if none of the candidate phrase translations fits with high enough probability in the context of the whole query, as assessed by the language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "4.4" }, { "text": "The training data for the translation model and the correlation-based model consist of pairs of queries and snippets for clicked results taken from query logs. Representing documents by snippets makes it possible to create a parallel corpus that contains data of roughly the same \"sentence\" length. Furthermore, this makes iterative training feasible. Queries and snippets are linked via clicks on result pages, where a parallel sentence pair is introduced for each query and each snippet of its clicked results. This yields a data set of 3 billion query-snippet pairs from which a phrase-table of 700 million query-snippet phrase translations is extracted. A collection of data statistics for the training data is shown in Table 1 . The language model used in our experiment is a trigram language model trained on English queries in user logs. n-grams were cut off at a minimum frequency of 4. Data statistics for resulting unique n-grams are shown in Table 2 . ", "cite_spans": [], "ref_spans": [ { "start": 724, "end": 731, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 953, "end": 960, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "Statistics of unique n-grams in language model. 1-grams 2-grams 3-grams 9 million 1.5 billion 5 billion", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "The setup for our extrinsic evaluation deploys a real-world search engine, google.com, for a comparison of expansions from the SMT-based system, the correlation-based system, and the correlation-based system using the language model as additional filter. All expansion systems are trained on the same set of parallel training data. SMT modules such as the language model and the translation models in source-target and targetsource directions are combined in a uniform manner in order to give the SMT and correlation-based models the same initial conditions. The expansion terms used in our experiments were extracted as follows: Firstly, a set of 150,000 randomly extracted 3+ word queries was rewritten by each of the systems. For each system, expansion terms were extracted from the 5-best rewrites, and stored in a table that maps source phrases to target phrases in the context of the full queries. For example, Table 3 shows unique 5-best translations of the SMT system for the queries herbs for chronic constipation and herbs for mexican cooking. Phrases that are newly introduced in the translations are highlighted in boldface. These phrases are extracted for expansion and stored in a table that maps source phrases to target phrases in the context of the query from which they were extracted. 
When applying the expansion table to the same 150,000 queries that were input to the translation, expansion phrases are included in the search query via an OR-operation. An example search query that uses the SMT-based expansions from Table 3 is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 917, "end": 924, "text": "Table 3", "ref_id": null }, { "start": 1332, "end": 1339, "text": "Table 3", "ref_id": null }, { "start": 965, "end": 973, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Query Expansion Setup", "sec_num": "5.2" }, { "text": "Table 3: Unique 5-best phrase-level translations of queries herbs for chronic constipation and herbs for mexican cooking. Terms extracted for expansion are highlighted in boldface.
(herbs , herbs) (for , for) (chronic , chronic) (constipation , constipation)
(herbs , herb) (for , for) (chronic , chronic) (constipation , constipation)
(herbs , remedies) (for , for) (chronic , chronic) (constipation , constipation)
(herbs , medicine) (for , for) (chronic , chronic) (constipation , constipation)
(herbs , supplements) (for , for) (chronic , chronic) (constipation , constipation)
(herbs , herbs) (for , for) (mexican , mexican) (cooking , cooking)
(herbs , herbs) (for , for) (cooking , cooking) (mexican , mexican)
(herbs , herbs) (for , for) (mexican , mexican) (cooking , food)
(mexican , mexican) (herbs , herbs) (for , for) (cooking , cooking)
(herbs , spices) (for , for) (mexican , mexican) (cooking , cooking)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Expansion Setup", "sec_num": "5.2" }, { "text": "In order to evaluate Cui et al. (2002) 's correlation-based system in this setup, we required the system to assign expansion terms to particular query terms. The best results were achieved by using a linear interpolation of scores in Equation (2) and Equation (1). Equation (1) thus introduces a preference for a particular query term to the whole-query score calculated by Equation (2). Our reimplementation uses unigram and bigram phrases in queries and expansions. Furthermore, we use Okapi BM25 instead of tfidf in the calculation of Equation (1) (see Robertson, Walker, and Hancock-Beaulieu 1998) .", "cite_spans": [ { "start": 21, "end": 38, "text": "Cui et al. (2002)", "ref_id": "BIBREF10" }, { "start": 549, "end": 594, "text": "Robertson, Walker, and Hancock-Beaulieu 1998)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Query Expansion Setup", "sec_num": "5.2" }, { "text": "In addition to SMT and correlation-based expansion, we evaluate a system that uses the query language model to rescore the rewrites produced by the correlation-based model. The intended effect is to filter correlation-based expansions by a more effective context model than the cohesion model proposed by Cui et al. (2002) .", "cite_spans": [ { "start": 305, "end": 322, "text": "Cui et al. (2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Query Expansion Setup", "sec_num": "5.2" }, { "text": "Because expansions from all experimental systems are done on top of the same underlying search engine, we can abstract away from interactions with the underlying system.
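For concreteness, the way extracted expansion terms enter the conjunctive search query (cf. Figure 1) can be sketched as follows; the AND/OR notation is schematic and the helper function is hypothetical, not the engine's actual query language:

```python
def expand_query(query, expansions):
    """Render a conjunctive query with OR-groups for expanded terms.

    expansions maps an original query term to the alternative terms extracted
    for it in the context of this particular query.
    """
    parts = []
    for term in query.split():
        alts = expansions.get(term, [])
        parts.append("(" + " OR ".join([term] + alts) + ")" if alts else term)
    return " AND ".join(parts)

print(expand_query("herbs for chronic constipation",
                   {"herbs": ["remedies", "medicine", "supplements"]}))
# (herbs OR remedies OR medicine OR supplements) AND for AND chronic AND constipation
# In the context of "herbs for mexican cooking" the appropriate alternative would
# be "spices" instead, which is why the expansion table is keyed by the full query.
```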
Rewrite scores or translation probabilities were only used to create n-best lists for the respective systems; the ranking function of the underlying search engine was left untouched.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Expansion Setup", "sec_num": "5.2" }, { "text": "The evaluation was performed by three independent raters. The raters were presented with queries and 10-best search results from two systems, anonymized, and presented randomly on left or right sides. The raters' task was to evaluate the results on a 7-point Likert scale, defined as: \u22121.5: much worse; \u22121.0: worse; \u22120.5: slightly worse; 0: about the same; 0.5: slightly better; 1.0: better; 1.5: much better. Table 4 shows evaluation results for all pairings of the three expansion systems. For each pairwise comparison, a set of 200 queries that has non-empty, different result lists for both systems is randomly selected from the basic set of 150,000 queries. The mean item score (averaged over queries and raters) for the experiment that compares the correlation-based model with language model filtering (corr+lm) against the correlation-based model (corr) shows a clear win for the experimental system. An experiment that compares SMT-based expansion (SMT) against correlation-based expansions (corr) results in a clear preference for the SMT model. An experiment that compares the SMT-based expansions (SMT) against the correlation-based expansions filtered by the language model (corr+lm) shows a smaller, but still statistically significant, preference for the SMT model. Statistical significance of result differences has been computed with a paired t-test (Cohen 1995) , yielding statistical significance at the 95% level for the first two columns in Table 4 , and statistical significance at the 90% level for the last column in Table 4 . Examples for SMT-based and correlation-based expansions are given in Table 5 (5-best and 5-worst expansions from the SMT system and the corr system with mean item score). The first five examples show the five biggest wins in terms of mean item score for the SMT system over the correlation-based system. The second set of examples shows the five biggest losses of the SMT system compared to the correlation-based system. On inspection of the first set, we see that SMT-based expansions such as henry viii restaurant portland, maine, or ladybug birthday ideas, or top ten restaurants, vancouver, achieve a change in retrieval results that does not result in a query drift, but rather in improved retrieval results. The first and fifth results are wins for the SMT system because of nonsensical expansions by the baseline correlation-based system. A closer inspection of the second set of examples shows that the SMT-based expansion terms are all clearly related to the source terms, but not synonymous. In the first example, shutdown is replaced by reboot or restart, which causes a demotion of the top result that matches the query exactly. In the second example, passport is replaced by the related term visa in the SMT-based expansion. The third example is a loss for SMT-based expansion because of a replacement of the specific term debian by the more general term linux. The correlation-based expansions how many tv 30 rock in the fourth example, and lampasas county sheriff home in the fifth example, directly hit the title of relevant Web pages, while the SMT-based expansion terms do not improve retrieval results.
However, even from these negative examples it becomes apparent that the SMT-based expansion terms are clearly related to the query terms, and for a majority of cases this has a positive effect. In contrast, the terms introduced by the correlation-based system are either only vaguely related or noise.", "cite_spans": [ { "start": 1360, "end": 1372, "text": "(Cohen 1995)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 403, "end": 410, "text": "Table 4", "ref_id": "TABREF1" }, { "start": 1455, "end": 1462, "text": "Table 4", "ref_id": "TABREF1" }, { "start": 1534, "end": 1541, "text": "Table 4", "ref_id": "TABREF1" }, { "start": 1613, "end": 1620, "text": "Table 5", "ref_id": null }, { "start": 2927, "end": 2934, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "5.3" }, { "text": "Similar results are shown in Table 6 , where the five best and five worst examples for the comparison of the SMT model with the corr+lm model are listed. The wins for the SMT system are achieved by synonymous or closely related terms (make -build, create; layouts -backgrounds; contractor -contractors) or terms that properly disambiguate ambiguous query terms: For example, the term vet in the query dr. tim hammond, vet is expanded by the appropriate term veterinarian in the SMT-based expansion, whereas the correlation-based expansion to vets does not match the query context. The losses of the SMT-based system are due to terms that are only marginally related. Furthermore, the expansions of the correlation-based model are greatly improved by language model filtering. This can be seen more clearly in Table 7 , which shows the five best and worst results from the comparison of correlation-based models with and without language model filtering. Here the wins by the filtered model are due to filtering nonsensical expansions or too general expansions by the unfiltered correlation-based model rather than promoting new useful expansions.", "cite_spans": [ { "start": 232, "end": 300, "text": "(make -build, create; layouts -backgrounds; contractor -contractors)", "ref_id": null } ], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 6", "ref_id": "TABREF3" }, { "start": 807, "end": 814, "text": "Table 7", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "5.3" }, { "text": "We attribute the experimental result of a significant preference for SMT-based expansions over correlation-based expansions to the fruitful combination of translation model and language model provided by the SMT system. The SMT approach can be viewed as a combined system that proposes already reasonable candidate expansions via the translation model, and filters them by the language model. We may find a certain amount of nonsensical expansion candidates at the phrase translation level of the SMT system. However, a comparison with unfiltered correlation-based expansions shows that the candidate pool of phrase translations of the SMT model is of higher quality, yielding overall better results after language model filtering. This can be seen in Table 8 , which shows the most probable phrase translations that are applicable to the queries herbs for chronic constipation and herbs for mexican cooking. The phrase tables include identity translations and closely related terms as most probable translations for nearly every phrase. However, they also clearly include noisy and unrelated terms.
Thus an extraction of expansion terms from the phrase table alone would not allow the choice of the appropriate term for the given query context. This can be attained by combining the phrase translations with a language model: As shown in Table 3 , the 5-best translations of the full queries attain a proper disambiguation of the senses of herbs by replacing the term with remedies, medicine, and supplements for the first query, and with spices for the second query. Table 9 shows the top three correlation-based expansion terms assigned to unigrams and bigrams in the queries herbs for chronic constipation and herbs for mexican cooking. Expansion terms are chosen by overall highest weight and shown in boldface. Relevant expansion terms such as treatment or recipes that would disambiguate the meaning of herbs are in fact in the candidate list; however, the cohesion score promotes general terms such as interpret or com as best whole-query expansions. Although language model filtering greatly improves the quality of correlation-based expansions, overall the combination of phrase translations and language model produces better results than the combination of correlation-based expansions and language model. This is confirmed by the pairwise comparison of the SMT and corr+lm systems shown in Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 750, "end": 757, "text": "Table 8", "ref_id": "TABREF5" }, { "start": 1336, "end": 1343, "text": "Table 3", "ref_id": null }, { "start": 1566, "end": 1573, "text": "Table 9", "ref_id": "TABREF6" }, { "start": 2399, "end": 2406, "text": "Table 4", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "5.3" }, { "text": "We presented a view of the term mismatch problem between queries and Web documents as a problem of translating from a source language of user queries to a target language of Web documents. We showed that a state-of-the-art SMT model can be applied to parallel data of user queries and snippets for clicked Web documents, and showed improvements over state-of-the-art probabilistic query expansion. Our experimental evaluation showed firstly that state-of-the-art SMT is robust and flexible enough to capture the peculiarities of query-snippet translation, thus questioning the need for special-purpose models to control noisy translations as suggested by Lee et al. (2008) . Furthermore, we showed that the combination of translation model and language model significantly outperforms the combination of correlation-based model and language model. We chose to take advantage of our access to the google.com search engine to evaluate the query rewrite systems by query expansion embedded in a real-world search task. Although this conforms with recent appeals for more extrinsic evaluations (Belz 2009) , it decreases the reproducibility of the evaluation experiment.", "cite_spans": [ { "start": 655, "end": 672, "text": "Lee et al. (2008)", "ref_id": null }, { "start": 1086, "end": 1097, "text": "(Belz 2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "In future work, we hope to apply SMT-based rewriting to other rewriting tasks such as query suggestions.
Also, we hope that our successful application of SMT to query expansion might serve as an example and perhaps open the doors for new applications and extrinsic evaluations of related NLP approaches such as paraphrasing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." } ], "back_matter": [ { "text": "On the reuse of past optimal queries. In Proceedings of the 18th Annual International", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Extracting semantic relations from query logs", "authors": [ { "first": "Ricardo", "middle": [], "last": "Baeza-Yates", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Tiberi", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 13th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'07)", "volume": "", "issue": "", "pages": "76--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baeza-Yates, Ricardo and Alessandro Tiberi. 2007. Extracting semantic relations from query logs. In Proceedings of the 13th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'07), San Jose, CA, pages 76-85.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Paraphrasing with bilingual parallel corpora", "authors": [ { "first": "Colin", "middle": [], "last": "Bannard", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", "volume": "", "issue": "", "pages": "597--604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bannard, Colin and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), Ann Arbor, MI, pages 597-604.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Agglomerative clustering of a search engine query log", "authors": [ { "first": "Doug", "middle": [], "last": "Beeferman", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Berger", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'00)", "volume": "", "issue": "", "pages": "407--416", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beeferman, Doug and Adam Berger. 2000. Agglomerative clustering of a search engine query log. In Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'00), Boston, MA, pages 407-416.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "That's nice ... what can you do with it? Computational Linguistics", "authors": [ { "first": "Anja", "middle": [], "last": "Belz", "suffix": "" } ], "year": 2009, "venue": "", "volume": "35", "issue": "", "pages": "111--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Belz, Anja. 2009. That's nice ... what can you do with it? 
Computational Linguistics, 35(1):111-118.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Information retrieval as statistical translation", "authors": [ { "first": "Adam", "middle": [], "last": "Berger", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 22nd ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'99)", "volume": "", "issue": "", "pages": "222--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berger, Adam and John Lafferty. 1999. Information retrieval as statistical translation. In Proceedings of the 22nd ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'99), Berkeley, CA, pages 222-229.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bridging the lexical chasm: Statistical approaches to answer-finding", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" }, { "first": "David", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Dayne", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "Vibhu", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 23rd ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'00)", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berger, Adam L., Rich Caruana, David Cohn, Dayne Freitag, and Vibhu Mittal. 2000. Bridging the lexical chasm: Statistical approaches to answer-finding. In Proceedings of the 23rd ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'00), Athens, Greece, 192-199.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Large language models in machine translation", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Ashok", "middle": [ "C" ], "last": "Popat", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Franz", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'07)", "volume": "", "issue": "", "pages": "858--867", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brants, Thorsten, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'07), Prague Czech Republic, pages 858-867.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. 
Computational Linguistics, 19(2):263-311.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Empirical Methods for Artificial Intelligence", "authors": [ { "first": "Paul", "middle": [ "R" ], "last": "Cohen", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, Paul R. 1995. Empirical Methods for Artificial Intelligence. The MIT Press, Cambridge, MA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Probabilistic query expansion using query logs", "authors": [ { "first": "Hang", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Ji-Rong", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Jian-Yun", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Wei-Ying", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 11th International World Wide Web conference (WWW'02)", "volume": "", "issue": "", "pages": "325--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cui, Hang, Ji-Rong Wen, Jian-Yun Nie, and Wei-Ying Ma. 2002. Probabilistic query expansion using query logs. In Proceedings of the 11th International World Wide Web conference (WWW'02), Honolulu, HI, pages 325-332.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A noisy-channel approach to question answering", "authors": [ { "first": "Abdessamad", "middle": [], "last": "Echihabi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL'03)", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Echihabi, Abdessamad and Daniel Marcu. 2003. A noisy-channel approach to question answering. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL'03), Sapporo, Japan, pages 16-23.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'95)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "344--350", "other_ids": {}, "num": null, "urls": [], "raw_text": "ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'95), Seattle, WA, pages 344-350.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Translating queries into snippets for improved query expansion", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Vasserman", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics (COLING'08)", "volume": "", "issue": "", "pages": "737--744", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riezler, Stefan, Yi Liu, and Alexander Vasserman. 2008. Translating queries into snippets for improved query expansion. 
In Proceedings of the 22nd International Conference on Computational Linguistics (COLING'08), Manchester, England, pages 737-744.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Statistical machine translation for query expansion in answer retrieval", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Vasserman", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Tsochantaridis", "suffix": "" }, { "first": "Vibhu", "middle": [], "last": "Mittal", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL'07)", "volume": "1", "issue": "", "pages": "464--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riezler, Stefan, Alexander Vasserman, Ioannis Tsochantaridis, Vibhu Mittal, and Yi Liu. 2007. Statistical machine translation for query expansion in answer retrieval. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL'07), Prague Czech Republic, Vol. 1, pages 464-471.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Okapi at TREC-7", "authors": [ { "first": "Stephen", "middle": [ "E" ], "last": "Robertson", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Micheline", "middle": [], "last": "Hancock-Beaulieu", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Seventh Text REtrieval Conference (TREC-7)", "volume": "", "issue": "", "pages": "253--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robertson, Stephen E., Steve Walker, and Micheline Hancock-Beaulieu. 1998. Okapi at TREC-7. In Proceedings of the Seventh Text REtrieval Conference (TREC-7), Gaithersburg, MD, pages 253-264.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "NLP found helpful (at least for one text categorization task)", "authors": [ { "first": "Carl", "middle": [], "last": "Sable", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" }, { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP'02)", "volume": "", "issue": "", "pages": "172--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sable, Carl, Kathleen McKeown, and Kenneth W. Church. 2002. NLP found helpful (at least for one text categorization task). In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP'02), Philadelphia, PA, pages 172-179.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A Web-based kernel function for measuring the similarity of short text snippets", "authors": [ { "first": "Mehran", "middle": [], "last": "Sahami", "suffix": "" }, { "first": "Timothy", "middle": [ "D" ], "last": "Heilman", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 15th International World Wide Web conference (WWW'06)", "volume": "", "issue": "", "pages": "377--386", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sahami, Mehran and Timothy D. Heilman. 2006. A Web-based kernel function for measuring the similarity of short text snippets. 
In Proceedings of the 15th International World Wide Web conference (WWW'06), Edinburgh, Scotland, pages 377-386.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Automatic question answering using the Web: Beyond the factoid", "authors": [ { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 2006, "venue": "Journal of Information Retrieval -Special Issue on Web Information Retrieval", "volume": "9", "issue": "", "pages": "191--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soricut, Radu and Eric Brill. 2006. Automatic question answering using the Web: Beyond the factoid. Journal of Information Retrieval -Special Issue on Web Information Retrieval, 9:191-206.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning to rank answers on large online QA collections", "authors": [ { "first": "M", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "M", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "H", "middle": [], "last": "Zaragoza", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL'08)", "volume": "", "issue": "", "pages": "719--727", "other_ids": {}, "num": null, "urls": [], "raw_text": "Surdeanu, M., M. Ciaramita, and H. Zaragoza. 2008. Learning to rank answers on large online QA collections. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL'08), Columbus, OH, pages 719-727.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Query clustering using user logs", "authors": [ { "first": "Ji-Rong", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Jian-Yun", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Hong-Jiang", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2002, "venue": "ACM Transactions on Information Systems", "volume": "20", "issue": "1", "pages": "59--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wen, Ji-Rong, Jian-Yun Nie, and Hong-Jiang Zhang. 2002. Query clustering using user logs. ACM Transactions on Information Systems, 20(1):59-81.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Query expansion using local and global document analysis", "authors": [ { "first": "Jinxi", "middle": [], "last": "Xu", "suffix": "" }, { "first": "W. Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'07)", "volume": "", "issue": "", "pages": "4--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, Jinxi and W. Bruce Croft. 1996. Query expansion using local and global document analysis. 
In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'07), Zurich, Switzerland, pages 4-11.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Retrieval models for question and answer archives", "authors": [ { "first": "Xiaobing", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Jiwoon", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'08)", "volume": "", "issue": "", "pages": "475--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xue, Xiaobing, Jiwoon Jeon, and Bruce Croft. 2008. Retrieval models for question and answer archives. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'08), Singapore, pages 475-482.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Search queries herbs for chronic constipation and herbs for mexican cooking integrating expansion terms into OR-nodes in conjunctive matching.", "type_str": "figure", "num": null }, "TABREF0": { "text": "Statistics of query-snippet training data.", "html": null, "content": "
 | query-snippet pairs | query words | snippet words
tokens | 3 billion | 8 billion | 25 billion
avg. length | - | 2.6 | 8.3
", "num": null, "type_str": "table" }, "TABREF1": { "text": "Comparison of query expansion systems on the Web search task with respect to a 7-point Likert scale.", "html": null, "content": "
experiment | corr+lm | SMT | SMT
baseline | corr | corr | corr+lm
mean item score | 0.264 \u00b1 0.095 | 0.254 \u00b1 0.09125 | 0.093 \u00b1 0.0850
", "num": null, "type_str": "table" }, "TABREF3": { "text": "-best and 5-worst expansions from SMT system and corr+lm system with mean item score.", "html": null, "content": "
query | SMT expansions | corr+lm expansions | score
how to make bombs | make -build, create | make -book | 1.5
dominion power va | - | dominion -virginia | 1.3
purple myspace layouts | layouts -backgrounds | purple -free, myspace -free | 1.167
dr. tim hammond, vet | vet -veterinarian, veterinary, hospital | vet -vets | 1.167
tci general contractor | contractor -contractors | - | 1.167
health effects of drinking too much tea | tea -coffee | - | \u22121.5
tomahawk wis bike rally | - | wis -wisconsin | \u22121.0
apprentice tv show | - | tv -com | \u22121.0
super nes roms | roms -emulator | nes -nintendo | \u22121.0
family guy clips hitler | family -genealogy | clips -video | \u22121.0
", "num": null, "type_str": "table" }, "TABREF4": { "text": "-best and 5-worst expansions from corr system and corr+lm system with mean item score.", "html": null, "content": "
query | corr+lm expansions | corr expansions | score
outer cape health services | - | cape -home; health -home; services -home | 1.5
Henry VII Menu Portland, Maine | - | menu -england; portland -six | 1.5
easing to relieve gallbladder pain | gallbladder -gallstone | gallbladder -disease, gallstones, gallstone | 1.333
guardian angel picture | - | picture -lyrics | 1.333
view full episodes of naruto | episodes -watch | naruto -tv | 1.333
iditarod 2007 schedule | iditarod 2007 -race | - | \u22121.5
40 inches plus | inches plus -review | inches -calculator | \u22121.333
Lovell sisters review | lovell sisters -website | - | \u22121.333
smartparts ion Review | smartparts ion -reviews | review -pbreview | \u22121.167
canon eos rebel xt slr + epinion from inspecting | epinion -com | - | \u22121.167
", "num": null, "type_str": "table" }, "TABREF5": { "text": "Phrase translations for source strings herbs for chronic constipation and herbs for mexican cooking.", "html": null, "content": "
herbs | herbs, herbal, medicinal, spices, supplements, remedies
herbs for | herbs for, herbs, herbs and, with herbs
herbs for chronic | herbs for chronic, and herbs for chronic, herbs for
for chronic | for chronic, chronic, of chronic
for chronic constipation | for chronic constipation, chronic constipation, for constipation
chronic | chronic, acute, patients, treatment
chronic constipation | chronic constipation, of chronic constipation, with chronic constipation
constipation | constipation, bowel, common, symptoms
for mexican | for mexican, mexican, the mexican, of mexican
for mexican cooking | mexican food, mexican food and, mexican glossary
mexican | mexican, mexico, the mexican
mexican cooking | mexican cooking, mexican food, mexican, cooking
cooking | cooking, culinary, recipes, cook, food, recipe
", "num": null, "type_str": "table" }, "TABREF6": { "text": "Correlation-based expansions for queries herbs for chronic constipation and herbs for mexican cooking.", "html": null, "content": "
query terms | n-best expansions
herbs | com, treatment, encyclopedia
chronic | interpret, treating, com
constipation | interpret, treating, com
herbs for | medicinal, support, women
for chronic | com, gold, encyclopedia
chronic constipation | interpret, treating
herbs | cooks, recipes, com
mexican | recipes, com, cooks
cooking | cooks, recipes, com
herbs for | medicinal, women, support
for mexican | cooks, com, allrecipes
", "num": null, "type_str": "table" } } } }