|
{ |
|
"paper_id": "N07-1028", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:47:37.365061Z" |
|
}, |
|
"title": "A Case for Shorter Queries, and Helping Users Create Them", |
|
"authors": [ |
|
{ |
|
"first": "Giridhar", |
|
"middle": [], |
|
"last": "Kumaran", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Massachusetts Amherst Amherst", |
|
"location": { |
|
"postCode": "01003", |
|
"region": "MA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "giridhar@cs.umass.edu" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Massachusetts Amherst Amherst", |
|
"location": { |
|
"postCode": "01003", |
|
"region": "MA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "allan@cs.umass.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Information retrieval systems are frequently required to handle long queries. Simply using all terms in the query or relying on the underlying retrieval model to appropriately weight terms often leads to ineffective retrieval. We show that rewriting the query to a version that comprises a small subset of appropriate terms from the original query greatly improves effectiveness. Targeting a demonstrated potential improvement of almost 50% on some difficult TREC queries and their associated collections, we develop a suite of automatic techniques to rewrite queries and study their characteristics. We show that the shortcomings of automatic methods can be ameliorated by some simple user interaction, and report results that are on average 25% better than the baseline.", |
|
"pdf_parse": { |
|
"paper_id": "N07-1028", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Information retrieval systems are frequently required to handle long queries. Simply using all terms in the query or relying on the underlying retrieval model to appropriately weight terms often leads to ineffective retrieval. We show that rewriting the query to a version that comprises a small subset of appropriate terms from the original query greatly improves effectiveness. Targeting a demonstrated potential improvement of almost 50% on some difficult TREC queries and their associated collections, we develop a suite of automatic techniques to rewrite queries and study their characteristics. We show that the shortcomings of automatic methods can be ameliorated by some simple user interaction, and report results that are on average 25% better than the baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Query expansion has long been a focus of information retrieval research. Given an arbitrary short query, the goal was to find and include additional related and suitably-weighted terms to the original query to produce a more effective version. In this paper we focus on a complementary problem -query re-writing. Given a long query we explore whether there is utility in modifying it to a more concise version such that the original information need is still expressed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Y!Q beta 1 search engine allows users to select large portions of text from documents and issue them as queries. The search engine is designed to encourage users to submit long queries such as this example from the web site \"I need to know the gas mileage for my Audi A8 2004 model\". The motivation for encouraging this type of querying is that longer queries would provide more information in the form of context (Kraft et al., 2006) , and this additional information could be leveraged to provide a better search experience. However, handling such long queries is a challenge. The use of all the terms from the user's input can rapidly narrow down the set of matching documents, especially if a boolean retrieval model is adopted. While one would expect the underlying retrieval model to appropriately assign weights to different terms in the query and return only relevant content, it is widely acknowledged that models fail due to a variety of reasons (Harman and Buckley, 2004) , and are not suited to tackle every possible query.", |
|
"cite_spans": [ |
|
{ |
|
"start": 418, |
|
"end": 438, |
|
"text": "(Kraft et al., 2006)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 960, |
|
"end": 986, |
|
"text": "(Harman and Buckley, 2004)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recently, there has been great interest in personalized search (Teevan et al., 2005) , where the query is modified based on a user's profile. The profile usually consists of documents previously viewed, web sites recently visited, e-mail correspondence and so on. Common procedures for using this large amount of information usually involve creating huge query vectors with some sort of term-weighting mechanism to favor different portions of the profile.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 84, |
|
"text": "(Teevan et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The queries used in the TREC ad-hoc tracks consist of title, description and narrative sections, of progressively increasing length. The title, of length ranging from a single term to four terms is considered a concise query, while the description is considered a longer version of the title expressing the same information need. Almost all research on the TREC ad-hoc retrieval track reports results using only the title portion as the query, and a combination of the title and description as a separate query. Most reported results show that the latter is more effective than the former, though in the case of some hard collections the opposite is true. However, as we shall show later, there is tremendous scope for improvement. Formulating a shorter query from the description can lead to significant improvements in performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the light of the above, we believe there is great utility in creating query-rewriting mechanisms for handling long queries. This paper is organized in the following way. We start with some examples and explore ways by which we can create concise high-quality reformulations of long queries in Section 2. We describe our baseline system in Section 3 and motivate our investigations with experiments in Section 4. Since automatic methods have shortfalls, we present a procedure in Section 5 to involve users in selecting a good shorter query from a small selection of alternatives. We report and discuss the results of this approach in Section 6. Related work is presented in Section 7. We wrap up with conclusions and future directions in Section 8.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Consider the following query:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Define Argentine and British international relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "When this query was issued to a search engine, the average precision (AP, Section 3) of the results was 0.424. When we selected subsets of terms (subqueries) from the query, and ran them as distinct queries, the performance was as shown in Table 1 . It can be observed that there are seven different ways of re-writing the original query to attain better performance. The best query, also among the shortest, did not have a natural-language flavor to it. It however had an effectiveness almost 50% more than the original query. This immense potential for improvement by query re-writing is the motivation for this paper. Table 1 : The results of using all possible subsets (excluding singletons) of the original query as queries.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 247, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 621, |
|
"end": 628, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The query terms were stemmed and stopped.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Analysis of the terms in the sub-queries and the relationship of the sub-queries with the original query revealed a few interesting insights that had potential to be leveraged to aid sub-query selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1. Terms in the original query that a human would consider vital in conveying the type of information desired were missing from the best subqueries. For example, the best sub-query for the example was britain argentina, omitting any reference to international relations. This also reveals a mismatch between the user's query and the way terms occurred in the corpus, and suggests that an approximate query could at times be a better starting point for search.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2. The sub-query would often contain only terms that a human would consider vital to the query while the original query would also (naturally) contain them, albeit weighted lower with respect to other terms. This is a common problem (Harman and Buckley, 2004) , and the focus of efforts to isolate the key concept terms in queries (Buckley et al., 2000; Allan et al., 1996) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 259, |
|
"text": "(Harman and Buckley, 2004)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 353, |
|
"text": "(Buckley et al., 2000;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 373, |
|
"text": "Allan et al., 1996)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3. Good sub-queries were missing many of the noise terms found in the original query. Ideally the retrieval model would weight them lower, but dropping them completely from the query appeared to be more effective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "4. Sub-queries a human would consider as an incomplete expression of information need sometimes performed better than the original query. Our example illustrates this point.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Given the above empirical observations, we explored a variety of procedures to refine a long query into a shorter one that retained the key terms. We expected the set of terms of a good sub-query to have the following properties.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A. Minimal Cardinality: Any set that contains more than the minimum number of terms to retrieve relevant documents could suffer from concept drift. B. Coherency: The terms that constitute the subquery should be coherent, i.e. they should buttress each other in representing the information need. If need be, terms that the user considered important but led to retrieval of non-relevant documents should be dropped.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Some of the sub-query selection methods we explored with these properties in mind are reported below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting sub-queries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Let X and Y be two random variables, with joint distribution P (x, y) and marginal distributions P (x) and P (y) respectively. The mutual information is then defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "I(X; Y ) = x y p(x, y)log p(x, y) p(x)p(y)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Intuitively, mutual information measures the information about X that is shared by Y . If X and Y are independent, then X contains no information about Y and vice versa and hence their mutual information is zero. Mutual Information is attractive because it is not only easy to compute, but also takes into consideration corpus statistics and semantics. The mutual information between two terms (Church and Hanks, 1989 ) can be calculated using Equation 2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 394, |
|
"end": 417, |
|
"text": "(Church and Hanks, 1989", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "I(x, y) = log n(x,y) N n(x) N n(y) N (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "n(x, y) is the number of times terms x and y occurred within a term window of 100 terms across the corpus, while n(x) and n(y) are the frequencies of x and y in the collection of size N terms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
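
{

"text": "A minimal sketch of this pairwise computation (illustrative code, not the implementation used in the paper; function and argument names are our own, and the counts are assumed to come from whatever co-occurrence statistics the index exposes):\n\nimport math\n\ndef pairwise_mi(n_xy, n_x, n_y, big_n):\n    # Equation 2: I(x, y) = log[(n(x,y)/N) / ((n(x)/N) * (n(y)/N))]\n    # n_xy: co-occurrences of x and y within a 100-term window over the corpus\n    # n_x, n_y: collection frequencies of x and y; big_n: collection size N in terms\n    if n_xy == 0 or n_x == 0 or n_y == 0:\n        return float('-inf')  # no observed association; score minimally\n    return math.log((n_xy / big_n) / ((n_x / big_n) * (n_y / big_n)))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Mutual Information",

"sec_num": "2.1"

},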
|
{ |
|
"text": "To tackle the situation where we have an arbitrary number of variables (terms) we extend the twovariable case to the multivariate case. The extension, called multivariate mutual information (MVMI) can be generalized from Equation 1 to:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "I(X 1 ; X 2 ; X 3 ; ...; X N ) = N i=1 (\u22121) i\u22121 X\u2282(X 1 ,X 2 ,X 3 ,...,X N ),|X|=k H(X)", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The calculation of multivariate information using Equation 3 was very cumbersome, and we instead worked with the approximation (Kern et al., 2003) given below.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 146, |
|
"text": "(Kern et al., 2003)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "I(X 1 ; X 2 ; X 3 ; ...; X N ) = (4) i,j={1,2,3,...,N ;i =j} I(X i ; X j )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "For the case involving multiple terms, we calculated MVMI as the sum of the pair-wise mutual information for all terms in the candidate sub-query. This can be also viewed as the creation of a completely connected graph G = (V, E), where the vertices V are the terms and the edges E are weighted using the mutual information between the vertices they connect.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "To select a score representative of the quality of a sub-query we considered several options including the sum, average, median and minimum of the edge weights. We performed experiments on a set of candidate queries to determine how well each of these measures tracked AP, and found that the average worked best. We refer to the sub-query selection procedure using the average score as Average.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mutual Information", |
|
"sec_num": "2.1" |
|
}, |
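
{

"text": "A sketch of the Average score for a candidate sub-query, assuming a pairwise_mi-style function as above (names are illustrative, not the authors' code):\n\nfrom itertools import combinations\n\ndef average_mi_score(terms, mi):\n    # Score a candidate sub-query as the mean edge weight of its complete\n    # mutual-information graph G = (V, E); this is the Average procedure.\n    # terms: the candidate's terms; mi(a, b): pairwise mutual information\n    edges = [mi(a, b) for a, b in combinations(terms, 2)]\n    return sum(edges) / len(edges)  # undefined for single-term candidates",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Mutual Information",

"sec_num": "2.1"

},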
|
{ |
|
"text": "It is well-known that an average is easily skewed by outliers. In other words, the existence of one or more terms that have low mutual information with every other term could potentially distort results. This problem could be further compounded by the fact that mutual information measured using Equation 2 could have a negative value. We attempted to tackle this problem by considering another measure that involved creating a maximum spanning tree (MaxST) over the fully connected graph G, and using the weight of the identified tree as a measure representative of the candidate query's quality (Rijsbergen, 1979) . We used Kruskal's minimum spanning tree (Cormen et al., 2001 ) algorithm after negating the edge weights to obtain a MaxST. We refer to the sub-query selection procedure using the weight of the maximum spanning tree as MaxST.", |
|
"cite_spans": [ |
|
{ |
|
"start": 597, |
|
"end": 615, |
|
"text": "(Rijsbergen, 1979)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 658, |
|
"end": 678, |
|
"text": "(Cormen et al., 2001", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Spanning Tree", |
|
"sec_num": "2.2" |
|
}, |
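
{

"text": "A sketch of the MaxST score under the same assumptions; rather than literally negating weights and calling a library routine, this runs Kruskal's algorithm directly on descending edge weights, which is equivalent:\n\nfrom itertools import combinations\n\ndef maxst_score(terms, mi):\n    # Build a maximum spanning tree over the complete MI graph (Kruskal with\n    # union-find) and return its total weight as the candidate's score.\n    parent = {t: t for t in terms}\n    def find(t):\n        while parent[t] != t:\n            parent[t] = parent[parent[t]]  # path compression\n            t = parent[t]\n        return t\n    total = 0.0\n    for a, b in sorted(combinations(terms, 2), key=lambda e: mi(*e), reverse=True):\n        ra, rb = find(a), find(b)\n        if ra != rb:  # edge joins two components: keep it in the tree\n            parent[ra] = rb\n            total += mi(a, b)\n    return total",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Maximum Spanning Tree",

"sec_num": "2.2"

},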
|
{ |
|
"text": "Named entities (names of persons, places, organizations, dates, etc.) are known to play an important anchor role in many information retrieval applications. In our example from Section 2, sub-queries without Britain or Argentina will not be effective even though the mutual information score of the other two terms international and relations might indicate otherwise. We experimented with another version of sub-query selection that considered only sub-queries that retained at least one of the named entities from the original query. We refer to the variants that retained named entities as NE Average and NE MasT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entities", |
|
"sec_num": "2.3" |
|
}, |
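
{

"text": "The named-entity constraint reduces to a simple filter over the candidate pool before ranking; a hypothetical sketch:\n\ndef ne_filter(candidates, query_entities):\n    # Keep only candidate sub-queries that retain at least one named entity\n    # (Person, Location, Organization, Date, or Time) found in the original\n    # query; used for the NE Average and NE MaxST variants.\n    return [c for c in candidates if any(t in query_entities for t in c)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Named Entities",

"sec_num": "2.3"

},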
|
{ |
|
"text": "We used version 2.3.2 of the Indri search engine, developed as part of the Lemur 2 project. While the inference network-based retrieval framework of Indri permits the use of structured queries, the use of language modeling techniques provides better estimates of probabilities for query evaluation. The pseudo-relevance feedback mechanism we used is based on relevance models (Lavrenko and Croft, 2001) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 376, |
|
"end": 402, |
|
"text": "(Lavrenko and Croft, 2001)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To extract named entities from the queries, we used BBN Identifinder (Bikel et al., 1999) . The named entities identified were of type Person, Location, Organization, Date, and Time.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 89, |
|
"text": "(Bikel et al., 1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We used the TREC Robust 2004 and Robust 2005 (Voorhees, 2006) document collections for our experiments. The 2004 Robust collection contains around half a million documents from the Financial Times, the Federal Register, the LA Times, and FBIS. The Robust 2005 collection is the one-million document AQUAINT collection. All the documents were from English newswire. We chose these collections because they and their associated queries are known to be hard, and hence present a challenging environment. We stemmed the collections using the Krovetz stemmer provided as part of Indri, and used a manually-created stoplist of twenty terms (a, an, and, are, at, as, be, for, in, is, it, of, on, or, that, the, to, was, with and what) . To determine the best query selection procedure, we analyzed 163 queries from the Robust 2004 track, and used 30 and 50 queries from the 2004 and 2005 Robust tracks respectively for evaluation and user studies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 61, |
|
"text": "(Voorhees, 2006)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 634, |
|
"end": 727, |
|
"text": "(a, an, and, are, at, as, be, for, in, is, it, of, on, or, that, the, to, was, with and what)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
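
{

"text": "For illustration, query-side stopping with this twenty-term stoplist might look as follows (a sketch only; stemming itself is delegated to Indri's built-in Krovetz stemmer):\n\nSTOPLIST = {'a', 'an', 'and', 'are', 'at', 'as', 'be', 'for', 'in', 'is',\n            'it', 'of', 'on', 'or', 'that', 'the', 'to', 'was', 'with', 'what'}\n\ndef stop_query(query):\n    # Lowercase the query and drop stoplist terms before sub-query generation.\n    return [t for t in query.lower().split() if t not in STOPLIST]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "3"

},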
|
{ |
|
"text": "For all systems, we report mean average precision (MAP) and geometric mean average precision (GMAP). MAP is the most widely used measure in Information Retrieval. While precision is the fraction of the retrieved documents that are relevant, average precision (AP) is a single value obtained by averaging the precision values at each new relevant document observed. MAP is the arithmetic mean of the APs of a set of queries. Similarly, GMAP is the geometric mean of the APs of a set of queries. The GMAP measure is more indicative of performance across an entire set of queries. MAP can be skewed by the presence of a few well-performing queries, and hence is not as good a measure as GMAP from the perspective of measure comprehensive performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
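
{

"text": "These measures can be sketched as follows (standard definitions, not code from the paper; following TREC convention, each AP is clamped at a small epsilon so zero scores do not collapse the geometric mean):\n\nimport math\n\ndef average_precision(ranked_docs, relevant):\n    # AP: average of the precision values observed at each relevant document.\n    hits, precisions = 0, []\n    for rank, doc in enumerate(ranked_docs, start=1):\n        if doc in relevant:\n            hits += 1\n            precisions.append(hits / rank)\n    return sum(precisions) / len(relevant) if relevant else 0.0\n\ndef map_and_gmap(aps):\n    # MAP: arithmetic mean of per-query APs; GMAP: geometric mean.\n    n = len(aps)\n    gmap = math.exp(sum(math.log(max(ap, 1e-5)) for ap in aps) / n)\n    return sum(aps) / n, gmap",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "3"

},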
|
{ |
|
"text": "We first ran two baseline experiments to record the quality of the available long query and the shorter version. As mentioned in Section 1, we used the description and title sections of each TREC query as surrogates for the long and short versions respectively of a query. The results are presented in the first two rows, Baseline and Pseudo-relevance Feedback (PRF), of Table 2 . Measured in terms of MAP and GMAP (Section3), using just the title results in better performance than using the description. This clearly indicates the existence of terms in the description that while elaborating an information need hurt retrieval performance. The result of using pseudo-relevance feedback (PRF) on both the title and description show moderate gains -a known fact about this particular collection and associated train- To show the potential and utility of query rewriting, we first present results that show the upper bound on performance that can obtained by doing so. We ran retrieval experiments with every combination of query terms. For a query of length n, there are 2 n combinations. We limited our experiments to queries of length n \u2264 12. Selecting the performance obtained by the best sub-query of each query revealed an upper bound in performance almost 50% better than the baseline (Table 2) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 378, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1291, |
|
"end": 1300, |
|
"text": "(Table 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
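
{

"text": "A sketch of the upper-bound enumeration (illustrative; the retrieval run and AP scoring per sub-query are elided):\n\nfrom itertools import combinations\n\ndef all_subqueries(terms, max_len=12):\n    # Enumerate every multi-term subset of the query (2^n combinations in\n    # all); queries longer than 12 terms were skipped, and singletons are\n    # excluded as in Table 1.\n    if len(terms) > max_len:\n        return []\n    subs = []\n    for k in range(2, len(terms) + 1):\n        subs.extend(combinations(terms, k))\n    return subs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},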
|
{ |
|
"text": "To evaluate the automatic sub-query selection procedures developed in Section 2, we performed retrieval experiments using the sub-queries selected using them. The results, which are presented in Table 3, show that the automatic sub-query selection process was a failure. The results of automatic selection were worse than even the baseline, and there was no significant difference between using any of the different sub-query selection procedures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The failure of the automatic techniques could be attributed to the fact that we were working with the assumption that term co-occurrence could be used to model a user's information need. To see if there was any general utility in using the procedures to select sub-queries, we selected the best-performing sub-query from the top 10 ranked by each selection procedure (Table 4) . While the effectiveness in each case as measured by MAP is not close to the best possible MAP, 0.342, they are all significantly better than the baseline of 0.243.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 376, |
|
"text": "(Table 4)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The final results we presented in the last section hinted at a potential for user interaction. We envi- Table 4 : Score of the best sub-query in the top 10 ranked by various measures sioned providing the user with a list of the top 10 sub-query candidates using a good ranking procedure, and asking her to select the sub-query she felt was most appropriate. This additional round of human intervention could potentially compensate for the inability of the ranking measures to select the best sub-query automatically.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 111, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Interacting with the user", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We displayed the description (the long query) and narrative portion of each TREC query in the interface. The narrative was provided to help the participant understand what information the user who issued the query was interested in. The title was kept hidden to avoid influencing the participant's choice of the best sub-query. A list of candidate sub-queries was displayed along with links that could be clicked on to display a short section of text in a designated area. The intention was to provide an example of what would potentially be retrieved with a high rank if the candidate sub-query were used. The participant used this information to make two decisionsthe perceived quality of each sub-query, and the best sub-query from the list. A facility to indicate that none of the candidates were good was also included.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User interface design", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "28.5% MaxST 35.5% NE Average 31.1% NE MaxST 36.6% Table 5 : Number of candidates from top 10 that exceeded the baseline", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 57, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Percentage of candidates better than baseline Average", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The two key issues we faced while determining the content of the user interface were: A. Deciding which sub-query selection procedure to use to get the top 10 candidate sub-queries: To determine this in the absence of any significant difference in performance due to the top-ranked candidate selected by each procedure, we looked at the number of candidates each procedure brought into the top 10 that were better than the baseline query, as measured by MAP. This was guided by the belief that greater the number of better candidates in the top 10, the higher the probability that the user would select a better sub-query. Table 5 shows how each of the selection procedures compared. The NE MaxST ranking procedure had the most number of better sub-queries in the top 10, and hence was chosen.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 623, |
|
"end": 630, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User interface content issues", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "B. Displaying context: Simply displaying a list of 10 candidates without any supportive information would make the task of the user difficult. This was in contrast to query expansion techniques (Anick and Tipirneni, 1999) where displaying a list of terms sufficed as the task of the user was to disambiguate or expand a short query. An experiment was performed in which a single user worked with a set of 30 queries from Robust 2004, and an accompanying set of 10 candidate sub-queries each, twice -once with passages providing context and one with snippets providing context. The top-ranked passage was generated by modifying the candidate query into one that retrieved passages of fixed length instead of documents. Snippets, like those seen along with links to top-ranked documents in the results from almost all popular search engines, were generated after a document-level query was used to query the collection. The order in which the two contexts were presented to the user was randomized to prevent the MAP GMAP Snippet as Context 0.348 0.170 Passage as Context 0.296 0.151 Table 6 : Results showing the MAP over 19 of 30 queries that the user provided selections for using each context type.", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 221, |
|
"text": "(Anick and Tipirneni, 1999)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1082, |
|
"end": 1089, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User interface content issues", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "user from assuming a quality order. We see that presenting the snippet led to better MAP that presenting the passage (Table 6 ). The reason for this could be that the top-ranking passage we displayed was from a document ranked lower by the document-focussed version of the query. Since we finally measure MAP only with respect to document ranking, and the snippet was generated from the top-ranked document, we hypothesize that this led to the snippet being a better context to display.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 125, |
|
"text": "(Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User interface content issues", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We conducted an exploratory study with five participants -four of them were graduate students in computer science while the fifth had a background in the social sciences and was reasonably proficient in the use of computers and internet search engines. The participants worked with 30 queries from Robust 2004, and 50 from Robust 2005 3 . The baseline values reported are automatic runs with the description as the query. Table 7 shows that all five participants 4 were able to choose sub-queries that led to an improvement in performance over the baseline (TREC title query only). This improvement is not only on MAP but also on GMAP, indicating that user interaction helped improve a wide spectrum of queries. Most notable were the improvements in P@5 and P@10. This attested to the fact that the interaction technique we explored was precision-enhancing. Another interesting result, from # sub-queries selected was that participants were able to decide in a large number of cases that re-writing was either not useful for a query, or that none of the options presented to them were better. Showing context appears to have helped. Table 7 : # Queries refers to the number of queries that were presented to the participant while # sub-queries selected refers to the number of queries for which the participant chose a sub-query. All scores including upper bounds were calculated only considering the queries for which the participant selected a sub-query. An entry in bold means that the improvement in MAP is statistically significant. Statistical significance was measured using a paired t-test, with \u03b1 set to 0.05.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 422, |
|
"end": 429, |
|
"text": "Table 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1133, |
|
"end": 1140, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our interest in finding a concise sub-query that effectively captures the information need is reminiscent of previous work in (Buckley et al., 2000) . However, the focus was more on balancing the effect of query expansion techniques such that different concepts in the query were equally benefited. Mutual information has been used previously in (Church and Hanks, 1989) to identify collocations of terms for identifying semantic relationships in text. Experiments were confined to bigrams. The use of MaST over a graph of mutual information values to incorporate the most significant dependencies between terms was first noted in (Rijsbergen, 1979) . Extensions can be found in a different field -image processing (Kern et al., 2003) -where multivariate mutual information is frequently used.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 148, |
|
"text": "(Buckley et al., 2000)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 370, |
|
"text": "(Church and Hanks, 1989)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 631, |
|
"end": 649, |
|
"text": "(Rijsbergen, 1979)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 734, |
|
"text": "(Kern et al., 2003)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Work done by (White et al., 2005 ) provided a basis for our decision to show context for sub-query selection. The useful result that top-ranked sentences could be used to guide users towards relevant material helped us design an user interface that the par-ticipants found very convenient to use.", |
|
"cite_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 32, |
|
"text": "(White et al., 2005", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "A related problem addressed by (Cronen-Townsend et al., 2002) was determining query quality. This is known to be a very hard problem, and various efforts (Carmel et al., 2006; Vinay et al., 2006) have been made towards formalizing and understanding it.", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 61, |
|
"text": "(Cronen-Townsend et al., 2002)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 154, |
|
"end": 175, |
|
"text": "(Carmel et al., 2006;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 195, |
|
"text": "Vinay et al., 2006)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Previous work (Shapiro and Taksa, 2003) in the web environment attempted to convert a user's natural language query into one suited for use with web search engines. However, the focus was on merging the results from using different sub-queries, and not selection of a single sub-query. Our approach of re-writing queries could be compared to query reformulation, wherein a user follows up a query with successive reformulations of the original. In the web environment, studies have shown that most users still enter only one or two queries, and conduct limited query reformulation . We hypothesize that the techniques we have developed will be well-suited for search engines like Ask Jeeves where 50% of the queries are in question format (Spink and Ozmultu, 2002) . More experimentation in the Web domain is required to substantiate this.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 39, |
|
"text": "(Shapiro and Taksa, 2003)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 739, |
|
"end": 764, |
|
"text": "(Spink and Ozmultu, 2002)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Our results clearly show that shorter reformulations of long queries can greatly impact performance. We believe that our technique has great potential to be used in an adaptive information retrieval environment, where the user starts off with a more general information need and a looser notion of relevance. The initial query can then be made longer to express a most focused information need.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "As part of future work, we plan to conduct a more elaborate study with more interaction strategies included. Better techniques to select effective subqueries are also in the pipeline. Since we used mutual information as the basis for most of our subquery selection procedures, we could not consider sub-queries that comprised of a single term. We plan to address this issue too in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "http://yq.search.yahoo.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.lemurproject.org", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Participant 4 looked that only 34 of the 50 queries presented 4 The p value for testing statistical significance of MAP improvement for Participant 5 was 0.053 -the result very narrowly missed being statistically significant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported in part by the Center for Intelligent Information Retrieval and in part by the Defense Advanced Research Projects Agency (DARPA) under contract number HR0011-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are the authors and do not necessarily reflect those of the sponsor. We also thank the anonymous reviewers for their valuable comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": "9" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Inquery at TREC-5", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Callan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Bruce" |
|
], |
|
"last": "Croft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Broglio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinxi", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongming", |
|
"middle": [], |
|
"last": "Shu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "TREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Allan, James P. Callan, W. Bruce Croft, Lisa Ballesteros, John Broglio, Jinxi Xu, and Hongming Shu. 1996. Inquery at TREC-5. In TREC.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The paraphrase search assistant: terminological feedback for iterative information seeking", |
|
"authors": [ |
|
{

"first": "Peter",

"middle": [

"G"

],

"last": "Anick",

"suffix": ""

},

{

"first": "Suresh",

"middle": [],

"last": "Tipirneni",

"suffix": ""

}
|
], |
|
"year": 1999, |
|
"venue": "22nd ACM SIGIR Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "153--159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter G. Anick and Suresh Tipirneni. 1999. The paraphrase search assistant: terminological feedback for iterative infor- mation seeking. In 22nd ACM SIGIR Proceedings, pages 153-159.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "An algorithm that learns what's in a name", |
|
"authors": [ |
|
{

"first": "Daniel",

"middle": [

"M"

],

"last": "Bikel",

"suffix": ""

},

{

"first": "Richard",

"middle": [],

"last": "Schwartz",

"suffix": ""

},

{

"first": "Ralph",

"middle": [

"M"

],

"last": "Weischedel",

"suffix": ""

}
|
], |
|
"year": 1999, |
|
"venue": "Machine Learning", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "211--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel M. Bikel, Richard Schwartz, and Ralph M. Weischedel. 1999. An algorithm that learns what's in a name. Machine Learning, 34(1-3):211-231.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Using clustering and superconcepts within smart: TREC 6. Information Processing and Management", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Buckley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Mitra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janet", |
|
"middle": [], |
|
"last": "Walz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "36", |
|
"issue": "", |
|
"pages": "109--131", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Buckley, Mandar Mitra, Janet Walz, and Claire Cardie. 2000. Using clustering and superconcepts within smart: TREC 6. Information Processing and Management, 36(1):109-131.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "What makes a query difficult?", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Carmel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elad", |
|
"middle": [], |
|
"last": "Yom-Tov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Darlow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Pelleg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "29th ACM SIGIR Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "390--397", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Carmel, Elad Yom-Tov, Adam Darlow, and Dan Pelleg. 2006. What makes a query difficult? In 29th ACM SIGIR Proceedings, pages 390-397.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Word association norms, mutual information, and lexicography", |
|
"authors": [ |
|
{ |
|
"first": "Kenneth", |
|
"middle": [ |
|
"Ward" |
|
], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Hanks", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "27th ACL Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "76--83", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1989. Word associ- ation norms, mutual information, and lexicography. In 27th ACL Proceedings, pages 76-83.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Introduction to Algorithms, Second Edition. The MIT Electrical Engineering and Computer Science Series", |
|
"authors": [ |
|
{

"first": "Thomas",

"middle": [

"H"

],

"last": "Cormen",

"suffix": ""

},

{

"first": "Charles",

"middle": [

"E"

],

"last": "Leiserson",

"suffix": ""

},

{

"first": "Ronald",

"middle": [

"L"

],

"last": "Rivest",

"suffix": ""

},

{

"first": "Clifford",

"middle": [],

"last": "Stein",

"suffix": ""

}
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. 2001. Introduction to Algorithms, Sec- ond Edition. The MIT Electrical Engineering and Computer Science Series. The MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Predicting query performance", |
|
"authors": [ |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Cronen-Townsend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yun", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Bruce" |
|
], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "25th ACM SIGIR Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "299--306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steve Cronen-Townsend, Yun Zhou, and W. Bruce Croft. 2002. Predicting query performance. In 25th ACM SIGIR Proceed- ings, pages 299-306.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The NRRC reliable information access (RIA) workshop", |
|
"authors": [ |
|
{ |
|
"first": "Donna", |
|
"middle": [], |
|
"last": "Harman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Buckley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "27th ACM SIGIR Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "528--529", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Donna Harman and Chris Buckley. 2004. The NRRC reliable information access (RIA) workshop. In 27th ACM SIGIR Proceedings, pages 528-529.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Registration of image cubes using multivariate mutual information", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marios", |
|
"middle": [], |
|
"last": "Pattichis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Stearns", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Thirty-Seventh Asilomar Conference", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1645--1649", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey P. Kern, Marios Pattichis, and Samuel D. Stearns. 2003. Registration of image cubes using multivariate mutual infor- mation. In Thirty-Seventh Asilomar Conference, volume 2, pages 1645-1649.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Searching with context", |
|
"authors": [ |
|
{ |
|
"first": "Reiner", |
|
"middle": [], |
|
"last": "Kraft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chi", |
|
"middle": [ |
|
"Chao" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Farzin", |
|
"middle": [], |
|
"last": "Maghoul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ravi", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "15th International CIKM Conference Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "477--486", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reiner Kraft, Chi Chao Chang, Farzin Maghoul, and Ravi Ku- mar. 2006. Searching with context. In 15th International CIKM Conference Proceedings, pages 477-486.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Relevance based language models", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Lavrenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W. Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "24th ACM SIGIR Conference Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "120--127", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Lavrenko and W. Bruce Croft. 2001. Relevance based language models. In 24th ACM SIGIR Conference Proceed- ings, pages 120-127.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Constructing web search queries from the user's information need expressed in a natural language", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Shapiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isak", |
|
"middle": [], |
|
"last": "Taksa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 ACM Symposium on Applied Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1157--1162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Shapiro and Isak Taksa. 2003. Constructing web search queries from the user's information need expressed in a nat- ural language. In Proceedings of the 2003 ACM Symposium on Applied Computing, pages 1157-1162.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Characteristics of question format web queries: An exploratory study. Information Processing and Management", |
|
"authors": [ |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Spink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H. Cenk", |
|
"middle": [], |
|
"last": "Ozmultu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "38", |
|
"issue": "", |
|
"pages": "453--471", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amanda Spink and H. Cenk Ozmultu. 2002. Characteristics of question format web queries: An exploratory study. Infor- mation Processing and Management, 38(4):453-471.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "From e-sex to e-commerce: Web search changes", |
|
"authors": [ |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Spink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernard", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Jansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dietmar", |
|
"middle": [], |
|
"last": "Wolfram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tefko", |
|
"middle": [], |
|
"last": "Saracevic", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Computer", |
|
"volume": "35", |
|
"issue": "3", |
|
"pages": "107--109", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amanda Spink, Bernard J. Jansen, Dietmar Wolfram, and Tefko Saracevic. 2002. From e-sex to e-commerce: Web search changes. Computer, 35(3):107-109.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Personalizing search via automated analysis of interests and activities", |
|
"authors": [ |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Teevan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Dumais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Horvitz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "28th ACM SIGIR Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "449--456", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jaime Teevan, Susan T. Dumais, and Eric Horvitz. 2005. Per- sonalizing search via automated analysis of interests and ac- tivities. In 28th ACM SIGIR Proceedings, pages 449-456.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "On ranking the effectiveness of searches", |
|
"authors": [ |
|
{ |
|
"first": "Vishwa", |
|
"middle": [], |
|
"last": "Vinay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ingemar", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Cox", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "29th ACM SIGIR Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "398--404", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vishwa Vinay, Ingemar J. Cox, Natasa Milic-Frayling, and Ken Wood. 2006. On ranking the effectiveness of searches. In 29th ACM SIGIR Proceedings, pages 398-404.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The TREC 2005 robust track. SIGIR Forum", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Voorhees", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "40", |
|
"issue": "", |
|
"pages": "41--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen M. Voorhees. 2006. The TREC 2005 robust track. SIGIR Forum, 40(1):41-48.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Using top-ranking sentences to facilitate effective information access: Book reviews", |
|
"authors": [ |
|
{

"first": "Ryen",

"middle": [

"W"

],

"last": "White",

"suffix": ""

},

{

"first": "Joemon",

"middle": [

"M"

],

"last": "Jose",

"suffix": ""

},

{

"first": "Ian",

"middle": [],

"last": "Ruthven",

"suffix": ""

}
|
], |
|
"year": 2005, |
|
"venue": "JAIST", |
|
"volume": "56", |
|
"issue": "10", |
|
"pages": "1113--1125", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryen W. White, Joemon M. Jose, and Ian Ruthven. 2005. Us- ing top-ranking sentences to facilitate effective information access: Book reviews. JAIST, 56(10):1113-1125.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF3": { |
|
"html": null, |
|
"text": "Score of the highest rank sub-query by various measures.", |
|
"content": "<table><tr><td/><td>MAP GMAP</td></tr><tr><td>Baseline</td><td>0.243 0.136</td></tr><tr><td>AverageTop10</td><td>0.296 0.167</td></tr><tr><td>MaxSTTop10</td><td>0.293 0.150</td></tr><tr><td colspan=\"2\">NE AverageTop10 0.278 0.156</td></tr><tr><td colspan=\"2\">NE MaxSTTop10 0.286 0.159</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |