{
"paper_id": "N07-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:47:32.286297Z"
},
"title": "Information Retrieval On Empty Fields",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts Amherst",
"location": {
"postCode": "01003-4610",
"region": "MA",
"country": "USA"
}
},
"email": "lavrenko@cs.umass.edu"
},
{
"first": "Xing",
"middle": [],
"last": "Yi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts Amherst",
"location": {
"postCode": "01003-4610",
"region": "MA",
"country": "USA"
}
},
"email": "yixing@cs.umass.edu"
},
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts Amherst",
"location": {
"postCode": "01003-4610",
"region": "MA",
"country": "USA"
}
},
"email": "allan@cs.umass.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We explore the problem of retrieving semi-structured documents from a realworld collection using a structured query. We formally develop Structured Relevance Models (SRM), a retrieval model that is based on the idea that plausible values for a given field could be inferred from the context provided by the other fields in the record. We then carry out a set of experiments using a snapshot of the National Science Digital Library (NSDL) repository, and queries that only mention fields missing from the test data. For such queries, typical field matching would retrieve no documents at all. In contrast, the SRM approach achieves a mean average precision of over twenty percent.",
"pdf_parse": {
"paper_id": "N07-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "We explore the problem of retrieving semi-structured documents from a realworld collection using a structured query. We formally develop Structured Relevance Models (SRM), a retrieval model that is based on the idea that plausible values for a given field could be inferred from the context provided by the other fields in the record. We then carry out a set of experiments using a snapshot of the National Science Digital Library (NSDL) repository, and queries that only mention fields missing from the test data. For such queries, typical field matching would retrieve no documents at all. In contrast, the SRM approach achieves a mean average precision of over twenty percent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This study investigates information retrieval on semi-structured information, where documents consist of several textual fields that can be queried independently. If documents contained subject and author fields, for example, we would expect to see queries looking for documents about theory of relativity by the author Einstein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This setting suggests exploring the issue of inexact match-is special theory of relativity relevant?that has been explored elsewhere (Cohen, 2000) . Our interest is in an extreme case of that problem, where the content of a field is not corrupted or in-correct, but is actually absent. We wish to find relevant information in response to a query such as the one above even if a relevant document is completely missing the subject and author fields.",
"cite_spans": [
{
"start": 133,
"end": 146,
"text": "(Cohen, 2000)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our research is motivated by the challenges we encountered in working with the National Science Digital Library (NSDL) collection. 1 Each item in the collection is a scientific resource, such as a research paper, an educational video, or perhaps an entire website. In addition to its main content, each resource is annotated with metadata, which provides information such as the author or creator of the resource, its subject area, format (text/image/video) and intended audience -in all over 90 distinct fields (though some are very related). Making use of such extensive metadata in a digital library paves the way for constructing highly-focused models of the user's information need. These models have the potential to dramatically improve the user experience in targeted applications, such as the NSDL portals. To illustrate this point, suppose that we are running an educational portal targeted at elementary school teachers, and some user requests teaching aids for an introductory class on gravity. An intelligent search system would be able to translate the request into a structured query that might look something like: subject='gravity . Such a query can be efficiently answered by a relational database system.",
"cite_spans": [
{
"start": 131,
"end": 132,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, using a relational engine to query a semi-structured collection similar to NSDL will run into a number of obstacles. The simplest problem is that natural language fields are filled inconsistently: e.g., the audience field contains values such as K-4, K-6, second grade, and learner, all of which are clearly semantically related.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A larger problem, and the one we focus on in this study, is that of missing fields. For example 24% of the items in the NSDL collection have no subject field, 30% are missing the author information, and over 96% mention no target audience (reading level). This means that a relational query for elementary school material will consider at most 4% of all potentially relevant resources in the NSDL collection. 2 The goal of our work is to introduce a retrieval model that will be capable of answering complex structured queries over a semi-structured collection with corrupt and missing field values. This study focuses on the latter problem, an extreme version of the former. Our approach is to use a generative model to compute how plausible a word would appear in a record's empty field given the context provided by the other fields in the record.",
"cite_spans": [
{
"start": 409,
"end": 410,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows. We survey previous attempts at handling semistructured data in section 2. Section 3 will provide the details of our approach, starting with a high-level view, then providing a mathematical framework, and concluding with implementation details. Section 4 will present an extensive evaluation of our model on the large set of queries over the NSDL collection. We will summarize our results and suggest directions for future research in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The issue of missing field values is addressed in a number of recent publications straddling the areas of relational databases and machine learning. In most cases, researchers introduce a statistical model for predicting the value of a missing attribute or relation, based on observed values. Friedman et al (1999) introduce a technique called Probabilistic Relational Models (PRM) for automatically learning the structure of dependencies in a relational database. Taskar et al (2001) demonstrate how PRM can be used to predict the category of a given research paper and show that categorization accuracy can be substantially improved by leveraging the relational structure of the data. Heckerman et al (2004) introduce the Probabilistic Entity Relationship (PER) model as an extension of PRM that treats relations between entities as objects. Neville at al (2003) discuss predicting binary labels in relational data using Relational Probabilistic Trees (RPT). Using this method they successfully predict whether a movie was a box office hit based on other movies that share some of the properties (actors, directors, producers) with the movie in question.",
"cite_spans": [
{
"start": 293,
"end": 314,
"text": "Friedman et al (1999)",
"ref_id": "BIBREF4"
},
{
"start": 465,
"end": 484,
"text": "Taskar et al (2001)",
"ref_id": "BIBREF11"
},
{
"start": 687,
"end": 709,
"text": "Heckerman et al (2004)",
"ref_id": "BIBREF6"
},
{
"start": 844,
"end": 864,
"text": "Neville at al (2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Our work differs from most of these approaches in that we work with free-text fields, whereas database researchers typically deal with closed-vocabulary values, which exhibit neither the synonymy nor the polysemy inherent in natural language expressions. In addition, the goal of our work is different: we aim for accurate ranking of records by their relevance to the user's query, whereas database research has typically focused on predicting the missing value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Our work is related to a number of existing approaches to semi-structured text search. Desai et al (1987) followed by Macleod (1991) proposed using the standard relational approach to searching unstructured texts. The lack of an explicit ranking function in their approaches was partially addressed by Blair (1988) . Fuhr (1993) proposed the use of Probabilistic Relational Algebra (PRA) over the weights of individual term matches. Vasanthukumar et al (1996) developed a relational implementation of the inference network retrieval model. A similar approach was taken by de Vries and Wilschut (1999) , who managed to improve the efficiency of the approach. De Fazio et al (1995) integrated IR and RDBMS technology using an approached called cooperative indexing. Cohen (2000) describes WHIRL -a language that allows efficient inexact matching of textual fields within SQL statements. A number of relevant works are also published in the proceedings of the INEX workshop. 3 The main difference between these endeavors and our work is that we are explicitly focusing on the cases where parts of the structured data are missing or mis-labeled.",
"cite_spans": [
{
"start": 87,
"end": 105,
"text": "Desai et al (1987)",
"ref_id": "BIBREF3"
},
{
"start": 118,
"end": 132,
"text": "Macleod (1991)",
"ref_id": "BIBREF8"
},
{
"start": 302,
"end": 314,
"text": "Blair (1988)",
"ref_id": "BIBREF0"
},
{
"start": 317,
"end": 328,
"text": "Fuhr (1993)",
"ref_id": "BIBREF5"
},
{
"start": 433,
"end": 459,
"text": "Vasanthukumar et al (1996)",
"ref_id": null
},
{
"start": 575,
"end": 600,
"text": "Vries and Wilschut (1999)",
"ref_id": "BIBREF13"
},
{
"start": 673,
"end": 679,
"text": "(1995)",
"ref_id": null
},
{
"start": 764,
"end": 776,
"text": "Cohen (2000)",
"ref_id": "BIBREF1"
},
{
"start": 972,
"end": 973,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In this section we will provide a detailed description of our approach to searching semi-structured data. Before diving into the details of our model, we want to clearly state the challenge we intend to address with our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured Relevance Model",
"sec_num": "3"
},
{
"text": "The aim of our system is to identify a set of records relevant to a structured query provided by the user. We assume the query specifies a set of keywords for each field of interest to the user, for example Q: subject='physics,gravity' AND audi-ence='grades 1-4' 4 . Each record in the database is a set of natural-language descriptions for each field. A record is considered relevant if it could plausibly be annotated with the query fields. For example, a record clearly aimed at elementary school students would be considered relevant to Q even if it does not contain 'grades 1-4' in its description of the target audience. In fact, our experiments will specifically focus on finding relevant records that contain no direct match to the specified query fields, explicitly targeting the problem of missing data and inconsistent schemata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task: finding relevant records",
"sec_num": "3.1"
},
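{
"text": "To make the setting concrete, the following is a minimal sketch (ours, not the authors' code) of how a structured query and a semi-structured record could be represented; the field names and sample values are illustrative assumptions only.

# Illustrative Python representation: each record and each query is a mapping
# from field name to a list of tokens; a missing field simply has no entry.
record = {
    'title': ['transductive', 'svms'],
    'description': ['a', 'technical', 'note', 'on', 'transductive', 'support', 'vector', 'machines'],
    # note: no 'subject' or 'audience' entry -- these fields are missing
}
query = {
    'subject': ['physics', 'gravity'],
    'audience': ['grades', '1-4'],
}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task: finding relevant records",
"sec_num": "3.1"
},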
{
"text": "This task is not a typical IR task because the fielded structure of the query is a critical aspect of the processing, not one that is largely ignored in favor of pure content based retrieval. On the other hand, the approach used is different from most DB work because cross-field dependencies are a key component of the technique. In addition, the task is unusual for both communities because it considers an unusual case where the fields in the query do not occur at all in the documents being searched.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task: finding relevant records",
"sec_num": "3.1"
},
{
"text": "Our approach is based on the idea that plausible values for a given field could be inferred from the context provided by the other fields in the record. For instance, a resource titled 'Transductive SVMs' and containing highly technical language in its description is unlikely to be aimed at elementary-school stu-4 For this paper we will focus on simple conjunctive queries. Extending our model to more complex queries is reserved for future research. dents. In the following section we will describe a statistical model that will allow us to guess the values of un-observed fields. At the intuitive level, the model takes advantage of the fact that records similar in one respect will often be similar in others. For example, if two resources share the same author and have similar titles, they are likely to be aimed at the same audience. Formally, our model is based on the generative paradigm. We will describe a probabilistic process that could be viewed, hypothetically, as the source of every record in our collection. We will assume that the query provided by our user is also a sample from this generative process, albeit a very short one. We will use the observed query fields (e.g. audience and subject) to estimate the likely values for other fields, which would be plausible in the context of the observed subject and audience. The distributions over plausible values will be called relevance models, since they are intended to mimic the kind of record that might be relevant to the observed query. Finally, all records in the database will be ranked by their information-theoretic similarity to these relevance models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the approach",
"sec_num": "3.2"
},
{
"text": "We start with a set of definitions that will be used through the remainder of this paper. Let C be a collection of semi-structured records. Each record w consists of a set of fields w 1 . . .w m . Each field w i is a sequence of discrete variables (words) w i,1 . . .w i,n i , taking values in the field vocabulary V i . 5 When a record contains no information for the i'th field, we assume n i =0 for that record. A user's query q takes the same representation as a record in the database:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.3"
},
{
"text": "q={q i,j \u2208V i : i=1..m, j = 1..n i }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.3"
},
{
"text": "We will use p i to denote a language model over",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.3"
},
{
"text": "V i , i.e. a set of probabilities p i (v)\u2208[0, 1], one for each word v, obeying the constraint \u03a3 v p i (v) = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.3"
},
{
"text": "The set of all possible language models over V i will be denoted as the probability simplex IP i . We define \u03c0 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.3"
},
{
"text": "IP 1 \u00d7\u2022 \u2022 \u2022\u00d7IP m \u2192[0, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.3"
},
{
"text": "to be a discrete measure function that assigns a probability mass \u03c0(p 1 . . .p m ) to a set of m language models, one for each of the m fields present in our collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.3"
},
{
"text": "We will now present a generative process that will be viewed as a hypothetical source that produced every record in the collection C. We stress that this process is purely hypothetical; its only purpose is to model the kinds of dependencies that are necessary to achieve effective ranking of records in response to the user's query. We assume that each record w in the database is generated in the following manner:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.4"
},
{
"text": "1. Pick m distributions p 1 . . .p m according to \u03c0 2. For each field i = 1. . .m: (a) Pick the length n i of the i th field of w (b) Draw i.i.d. words w i,1 . . .w i,n i from p i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.4"
},
{
"text": "Under this process, the probability of observing a record {w i,j : i=1..m, j=1..n i } is given by the following expression:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.4"
},
{
"text": "I P 1 ...I Pm m i=1 n i j=1 pi(wi,j) \u03c0(p1. . .pm)dp1. . .dpm (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.4"
},
{
"text": "The generative measure function \u03c0 plays a critical part in equation 1: it specifies the likelihood of using different combinations of language models in the process of generating w. We use a non-parametric estimate for \u03c0, which relies directly on the combinations of language models that are observed in the training part of the collection. Each training record w 1 . . .w m corresponds to a unique combination of language models p w 1 . . .p w m defined by the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A generative measure function",
"sec_num": "3.4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p w i (v) = #(v, wi) + \u00b5icv ni + \u00b5i",
"eq_num": "(2)"
}
],
"section": "A generative measure function",
"sec_num": "3.4.1"
},
{
"text": "Here #(v, w i ) represents the number of times the word v was observed in the i'th field of w, n i is the length of the i'th field, and c v is the relative frequency of v in the entire collection. Metaparameters \u00b5 i allow us to control the amount of smoothing applied to language models of different fields; their values are set empirically on a held-out portion of the data. We define \u03c0(p 1 . . .p m ) to have mass 1 N when its argument p 1 . . .p m corresponds to one of the N records w in the training part C t of our collection, and zero otherwise:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A generative measure function",
"sec_num": "3.4.1"
},
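{
"text": "A minimal sketch (ours, not the authors' implementation) of the smoothed field language model in equation (2); it assumes that fields are already tokenized into word lists and that the relative collection frequencies c_v have been precomputed into a dictionary.

from collections import Counter

def field_lm(field_tokens, coll_freq, mu):
    # Build p_i^w for one field of one record, equation (2):
    # p_i^w(v) = (#(v, w_i) + mu_i * c_v) / (n_i + mu_i)
    counts = Counter(field_tokens)
    n_i = len(field_tokens)
    def p(v):
        return (counts[v] + mu * coll_freq.get(v, 0.0)) / (n_i + mu)
    return p

For an empty field (n_i = 0) this reduces to the background distribution c_v, which is how records with missing fields are handled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A generative measure function",
"sec_num": "3.4.1"
},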
{
"text": "\u03c0(p1. . .pm) = 1 N w\u2208C t m i=1 1 p i =p w i (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A generative measure function",
"sec_num": "3.4.1"
},
{
"text": "Here p w i is the language model associated with the training record w (equation 2), and 1 x is the Boolean indicator function that returns 1 when its predicate x is true and zero when it is false.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A generative measure function",
"sec_num": "3.4.1"
},
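{
"text": "With the non-parametric \u03c0 of equation (3), the integral in equation (1) collapses to an average over the N training records. A rough sketch of that computation (our own illustration; it assumes each training record has been turned into a dict mapping every field name to a smoothed language model such as the hypothetical field_lm above):

def record_probability(record, training_lms):
    # Equations (1) + (3): average, over training records w, of the product
    # of per-field word probabilities p_i^w(w_ij).
    # record: field name -> list of tokens; training_lms: list of dicts,
    # each mapping field name -> callable language model p_i^w.
    total = 0.0
    for lms in training_lms:
        prob = 1.0
        for field, tokens in record.items():
            p = lms[field]
            for v in tokens:
                prob *= p(v)
        total += prob
    return total / len(training_lms)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A generative measure function",
"sec_num": "3.4.1"
},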
{
"text": "The generative model described in the previous section treats each field in the record as a bag of words with no particular order. This representation is often associated with the assumption of word independence. We would like to stress that our model does not assume word independence, on the contrary, it allows for strong un-ordered dependencies among the words -both within a field, and across different fields within a record. To illustrate this point, suppose we let \u00b5 i \u21920 in equation 2to reduce the effects of smoothing. Now consider the probability of observing the word 'elementary' in the audience field together with the word 'differential' in the title (equation 1). It is easy to verify that the probability will be non-zero only if some training record w actually contained these words in their respective fields -an unlikely event. On the other hand, the probability of 'elementary' and 'differential' co-occurring in the same title might be considerably higher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assumptions and limitations of the model",
"sec_num": "3.4.2"
},
{
"text": "While our model does not assume word independence, it does ignore the relative ordering of the words in each field. Consequently, the model will fail whenever the order of words, or their proximity within a field carries a semantic meaning. Finally, our generative model does not capture dependencies across different records in the collection, each record is drawn independently according to equation (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assumptions and limitations of the model",
"sec_num": "3.4.2"
},
{
"text": "In this section we will describe how the generative model described above can be used to find database records relevant to the structured query provided by the user. We are given a structured query q, and a collection of records, partitioned into the training portion C t and the testing portion C e . We will use the training records to estimate a set of relevance ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the model for retrieval",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Ri(v) = P (q1. . .v\u2022qi. . .qm) P (q1. . .qi. . .qm)",
"eq_num": "(4)"
}
],
"section": "Using the model for retrieval",
"sec_num": "3.5"
},
{
"text": "We use v\u2022q i to denote appending word v to the string q i . Both the numerator and the denominator are computed using equation 1. Once we have computed relevance models R i for each of the m fields, we can rank testing records w by their similarity to these relevance models. As a similarity measure we use weighted cross-entropy, which is an extension of the ranking formula originally proposed by (Lafferty and Zhai, 2001 ):",
"cite_spans": [
{
"start": 399,
"end": 423,
"text": "(Lafferty and Zhai, 2001",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using the model for retrieval",
"sec_num": "3.5"
},
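{
"text": "A sketch (ours, under the same assumptions as the earlier snippets) of estimating R_i via equation (4): for every candidate word v we score the query with v appended to the i'th field, and normalize. Since \u03a3_v P(q_1...v\u2022q_i...q_m) = P(q_1...q_m) under the model, normalizing over the full vocabulary V_i is equivalent to dividing by the denominator of equation (4). The record_probability helper is the hypothetical function sketched above.

def relevance_model(query, field, vocabulary, training_lms):
    # R_i(v) = P(q_1 ... v.q_i ... q_m) / P(q_1 ... q_i ... q_m), equation (4).
    scores = {}
    for v in vocabulary:
        extended = dict(query)
        extended[field] = query.get(field, []) + [v]
        scores[v] = record_probability(extended, training_lms)
    z = sum(scores.values())
    return {v: s / z for v, s in scores.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the model for retrieval",
"sec_num": "3.5"
},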
{
"text": "H(R1..m; w1..m) = m i=1 \u03b1i v\u2208V i Ri(v) log p w i (v) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the model for retrieval",
"sec_num": "3.5"
},
{
"text": "The outer summation goes over every field of interest, while the inner extends over all the words in the vocabulary of the i'th field. R i are computed according to equation 4, while p w i are estimated from equation (2). Meta-parameters \u03b1 i allow us to vary the importance of different fields in the final ranking; the values are selected on a held-out portion of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the model for retrieval",
"sec_num": "3.5"
},
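{
"text": "A sketch (an illustration, not the authors' code) of the weighted cross-entropy of equation (5), scoring one test record against the per-field relevance models; record_lms maps each field of the test record to its smoothed language model p_i^w, and alphas holds the field weights \u03b1_i.

import math

def cross_entropy_score(relevance_models, record_lms, alphas):
    # H(R_1..m; w_1..m) = sum_i alpha_i * sum_v R_i(v) * log p_i^w(v), equation (5).
    # Higher (less negative) scores correspond to records that are closer to
    # the relevance models and therefore rank higher.
    score = 0.0
    for field, R in relevance_models.items():
        p = record_lms[field]
        score += alphas[field] * sum(R[v] * math.log(p(v)) for v in R)
    return score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the model for retrieval",
"sec_num": "3.5"
},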
{
"text": "We tested the performance of our model on a January 2005 snapshot of the National Science Digital Library repository. The snapshot contains a total of 656,992 records, spanning 92 distinct (though sometimes related) fields. 6 Only 7 of these fields are present in every record, and half the fields are present in less than 1% of the records. An average record contains only 17 of the 92 fields. Our experiments focus on a subset of 5 fields (title, description, subject, content and audience). These fields were selected for two reasons: (i) they occur frequently enough to allow a meaningful evaluation and (ii) they seem plausible to be included in a potential query. 7 Of these fields, title represents the title of the resource, description is a very brief abstract, content is a more detailed description (but not the full content) of the resource, subject is a library-like classification of the topic covered by the resource, and audience reflects the target reading level (e.g. elementary school or post-graduate). Summary statistics for these fields are provided in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1075,
"end": 1082,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset and queries",
"sec_num": "4.1"
},
{
"text": "The dataset was randomly split into three subsets: the training set, which comprised 50% of the records and was used for estimating the relevance models as described in section 3.5; the held-out set, which comprised 25% of the data and was used to tune the smoothing parameters \u00b5 i and the bandwidth parameters \u03b1 i ; and the evaluation set, which contained 25% of the records and was used to evaluate the performance of the tuned model 8 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and queries",
"sec_num": "4.1"
},
{
"text": "Our experiments are based on a set of 127 automatically generated queries. We randomly split the queries into two groups, 64 for training and 63 for evaluation. The queries were constructed by combining two randomly picked subject words with two audience words, and then discarding any combination that had less than 10 exact matches in any of the three subsets of our collection. This procedure yields queries such as Q 91 ={subject:'artificial intelligence' AND audience='researchers'}, or Q 101 ={subject:'philosophy' AND audience='high school'}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and queries",
"sec_num": "4.1"
},
{
"text": "We evaluate our model by its ability to find \"relevant\" records in the face of missing values. We de-fine a record w to be relevant to the user's query q if every keyword in q is found in the corresponding field of w. For example, in order to be relevant to Q 101 a record must contain the word 'philosophy' in the subject field and words 'high' and 'school' in the audience field. If either of the keywords is missing, the record is considered non-relevant. 9 When the testing records are fully observable, achieving perfect retrieval accuracy is trivial: we simply return all records that match all query keywords in the subject and audience fields. As we stated earlier, our main interest concerns the scenario when parts of the testing data are missing. We are going to simulate this scenario in a rather extreme manner by completely removing the subject and audience fields from all testing records. This means that a straightforward approach -matching query fields against record fields -will yield no relevant results. Our approach will rank testing records by comparing their title, description and content fields against the query-based relevance models, as discussed in section 3.5.",
"cite_spans": [
{
"start": 459,
"end": 460,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation paradigm",
"sec_num": "4.2"
},
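{
"text": "The relevance criterion above is mechanical, so it can be checked automatically; a small sketch (ours, assuming the tokenized field representation used earlier):

def is_relevant(query, record):
    # A record is relevant iff every query keyword occurs in the
    # corresponding field of the record.
    return all(
        all(word in record.get(field, []) for word in words)
        for field, words in query.items()
    )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation paradigm",
"sec_num": "4.2"
},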
{
"text": "We will use the standard rank-based evaluation metrics: precision and recall. Let N R be the total number of records relevant to a given query, suppose that the first K records in our ranking contain N K relevant ones. Precision at rank K is defined as N K K and recall is defined as N K N R . Average precision is defined as the mean precision over all ranks where relevant items occur. R-precision is defined as precision at rank K=N R .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation paradigm",
"sec_num": "4.2"
},
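{
"text": "For completeness, a brief sketch (ours) of the rank-based metrics as defined above, given a ranked list of record ids and the set of relevant ids; relevant records missing from the ranking contribute zero precision, following the usual convention.

def average_precision(ranking, relevant):
    # Mean of the precision values at every rank where a relevant record occurs.
    hits, precisions = 0, []
    for k, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def r_precision(ranking, relevant):
    # Precision at rank K = N_R, the total number of relevant records.
    n_r = len(relevant)
    return sum(1 for doc_id in ranking[:n_r] if doc_id in relevant) / n_r if n_r else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation paradigm",
"sec_num": "4.2"
},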
{
"text": "Our experiments will compare the ranking performance of the following retrieval systems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "4.3"
},
{
"text": "cLM is a cheating version of un-structured text search using a state-of-the-art language-modeling approach (Ponte and Croft, 1998) . We disregard the structure, take all query keywords and run them against a concatenation of all fields in the testing records. This is a \"cheating\" baseline, since the con-catenation includes the audience and subject fields, which are supposed to be missing from the testing records. We use Dirichlet smoothing (Lafferty and Zhai, 2001) , with parameters optimized on the training data. This baseline mimics the core search capability currently available on the NSDL website.",
"cite_spans": [
{
"start": 107,
"end": 130,
"text": "(Ponte and Croft, 1998)",
"ref_id": "BIBREF10"
},
{
"start": 444,
"end": 469,
"text": "(Lafferty and Zhai, 2001)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "4.3"
},
{
"text": "bLM is a combination of SQL-like structured matching and unstructured search with query expansion. We take all training records that contain an exact match to our query and select 10 highlyweighted words from the title, description, and content fields of these records. We run the resulting 30 words as a language modeling query against the concatenation of title, description, and content fields in the testing records. This is a non-cheating baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "4.3"
},
{
"text": "bMatch is a structured extension of bLM. As in bLM, we pick training records that contain an exact match to the query fields. Then we match 10 highly-weighted title words, against the title field of testing records, do the same for the description and content fields, and merge the three resulting ranked lists. This is a non-cheating baseline that is similar to our model (SRM). The main difference is that this approach uses exact matching to select the training records, whereas SRM leverages a best-match language modeling algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "4.3"
},
{
"text": "SRM is the Structured Relevance Model, as described in section 3.5. For reasons of both effectiveness and efficiency, we firstly run the original query to retrieve top-500 records, then use these records to build SRMs. When calculating the cross entropy(equ. 5), for each field we only include the top-100 words which will appear in that field with the largest probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "4.3"
},
{
"text": "Note that our baselines do not include a standard SQL approach directly on testing records. Such an approach would have perfect performance in a \"cheating\" scenario with observable subject and audience fields, but would not match any records when the fields are removed. Table 2 shows the performance of our model (SRM) against the three baselines. The model parameters were tuned using the 64 training queries on the training and held-out sets. the evalution corpus.)",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 278,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "4.3"
},
{
"text": "The upper half of Table 2 shows precision at fixed recall levels; the lower half shows precision at different ranks. The %change column shows relative difference between our model and the baseline bLM. The improved column shows the number of queries where SRM exceeded bLM vs. the number of queries where performance was different. For example, 33/49 means that SRM out-performed bLM on 33 queries out of 63, underperformed on 49\u221233=16 queries, and had exactly the same performance on 63\u221249=14 queries. Bold figures indicate statistically significant differences (according to the sign test with p < 0.05).",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.4"
},
{
"text": "The results show that SRM outperforms three baselines in the high-precision region, beating bLM's mean average precision by 29%. Useroriented metrics, such as R-precision and precision at 10 documents, are improved by 39.4% and 44.3% respectively. The absolute performance figures are also very encouraging. Precision of 28% at rank 10 means that on average almost 3 out of the top 10 records in the ranked list are relevant, despite the requested fields not being available to the model. We note that SRM continues to outperform bLM until very high recall and until the 100-document cutoff. After that, SRM degrades rapidly with respect to bLM. We feel the drop in effectiveness is of marginal interest because precision is already well below 10% and few users will be continuing to that depth in the list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.4"
},
{
"text": "It is encouraging to see that SRM outperforms both cLM, the cheating baseline that takes advantage of the field values that are supposed to be \"missing\", and bMatch, suggesting that best-match retrieval provides a superior strategy for selecting a set of appropriate training records.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.4"
},
{
"text": "We have developed and empirically validated a new retrieval model for semi-structured text. The model is based on the idea that missing or corrupted values for one field can be inferred from values in other fields of the record. The cross-field inference makes it possible to find documents in response to a structured query when those query fields do not exist in the relevant documents at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "We validated the SRM approach on a large archive of the NSDL repository. We developed a large set of structured Boolean queries that had relevant documents in the test portion of collection. We then indexed the documents without the fields used in the queries. As a result, using standard field matching approaches, not a single document would be returned in response to the queries-in particular, no relevant documents would be found. We showed that standard information retrieval techniques and structured field matching could be combined to address this problem, but that the SRM approach outperforms them. We note that SRM brought two relevant documents into the top fiveagain, querying on missing fields-and achieved an average precision of 23%, a more than 35% improvement over a state-of-the-art relevance model approach combining the standard field matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Our work is continuing by exploring methods for handling fields with incorrect or corrupted values. The challenge becomes more than just inferring what values might be there; it requires combining likely missing values with confidence in the values already present: if an audience field contains 'undergraduate', it should be unlikely that 'K-6' would be a plausible value, too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "In addition to using SRMs for retrieval, we are currently extending the ideas to provide field validation and suggestions for data entry and validation: the same ideas used to find documents with missing field values can also be used to suggest potential values for a field and to identify values that seem inappropriate. We have also begun explorations toward using inferred values to help a user browse when starting from some structured informatione.g., given values for two fields, what values are probable for other fields.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "http://www.nsdl.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Some of the NSDL metadata fields overlap substantially in meaning, so it might be argued that the overlapping fields will cover the collection better. Under the broadest possible interpretation of field meanings, more than 7% of the documents still contain no subject and 95% still contain no audience field.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://inex.is.informatik.uni-duisburg.de/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We allow each field to have its own vocabulary Vi, since we generally do not expect author names to occur in the audience field, etc. We also allow Vi to share same words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As of May 2006, the NSDL contains over 1.5 million documents.7 The most frequent NSDL fields (id, icon, url, link and 4 brand fields) seem unlikely to be used in user queries.8 In real use, typical pseudo relevance feedback scheme can be followed: retrieve top-k documents to build relevance models then perform IR again on the same whole collection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This definition of relevance is unduly conservative by the standards of Information Retrieval researchers. Many records that might be considered relevant by a human annotator will be treated as non-relevant, artificially decreasing the accuracy of any retrieval algorithm. However, our approach has the advantage of being fully automatic: it allows us to test our model on a scale that would be prohibitively expensive with manual relevance judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the Center for Intelligent Information Retrieval and in part by the Defense Advanced Research Projects Agency (DARPA) under contract number HR0011-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An extended relational document retrieval model",
"authors": [
{
"first": "D",
"middle": [
"C"
],
"last": "Blair",
"suffix": ""
}
],
"year": 1988,
"venue": "Inf. Process. Manage",
"volume": "24",
"issue": "3",
"pages": "349--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.C. Blair. 1988. An extended relational document re- trieval model. Inf. Process. Manage., 24(3):349-371.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "WHIRL: A word-based information representation language",
"authors": [
{
"first": "W",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2000,
"venue": "Artificial Intelligence",
"volume": "118",
"issue": "1-2",
"pages": "163--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W.W. Cohen. 2000. WHIRL: A word-based informa- tion representation language. Artificial Intelligence, 118(1-2):163-196.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Integrating IR and RDBMS Using Cooperative Indexing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Defazio",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Daoud",
"suffix": ""
},
{
"first": "L",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Srinivasan",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "84--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. DeFazio, A. Daoud, L. A. Smith, and J. Srinivasan. 1995. Integrating IR and RDBMS Using Cooperative Indexing. In Proceedings of SIGIR, pages 84-92.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Non-first normal form universal relations: an application to information retrieval systems",
"authors": [
{
"first": "B",
"middle": [
"C"
],
"last": "Desai",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sadri",
"suffix": ""
}
],
"year": 1987,
"venue": "Inf. Syst",
"volume": "12",
"issue": "1",
"pages": "49--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. C. Desai, P. Goyal, and F. Sadri. 1987. Non-first nor- mal form universal relations: an application to infor- mation retrieval systems. Inf. Syst., 12(1):49-55.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning probabilistic relational models",
"authors": [
{
"first": "N",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Getoor",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Pfeffer",
"suffix": ""
}
],
"year": 1999,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "1300--1309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Friedman, L. Getoor, D. Koller, and A. Pfeffer. 1999. Learning probabilistic relational models. In IJCAI, pages 1300-1309.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A probabilistic relational model for the integration of IR and databases",
"authors": [
{
"first": "N",
"middle": [],
"last": "Fuhr",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of SI-GIR",
"volume": "",
"issue": "",
"pages": "309--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Fuhr. 1993. A probabilistic relational model for the integration of IR and databases. In Proceedings of SI- GIR, pages 309-317.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Probabilistic models for relational data",
"authors": [
{
"first": "D",
"middle": [],
"last": "Heckerman",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Meek",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Heckerman, C. Meek, and D. Koller. 2004. Proba- bilistic models for relational data. Technical Report MSR-TR-2004-30, Microsoft Research.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Document language models, query models, and risk minimization for information retrieval",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "111--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty and C. Zhai. 2001. Document language mod- els, query models, and risk minimization for informa- tion retrieval. In Proceedings of SIGIR, pages 111- 119.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Text retrieval and the relational model",
"authors": [
{
"first": "I",
"middle": [],
"last": "Macleod",
"suffix": ""
}
],
"year": 1991,
"venue": "Journal of the American Society for Information Science",
"volume": "42",
"issue": "3",
"pages": "155--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Macleod. 1991. Text retrieval and the relational model. Journal of the American Society for Information Sci- ence, 42(3):155-165.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning relational probability trees",
"authors": [
{
"first": "J",
"middle": [],
"last": "Neville",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jensen",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Friedland",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hay",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACM KDD",
"volume": "",
"issue": "",
"pages": "625--630",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Neville, D. Jensen, L. Friedland, and M. Hay. 2003. Learning relational probability trees. In Proceedings of ACM KDD, pages 625-630, New York, NY, USA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A language modeling approach to information retrieval",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Ponte",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "275--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Ponte and W. B. Croft. 1998. A language modeling approach to information retrieval. In Proceedings of SIGIR, pages 275-281.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Probabilistic classification and clustering in relational data",
"authors": [
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Segal",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "870--876",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Taskar, E. Segal, and D. Koller. 2001. Probabilistic classification and clustering in relational data. In Pro- ceedings of IJCAI, pages 870-876.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Integrating INQUERY with an RDBMS to support text retrieval",
"authors": [
{
"first": "S",
"middle": [
"R"
],
"last": "Vasanthakumar",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Callan",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 1996,
"venue": "IEEE Data Eng. Bull",
"volume": "19",
"issue": "1",
"pages": "24--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. R. Vasanthakumar, J.P. Callan, and W.B. Croft. 1996. Integrating INQUERY with an RDBMS to support text retrieval. IEEE Data Eng. Bull., 19(1):24-33.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "On the integration of IR and databases",
"authors": [
{
"first": "A",
"middle": [
"D"
],
"last": "Vries",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Wilschut",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of IFIP 2.6 Working Conf. on Data Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.D. Vries and A. Wilschut. 1999. On the integration of IR and databases. In Proceedings of IFIP 2.6 Working Conf. on Data Semantics, Rotorua, New Zealand.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"text": "Summary statistics for the five NSDL fields used in our retrieval experiments.",
"content": "<table><tr><td>models R 1 . . .R m , intended to reflect the user's in-</td></tr><tr><td>formation need. We will then rank testing records by</td></tr><tr><td>their divergence from these relevance models. A rel-</td></tr><tr><td>evance R i (v) specifies how plausible it is that word</td></tr><tr><td>v would occur in the i'th field of a record, given</td></tr><tr><td>that the record contains a perfect match to the query</td></tr><tr><td>fields q 1 . . .q m :</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "The results are for the 63 test queries run against the evaluation corpus. (Similar results occur if the 64 training queries are run against",
"content": "<table><tr><td/><td colspan=\"2\">cLM bMatch</td><td>bLM</td><td colspan=\"3\">SRM %change improved</td></tr><tr><td>Rel-ret:</td><td>949</td><td>582</td><td>914</td><td>861</td><td>-5.80</td><td>26/50</td></tr><tr><td colspan=\"3\">Interpolated Recall -Precision:</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">at 0.00 0.3852</td><td colspan=\"3\">0.3730 0.4153 0.5448</td><td>31.2</td><td>33/49</td></tr><tr><td colspan=\"2\">at 0.10 0.3014</td><td colspan=\"3\">0.3020 0.3314 0.4783</td><td>44.3</td><td>42/56</td></tr><tr><td colspan=\"2\">at 0.20 0.2307</td><td colspan=\"3\">0.2256 0.2660 0.3641</td><td>36.9</td><td>40/59</td></tr><tr><td colspan=\"2\">at 0.30 0.2105</td><td colspan=\"3\">0.1471 0.2126 0.2971</td><td>39.8</td><td>36/58</td></tr><tr><td colspan=\"2\">at 0.40 0.1880</td><td colspan=\"3\">0.1130 0.1783 0.2352</td><td>31.9</td><td>36/58</td></tr><tr><td colspan=\"2\">at 0.50 0.1803</td><td colspan=\"3\">0.0679 0.1591 0.1911</td><td>20.1</td><td>32/57</td></tr><tr><td colspan=\"2\">at 0.60 0.1637</td><td colspan=\"3\">0.0371 0.1242 0.1439</td><td>15.8</td><td>27/51</td></tr><tr><td colspan=\"2\">at 0.70 0.1513</td><td colspan=\"3\">0.0161 0.1001 0.1089</td><td>8.7</td><td>21/42</td></tr><tr><td colspan=\"2\">at 0.80 0.1432</td><td colspan=\"3\">0.0095 0.0901 0.0747</td><td>-17.0</td><td>18/36</td></tr><tr><td colspan=\"2\">at 0.90 0.1292</td><td colspan=\"3\">0.0055 0.0675 0.0518</td><td>-23.2</td><td>12/27</td></tr><tr><td colspan=\"2\">at 1.00 0.1154</td><td colspan=\"3\">0.0043 0.0593 0.0420</td><td>-29.2</td><td>9/23</td></tr><tr><td colspan=\"2\">Avg.Prec. 0.1790</td><td colspan=\"3\">0.1050 0.1668 0.2156</td><td>29.25</td><td>43/63</td></tr><tr><td>Precision at:</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">5 docs 0.1651</td><td colspan=\"3\">0.2159 0.2413 0.3556</td><td>47.4</td><td>32/43</td></tr><tr><td colspan=\"2\">10 docs 0.1571</td><td colspan=\"3\">0.1651 0.2063 0.2889</td><td>40.0</td><td>34/48</td></tr><tr><td colspan=\"2\">15 docs 0.1577</td><td colspan=\"3\">0.1471 0.1841 0.2360</td><td>28.2</td><td>32/49</td></tr><tr><td colspan=\"2\">20 docs 0.1540</td><td colspan=\"3\">0.1349 0.1722 0.2024</td><td>17.5</td><td>28/47</td></tr><tr><td colspan=\"2\">30 docs 0.1450</td><td colspan=\"3\">0.1101 0.1492 0.1677</td><td>12.4</td><td>29/50</td></tr><tr><td colspan=\"2\">docs 0.0913</td><td colspan=\"3\">0.0465 0.0849 0.0871</td><td>2.6</td><td>37/57</td></tr><tr><td colspan=\"2\">200 docs 0.0552</td><td colspan=\"3\">0.0279 0.0539 0.0506</td><td>-6.2</td><td>33/53</td></tr><tr><td colspan=\"2\">500 docs 0.0264</td><td colspan=\"3\">0.0163 0.0255 0.0243</td><td>-4.5</td><td>26/48</td></tr><tr><td colspan=\"2\">1000 docs 0.0151</td><td colspan=\"3\">0.0092 0.0145 0.0137</td><td>-5.8</td><td>26/50</td></tr><tr><td colspan=\"2\">R-Prec. 0.1587</td><td colspan=\"3\">0.1204 0.1681 0.2344</td><td>39.44</td><td>31/49</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"text": "Performance of the 63 test queries retrieving 1000 documents on the evaluation data. Bold figures show statistically significant differences. Across all 63 queries, there are 1253 relevant documents.",
"content": "<table/>",
"num": null,
"type_str": "table"
}
}
}
}