{
"paper_id": "H01-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:31:12.615131Z"
},
"title": "Assigning Belief Scores to Names in Queries",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Dozier",
"suffix": "",
"affiliation": {
"laboratory": "Research and Development Thomson Legal and Regulatory",
"institution": "",
"location": {
"addrLine": "610 Opperman Drive Eagan",
"postCode": "55123",
"region": "MN",
"country": "USA"
}
},
"email": "chris.dozier@westgroup.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Assuming that the goal of a person name query is to find references to a particular person, we argue that one can derive better relevance scores using probabilities derived from a language model of personal names than one can using corpus based occurrence frequencies such as inverse document frequency (idf). We present here a method of calculating person name match probability using a language model derived from a directory of legal professionals. We compare how well name match probability and idf predict search precision of word proximity queries derived from names of legal professionals and major league baseball players. Our results show that name match probability is a better predictor of relevance than idf. We also indicate how rare names with high match probability can be used as virtual tags within a corpus to identify effective collocation features for person names within a professional class.",
"pdf_parse": {
"paper_id": "H01-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "Assuming that the goal of a person name query is to find references to a particular person, we argue that one can derive better relevance scores using probabilities derived from a language model of personal names than one can using corpus based occurrence frequencies such as inverse document frequency (idf). We present here a method of calculating person name match probability using a language model derived from a directory of legal professionals. We compare how well name match probability and idf predict search precision of word proximity queries derived from names of legal professionals and major league baseball players. Our results show that name match probability is a better predictor of relevance than idf. We also indicate how rare names with high match probability can be used as virtual tags within a corpus to identify effective collocation features for person names within a professional class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Some of the most common types of queries submitted to search engines both on the internet and on proprietary text search systems consist simply of a person's name. To improve the way such queries are handled, it would be useful if search engines could estimate the likelihood or belief that a name contained in a document pertains to the name in the query. Traditionally, relevance likelihood for name phrases has been based on inverse document frequency or idf, [3] [4] . The idea behind this relevance estimate is that names which rarely occur in the corpus are thought to be more indicative of relevance than names that commonly occur.",
"cite_spans": [
{
"start": 463,
"end": 466,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 467,
"end": 470,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
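For concreteness, here is a minimal sketch of the idf score the paper compares against. The log base and the absence of smoothing are our assumptions; the paper does not spell out the exact idf variant it has in mind.

```python
import math

def idf(doc_freq: int, num_docs: int) -> float:
    # Standard inverse document frequency: the fewer documents a phrase occurs in,
    # the higher its score.  The natural log and lack of smoothing are assumptions;
    # the paper does not specify its exact idf formula.
    return math.log(num_docs / doc_freq)

# With the document frequencies reported later for the 27,000-article WSJ corpus:
#   idf of "Trent Lott" (80 docs) = log(27000 / 80) ~ 5.8
#   idf of "John Smith" (24 docs) = log(27000 / 24) ~ 7.0
# so idf treats "John Smith" as the more discriminative query, which is exactly
# the behavior this paper argues against for person names.
```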
{
"text": "Assuming that the goal of a person name query is to find references to a particular person, we argue that one can derive better relevance scores using probabilities derived from a language model of personal names than one can using corpus based occurrence frequencies. The reason for this is that finding references to a particular person in text is more dependent upon the relative rarity of the name with respect to the human population than it is on the rarity of the name within a corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "To get an intuitive idea of this point, consider that, within a corpus of 27,000 Wall Street Journal articles published between January and August of the year 2000, the name \"Trent Lott\" occurred in 80 documents while the name \"John Smith\" occurred in 24. All 80 references to \"Trent Lott\" referred to the majority leader of the U.S. Senate, while \"John Smith\" references mapped to 5 different people. This is not surprising. From our experience, we know that \"Trent Lott\" is an uncommon name and \"John Smith\" is a common one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "We present here evidence that name match probability based on a language model predicts relevance for name queries far better than idf. It may be argued that idf was never intended to be used to measure the relative ambiguity of a name query. However, idf is the standard measure used in probabilistic search engines to measure the degree of relevance terms and phrases within a collection have to the terms and phrases in queries, [1] [5] . For this reason, we take idf to be the standard against which to compare name match probability.",
"cite_spans": [
{
"start": 432,
"end": 435,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 436,
"end": 439,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Being able to predict relevance through name match probabilities enables us to do three things. First, it tells us when we need to add information to the query to improve precision either by prompting the user for it or automatically expanding the query. Second, and perhaps more importantly, it enables us to use names with high match probabilities as virtual tags that can help us find useful collocation features to disambiguate names within a given class of names, such as the names of attorneys and judges. For purposes of this paper, we define an ambiguous name as one likely to be shared by many people and an unambiguous name as one likely to apply to a single person or to only a few people. And third, match probability can be used as a feature within a name search operator to improve search precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "The motivation for our work is an effort to develop a name search operator to find attorneys and judges in the news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DESCRIPTION OF MATCH PROBABILITY CALCULATION FOR PERSON NAMES",
"sec_num": "2."
},
{
"text": "In our particular application, we wish to allow users to search for newspaper references to attorneys and judges listed in a directory of U.S. legal professionals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DESCRIPTION OF MATCH PROBABILITY CALCULATION FOR PERSON NAMES",
"sec_num": "2."
},
{
"text": "This directory contains the curriculum vitae of approximately one million people. In this section, we show how we calculate person name match probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DESCRIPTION OF MATCH PROBABILITY CALCULATION FOR PERSON NAMES",
"sec_num": "2."
},
{
"text": "To compute the probability of relevance or match probability for a name, we perform three steps. First, we compute a probability distribution for the first and last names in our name directory. This is our language model. Second, we compute a name's probability by multiplying its first name probability with its last name probability. Third, we compute its match probability by taking the reciprocal of the product of the name probability and the size of the human population likely to be referenced in the corpus. For our Wall Street Journal test corpus, we estimated this size to be approximately the size of the U.S. population or 300 million. Formulas for the three steps are shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DESCRIPTION OF MATCH PROBABILITY CALCULATION FOR PERSON NAMES",
"sec_num": "2."
},
{
"text": "where F = number of occurrences of first name, L = number of occurrences of last name, and N = number of names in the directory. where H = size of human population likely to be referenced by the collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DESCRIPTION OF MATCH PROBABILITY CALCULATION FOR PERSON NAMES",
"sec_num": "2."
},
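A minimal sketch of the three-step calculation, assuming the directory counts F and L, the directory size N, and the population size H from the text. The exact form of the step-3 denominator is an assumption on our part: the prose describes the reciprocal of P(name) times H, and the sketch adds 1 to the denominator so that the result stays a valid probability and roughly matches the Table 1 figures.

```python
def name_probability(first_count: int, last_count: int, directory_size: int) -> float:
    # Steps 1 and 2: unigram probabilities of the first and last name estimated
    # from the legal directory, multiplied under an independence assumption.
    p_first = first_count / directory_size
    p_last = last_count / directory_size
    return p_first * p_last

def match_probability(p_name: float, population: int = 300_000_000) -> float:
    # Step 3: belief that a mention of this name refers to the one directory
    # person of interest.  The "+ 1.0" is our assumption (see the lead-in);
    # the prose describes simply 1 / (P(name) * H).
    return 1.0 / (1.0 + p_name * population)

# Roughly reproducing Table 1:
#   Trent Lott:  p_name ~ 4.1e-9  ->  match_probability ~ 0.45
#   John Smith:  p_name ~ 2.4e-4  ->  match_probability ~ 1.4e-5
```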
{
"text": "Example calculations for Trent Lott and John Smith are shown below in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "DESCRIPTION OF MATCH PROBABILITY CALCULATION FOR PERSON NAMES",
"sec_num": "2."
},
{
"text": "In this example, the match probability for Trent Lott is approximately four orders of magnitude higher than the match probability for John Smith, while idf or document frequency suggests the likelihood of relevance for documents retrieved for John Smith is higher than for documents retrieved for Trent Lott. Both empirically and intuitively, match probability is a better predictor of relevance here than idf.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DESCRIPTION OF MATCH PROBABILITY CALCULATION FOR PERSON NAMES",
"sec_num": "2."
},
{
"text": "To test our hypothesis that name match probability predicts relevance better than idf, we compared how well name queries with high match probabilities performed against name queries with high idf. We performed two experiments. In the first, we selected names of individuals in our legal directory. In the second, we used the names of currently active major league baseball players.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EVALUATION OF NAME MATCH PROBABILITY VERSUS IDF",
"sec_num": "3."
},
{
"text": "To conduct the first experiment, we labeled person names in a collection of 27,000 WSJ documents with a commercially available name tagging program. We then extracted these names and created a merged list of names specified by first and last name and pulled from this list names that occurred within our legal directory. We then sorted this list by name match probability and by document occurrence frequency (which is equivalent to idf) to create two lists. We then binned the names in the name match probability list into sets that fell between the following probability ranges: 1.0-0.9, 0.9-0.8 ,0.8-0.7, 0.7-0.6, 0.6-0.5, 0.5-0.4, 0.4-0.3, 0.3-0.2, 0.2-0.1, and 0.1-0.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EVALUATION OF NAME MATCH PROBABILITY VERSUS IDF",
"sec_num": "3."
},
{
"text": "We binned the names in the document frequency list into sets that fell into the following document occurrence frequencies: 1, 2, 3, 4, 5, 6, 7, 8, 9, and >=10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EVALUATION OF NAME MATCH PROBABILITY VERSUS IDF",
"sec_num": "3."
},
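A sketch of the binning and per-bin precision bookkeeping behind Tables 2 and 3. The function names and input shapes are illustrative assumptions rather than the authors' code.

```python
def bin_by_match_probability(names_with_prob):
    # Ten 0.1-wide bins: index 9 holds probabilities in 1.0-0.9 and index 0
    # holds 0.1-0.0 (exact boundary handling is an assumption).
    bins = {i: [] for i in range(10)}
    for name, p in names_with_prob:
        bins[min(int(p * 10), 9)].append(name)
    return bins

def bin_by_doc_frequency(names_with_df):
    # Bins for document frequencies 1..9 plus a single ">=10" bin, as in Table 3.
    bins = {k: [] for k in list(range(1, 10)) + [">=10"]}
    for name, df in names_with_df:
        bins[df if df < 10 else ">=10"].append(name)
    return bins

def bin_precision(relevant_per_query, returned_per_query):
    # Search precision of a bin: relevant documents returned by all of the bin's
    # queries divided by the total number of documents returned (Section 3).
    return sum(relevant_per_query) / sum(returned_per_query)
```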
{
"text": "We then selected 50 names at random from each of these bins (except for bins associated with 0.8-0.7 and 0.7-0.6 probabilities which contained 42 and 31 names respectively). For each name selected, we identified the legal directory entry that was compatible with the name. In most cases, only one legal directory entry was compatible with the name. In some cases, multiple entries were compatible. For example, the name \"Paul Brown\" is compatible with 71 legal directory entries since there are 71 people in the directory with the first name \"Paul\" and the last name \"Brown\". In these cases, we selected one of the entries at random.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EVALUATION OF NAME MATCH PROBABILITY VERSUS IDF",
"sec_num": "3."
},
{
"text": "For each name in each bin, we found the set of documents in the WSJ collection that would be returned by the word proximity query \"First_name +2 Last_name\". That is, the documents that contained the first name followed within two words by the last name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EVALUATION OF NAME MATCH PROBABILITY VERSUS IDF",
"sec_num": "3."
},
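A sketch of the word proximity test used for these queries, assuming a simple regular-expression tokenization; a production search engine's +2 operator may tokenize and match somewhat differently.

```python
import re

def matches_proximity_query(document: str, first: str, last: str, window: int = 2) -> bool:
    # True if `first` is followed within `window` words by `last`, i.e. the
    # word proximity query  First_name +2 Last_name  used throughout Section 3.
    tokens = re.findall(r"[A-Za-z']+", document.lower())
    first, last = first.lower(), last.lower()
    for i, tok in enumerate(tokens):
        if tok == first and last in tokens[i + 1 : i + 1 + window]:
            return True
    return False

# matches_proximity_query("Senate majority leader Trent Lott said ...", "Trent", "Lott")  -> True
# matches_proximity_query("counsel John R. Smith argued ...", "John", "Smith")            -> True
```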
{
"text": "The search precision results for match probability and document frequency bins are shown in tables 2 and 3 below. The search precision of each bin was the number of relevant documents returned by the names in the bin divided by the total number of documents returned. The row labeled \"Number Unique Names in Each Category\" is a count of the number of unique first and last name pairs found within the WSJ collection for the probability and document frequency ranges indicated. It was from these sets of names that we selected our queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EVALUATION OF NAME MATCH PROBABILITY VERSUS IDF",
"sec_num": "3."
},
{
"text": "The results in tables 2 and 3 show that match probability does a better job of estimating relevance than idf. Table 2 shows that search precision goes up as match probability rises. Table 3 shows no apparent correspondence between document frequency and search precision. In the second experiment, we performed basically the same steps described above on the names of the 286 baseball players currently playing in the major leagues. We assigned name match probabilities to these names using the language model we derived from the legal directory. Of the 286 names, we found 82 that were compatible with one or more name instances in the WSJ collection. For all 82, we found the set of documents in the WSJ collection that would be returned by the word proximity query \"First_name +2 Last_name\". We then measured how frequently the documents returned for a particular word proximity query actually referenced the player with which the name query was paired. As in the attorney and judge name experiment, name match probability predicted relevance more accurately than idf. The results for baseball player names are shown in tables 4 and 5 above.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 182,
"end": 189,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "EVALUATION OF NAME MATCH PROBABILITY VERSUS IDF",
"sec_num": "3."
},
{
"text": "Note that on average the search precision for baseball players was higher than for attorneys and judges. This is due to the combined effects of there being far fewer baseball player names than attorney and judge names and the fact that the average probability of a baseball player being mentioned in the news is higher than the average probability for a judge or attorney being mentioned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EVALUATION OF NAME MATCH PROBABILITY VERSUS IDF",
"sec_num": "3."
},
{
"text": "An important use of name match probabilities is the identification of co-occurrence features in text that can serve to disambiguate name references. If we know certain names in the corpora very probably refer to certain individuals listed in a professional directory, we can look for words that co-occur frequently with these names but infrequently with names in general. These words are likely to work well at disambiguating references to names of low match probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "USING RARE NAMES TO IDENTIFY SEARCH FEATURES",
"sec_num": "4."
},
{
"text": "As an example of feature identification, consider the figures 1 and 2 above. In these figures, the word \"rare\" stands for the 20% of names in the legal directory that have the highest match probability. The phrase \"medium rare\" stands for the next 20% and so on. The word \"common\" then stands for the 20% of names with the lowest match probability. For each of the five categories of name rarity, the graphs in the figures show the probability of an appositive term occurring at a given word position relative to the position of a name. Figure 1 shows the probability of attorney appositive nouns such as \"attorney\", \"lawyer\", \"counsel\", or \"partner\" occurring at 12 different word positions around attorney names of varying degrees of rarity. Position -1 stands for the word position directly before the name. Position +1 stands for the position directly after. Position -2 stands for the word position two words in front of the name and so on. Figure 2 shows the probability of judge appositive nouns such as \"judge\" or \"justice\" occurring around judge names.",
"cite_spans": [],
"ref_spans": [
{
"start": 537,
"end": 545,
"text": "Figure 1",
"ref_id": null
},
{
"start": 946,
"end": 954,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "USING RARE NAMES TO IDENTIFY SEARCH FEATURES",
"sec_num": "4."
},
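A sketch of the statistic plotted in Figures 1 and 2: for each name-rarity bucket, the fraction of name mentions that have an appositive term at a given word offset from the name. The input representation, offset range, and appositive list handling are assumptions.

```python
from collections import Counter

ATTORNEY_APPOSITIVES = {"attorney", "lawyer", "counsel", "partner"}

def appositive_position_probabilities(mentions, appositives=ATTORNEY_APPOSITIVES,
                                      max_offset=6):
    # `mentions` is a list of (tokens, name_start, name_end) triples for one
    # rarity bucket, where tokens[name_start:name_end] is the name itself.
    # Returns, for each nonzero offset (negative = before the name, positive =
    # after), the fraction of mentions whose token at that offset is an
    # appositive term; this is the per-bucket quantity plotted in Figure 1.
    if not mentions:
        return {}
    hits, total = Counter(), len(mentions)
    for tokens, start, end in mentions:
        for off in range(-max_offset, max_offset + 1):
            if off == 0:
                continue
            pos = start + off if off < 0 else end - 1 + off
            if 0 <= pos < len(tokens) and tokens[pos].lower() in appositives:
                hits[off] += 1
    return {off: hits[off] / total
            for off in range(-max_offset, max_offset + 1) if off != 0}
```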
{
"text": "The graphs in figures 1 and 2 show that the probability of appositive terms occurring at particular word positions grows steadily as the name rarity increases. This demonstrates that appositive terms are good indicators for judge and attorney names within the WSJ collection. The figures also shows the word positions in which we should look for appositive terms. Figure 1 shows that we should look for attorney appositives in word positions -2, -1, +2, +4, and +5. This makes intuitive sense because it accounts for sentence constructs such as those shown in table 6.",
"cite_spans": [],
"ref_spans": [
{
"start": 364,
"end": 372,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "USING RARE NAMES TO IDENTIFY SEARCH FEATURES",
"sec_num": "4."
},
{
"text": "The sudden drop off in appositive term probability at word position +1 also makes sense since an article, adjective, or other part of speech often occurs between a trailing appositive head noun and the proper noun it modifies. The drop off at word position +3 is still something of a mystery and is not something we can explain at this time. Since +3 behavior seems to have no linguistic basis that we can perceive, we do not rely on it in constructing our search operator. Figure 2 shows that we should look for judge appositives in word position -1. This makes perfect sense since it accounts for constructs such as \" Judge William Rehnquist\" and \"Justice Antonin Scalia\". Figure 2 also suggests that using the -1 appositive test should yield good search recall since the conditional probability for rare names is about 0.9. ",
"cite_spans": [],
"ref_spans": [
{
"start": 474,
"end": 482,
"text": "Figure 2",
"ref_id": null
},
{
"start": 675,
"end": 683,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "USING RARE NAMES TO IDENTIFY SEARCH FEATURES",
"sec_num": "4."
},
{
"text": "We are currently investigating what levels of search precision and recall we can achieve with special attorney and judge name search operators using name rarity together with co-occurrence features such as appositive, city, state, firm, and court terms. Our preliminary results are encouraging. Initial experiments with the attorney search operator indicate we can achieve a nine fold improvement in search precision over simple word proximity searches over the WSJ collection while sacrificing 18% recall. Preliminary results are shown in table 7 below. We produced these results by selecting 677 attorney names at random from the legal directory that existed within the WSJ collection. For each name, we ran word proximity searches using the first and last name of the lawyers and scored the results. Using the scored results from 377 of the names, we then trained a special Bayesian based name operator that used first name, last name, city, state, firm, and name rarity information as sources of name match evidence. Finally we tested the word proximity operator performance against the special name operator using the remaining 300 names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRELIMINARY SEARCH OPERATOR EXPERIMENTS",
"sec_num": "5."
},
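The paper does not give the operator's internal form, so the sketch below is only one plausible reading: a naive-Bayes style combination of the evidence sources named above, with name rarity supplying the prior and the other features supplying likelihood ratios. Everything here, including the example numbers, is our assumption rather than the authors' implementation.

```python
import math

def name_operator_score(prior_match_prob, evidence_likelihood_ratios):
    # Combine a name's prior match probability (assumed strictly between 0 and 1)
    # with per-feature likelihood ratios
    #   P(feature observed near mention | same person) /
    #   P(feature observed near mention | different person),
    # assuming the features are conditionally independent (naive Bayes).
    # Returns the posterior probability that the mention is our directory person.
    log_odds = math.log(prior_match_prob / (1.0 - prior_match_prob))
    for ratio in evidence_likelihood_ratios:
        log_odds += math.log(ratio)
    return 1.0 / (1.0 + math.exp(-log_odds))

# e.g. a fairly common name (prior 0.05) whose mention co-occurs with the
# attorney's city (assumed ratio 8) and firm name (assumed ratio 40):
# name_operator_score(0.05, [8, 40])  ->  ~0.94
```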
{
"text": "Note that we have assumed above that word proximity searches yield 100% recall. This is not wholly accurate since it does not account for nicknames, use of first name initials, and so on. We plan to revise this recall estimate in the future, but for now we assume that a word proximity search on first and last name provides close to 100% recall in a collection such as the WSJ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRELIMINARY SEARCH OPERATOR EXPERIMENTS",
"sec_num": "5."
},
{
"text": "We plan to complete development of search operators for attorney and judges that make use of the combined features of name rarity, appositives, city, state, firm, and court terms. We plan to compare the performance of these operators against searches based on name indexes derived from combining MUC style extraction techniques and record linking techniques. [2] Our hope is that the search operators will perform at levels close to the indexed based searches so that we can avoid the operational costs of creating special name indexes.",
"cite_spans": [
{
"start": 359,
"end": 362,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FUTURE WORK",
"sec_num": "6."
},
{
"text": "We plan to mine names from text using name rarity and seed appositive phrases. For example, using a seed appositive phrase for a profession such as \"expert witness\", we plan to identify and extract a set of expert witness names. From this initial set of names, we will identify rare names and use these to identify more appositive phrases. Once the appositive phrases are identified, we plan to extract more names, then more appositive phrases, and so on until a stopping condition is reached. In this manner, we hope to develop a technique to automatically extract name lists from text collections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FUTURE WORK",
"sec_num": "6."
},
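A sketch of the bootstrapping loop just described. The three callables passed in (a name extractor, an appositive-phrase finder, and the match probability model of Section 2) are hypothetical placeholders for steps the paper only outlines.

```python
def mine_names(corpus, seed_phrases, extract_names_near, find_appositives_near,
               match_probability_of, max_rounds=5, rarity_threshold=0.8):
    # Alternate between harvesting names that co-occur with known appositive
    # phrases and harvesting new appositive phrases that co-occur with rare
    # (high match probability) names, until nothing new turns up or a round
    # limit is hit.  The callables' behavior is assumed, not specified.
    names, phrases = set(), set(seed_phrases)
    for _ in range(max_rounds):
        new_names = extract_names_near(corpus, phrases) - names
        if not new_names:                  # stopping condition: nothing new found
            break
        names |= new_names
        rare = {n for n in new_names if match_probability_of(n) >= rarity_threshold}
        phrases |= find_appositives_near(corpus, rare)
    return names, phrases
```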
{
"text": "Finally we plan to assess whether it is possible to develop similar name match probability calculations for other types of names such as company names, organization names, and product names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FUTURE WORK",
"sec_num": "6."
},
{
"text": "Assuming that the goal of a person name query is to find references to a particular person, we have shown that one can derive better relevance scores using probabilities derived from a language model of personal names than one can using corpus based occurrence frequencies. We presented here a method of calculating person name match probability using a language model derived from a directory of legal professionals. We compared how well name match probability and idf predict search precision of word proximity queries derived from names of legal professionals and major league baseball players. Our results showed that name match probability is a better predictor of relevance than idf. We also indicated how rare names with high match probability can be used as virtual tags within a corpus to identify effective collocation features for person names within a professional class. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Modern Information Retrieval",
"authors": [
{
"first": "R",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ribeiro-Neto",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baeza-Yates, R. and Ribeiro-Neto, B., Modern Information Retrieval. ACM Press, New York, 1999.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automatic Extraction and Linking of Person Names in Legal Text",
"authors": [
{
"first": "C",
"middle": [],
"last": "Dozier",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Haschart",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of RIAO '2000; Content Based Multimedia Information Access",
"volume": "",
"issue": "",
"pages": "1305--1321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dozier, C. and Haschart, R., \"Automatic Extraction and Linking of Person Names in Legal Text\" in Proceedings of RIAO '2000; Content Based Multimedia Information Access. Paris, France. pp.1305-1321. 2000",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Phrase Recognition and Expansion for Short, Precision-biased Queries based on a Query Log",
"authors": [
{
"first": "F",
"middle": [],
"last": "De Lima",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc.of the 22nd Annual Int. ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "145--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "de Lima, F. and Pedersen, J., Phrase Recognition and Expansion for Short, Precision-biased Queries based on a Query Log. In Proc.of the 22nd Annual Int. ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 145 - 152, Berkeley, California, USA, 1999.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Name Searching and Information Retrieval",
"authors": [
{
"first": "P",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dozier",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc.of the 2nd Conference on Empirical Methods in NLP",
"volume": "",
"issue": "",
"pages": "134--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thompson, P. and Dozier, C., Name Searching and Information Retrieval. In Proc.of the 2nd Conference on Empirical Methods in NLP, pp. 134 -140, Providence, Rhode Island, 1997.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Inference Networks for Document Retrieval",
"authors": [
{
"first": "H",
"middle": [],
"last": "Turtle",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 1990,
"venue": "Proc.of the 13 th Annual Int. ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "1--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turtle, H. and Croft, W., Inference Networks for Document Retrieval. In Proc.of the 13 th Annual Int. ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1 - 24, Brussels, Belgium, 1990.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Figure1: Conditional probability of attorney terms by word position relative to name",
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>Name</td><td>P(first name)</td><td>P(last name)</td><td>P(name)</td><td>P(name match)</td><td>Doc Freq</td></tr><tr><td>Trent Lott</td><td>0.000084</td><td>0.000048</td><td>0.00000000408</td><td>0.449371705</td><td>80</td></tr><tr><td>John Smith</td><td>0.036409</td><td>0.006552</td><td>0.00023857</td><td>0.00001397</td><td>24</td></tr></table>",
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>Match</td><td>Prob</td><td>1.0 -</td><td>0.9 -</td><td>0.8 -</td><td>0.7 -</td><td>0.6 -</td><td>0.5</td><td>-</td><td>0.4</td><td>-</td><td>0.3 -</td><td>0.2 -</td><td>0.1 -</td></tr><tr><td>Range</td><td/><td>0.9</td><td>0.8</td><td>0.7</td><td>0.6</td><td>0.5</td><td>0.4</td><td/><td>0.3</td><td/><td>0.2</td><td>0.1</td><td>0.0</td></tr><tr><td>Search</td><td/><td>0.835</td><td>0.754</td><td>0.595</td><td>0.677</td><td>0.596</td><td>0.708</td><td/><td>0.628</td><td/><td>0.544</td><td>0.520</td><td>0.12</td></tr><tr><td>Precision</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Number Unique</td><td>80</td><td>61</td><td>42</td><td>31</td><td>57</td><td>72</td><td/><td>113</td><td/><td>135</td><td>292</td><td>10758</td></tr><tr><td colspan=\"2\">Names in Each</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Category</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>Doc Freq</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>&gt;=10</td></tr><tr><td>Search</td><td>0.18</td><td>0.10</td><td>0.10</td><td>0.20</td><td>0.06</td><td>0.10</td><td>0.08</td><td>0.18</td><td>0.14</td><td>0.24</td></tr><tr><td>Precision</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Number Unique</td><td>7702</td><td>1946</td><td>703</td><td>374</td><td>224</td><td>145</td><td>95</td><td>75</td><td>55</td><td>322</td></tr><tr><td>Names in Each</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Category</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null
},
"TABREF4": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF5": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>Doc Freq</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>&gt;=10</td></tr><tr><td>Search</td><td>0.888</td><td>0.882</td><td>0.952</td><td>1.0</td><td>0.75</td><td>0.666</td><td>1.0</td><td>NA</td><td>1.0</td><td>0.74</td></tr><tr><td>Precision</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Number Unique</td><td>45</td><td>17</td><td>7</td><td>3</td><td>4</td><td>6</td><td>2</td><td>0</td><td>1</td><td>8</td></tr><tr><td>Names in Each</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Category</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null
},
"TABREF6": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>Relative</td><td>Example sentence</td></tr><tr><td>Word</td><td/></tr><tr><td>Position</td><td/></tr><tr><td>-2</td><td>Attorney General Janet Reno said today \u2026..</td></tr><tr><td>-1</td><td>Attorney Jack Smith defended his client vigorously.</td></tr><tr><td>+2</td><td>said Vicki Patton, senior attorney for Environmental</td></tr><tr><td/><td>Defense</td></tr><tr><td>+4</td><td>said Jim Hahn, Los Angeles City Attorney</td></tr><tr><td>+5</td><td>says Buck Chapoton, a prominent Washington tax</td></tr><tr><td/><td>attorney</td></tr></table>",
"num": null
},
"TABREF7": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>Search Method</td><td colspan=\"3\">Precision Recall F-measure</td></tr><tr><td>Word proximity</td><td>0.09</td><td>1.00</td><td>0.17</td></tr><tr><td>Attorney Name Search</td><td>0.85</td><td>0.82</td><td>0.83</td></tr><tr><td>Operator</td><td/><td/><td/></tr></table>",
"num": null
}
}
}
}