{
"paper_id": "Y98-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:37:23.812246Z"
},
"title": "Extracting Recurrent Phrases and Terms from Texts Using a Purely Statistical Method",
"authors": [
{
"first": "Zhao-Ming",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Harold",
"middle": [],
"last": "Somers",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most statistical measures for extracting interesting word pairs such as MI and t-score require a large corpus to work well. This paper evaluates some of the most widely used statistical measures and introduces a method that can identify significant bigrams in relatively small texts by adapting Fung and Church's (1994) K-vec algorithm, which was originally designed to extract word correspondences from unaligned parallel corpora. The proposed method captures the linguistic generalisation abou lexical patterning in texts and can identify recurrent co-occurring word sequences, which might be phrases, terms, or unknown words. In addition, it has the potential of identifying key phrases and terms that reveal topicality in a text.",
"pdf_parse": {
"paper_id": "Y98-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "Most statistical measures for extracting interesting word pairs such as MI and t-score require a large corpus to work well. This paper evaluates some of the most widely used statistical measures and introduces a method that can identify significant bigrams in relatively small texts by adapting Fung and Church's (1994) K-vec algorithm, which was originally designed to extract word correspondences from unaligned parallel corpora. The proposed method captures the linguistic generalisation abou lexical patterning in texts and can identify recurrent co-occurring word sequences, which might be phrases, terms, or unknown words. In addition, it has the potential of identifying key phrases and terms that reveal topicality in a text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, there has been a growing interest in eliciting linguistic knowledge directly from corpora using statistical methods. Several quantitative measures have been proposed to identify significant lexical relations. These measures, however, are designed to work with large corpora with millions of words. Accordingly, they do not perform well in relatively small texts with a few thousand words. This paper presents a statistical method that is well-suited to extracting recurrent phrases and terms from relatively small texts. The method, a variant of Fung and Church's (1994) K-vec algorithm, is shown to be in line with linguistic generalisations about lexical cohesion in text structures. Church and Hanks (1990) and Church et al. (1991) use mutual information (MI) to estimate associations between two words. Mutual information is defined as follows.",
"cite_spans": [
{
"start": 563,
"end": 587,
"text": "Fung and Church's (1994)",
"ref_id": "BIBREF5"
},
{
"start": 703,
"end": 726,
"text": "Church and Hanks (1990)",
"ref_id": "BIBREF1"
},
{
"start": 731,
"end": 751,
"text": "Church et al. (1991)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "(1) 1(x, y) = log2 P(x)P (y) MI compares the joint probability P(x, y) (i.e. the probability of the co-occurrence of x and y) with P(x) and P(y), the independent probabilities of x and y (chance). If there is a strong association between word x and word y, then the joint probability P(x, y) will be much larger than chance P(x)P(y), and accordingly 1(x, y) > 0. If no significant relation holds between x and y, 1(x, y) will approximate to zero. If x is in complementary distribution with y, I(x, y) will be less than zero. Besides MI, Church and Hanks (1990) and Church et al. (1991) use t-score for testing the statistical significance of an co-occurrence. t-score can be approximated by (2).",
"cite_spans": [
{
"start": 537,
"end": 560,
"text": "Church and Hanks (1990)",
"ref_id": "BIBREF1"
},
{
"start": 565,
"end": 585,
"text": "Church et al. (1991)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "P(x, y)",
"sec_num": null
},
{
"text": "f (x, y ) - f (x) (Y) (2). t N f ( x, Y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(x, y)",
"sec_num": null
},
{
"text": "where f(x), f(y), and f(x, y) are the number of occurrences of x, y, and x co-occurring with y, respectively; while N is the number of occurrences of all the tokens in the text. *Chinese Knowledge Information Processing Group, Institute of Information Science, Academia Sinica, Nankang, Taipei 115, Taiwan. E-mail: imgao@hp.iis.sinica.edu.tw a=k(AB)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(x, y)",
"sec_num": null
},
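As a worked illustration of (1) and (2), the following Python sketch scores every adjacent word pair in a tokenised text. It is a minimal sketch under our own assumptions, not the paper's implementation: the helper name bigram_stats and the whitespace tokenisation are ours.

```python
import math
from collections import Counter

def bigram_stats(tokens):
    """MI (1) and the t-score approximation (2) for each adjacent word pair."""
    N = len(tokens)                              # total number of tokens
    unigram = Counter(tokens)
    bigram = Counter(zip(tokens, tokens[1:]))
    scores = {}
    for (x, y), fxy in bigram.items():
        fx, fy = unigram[x], unigram[y]
        # (1) I(x, y) = log2( P(x, y) / (P(x) P(y)) ), estimating each P as f / N
        mi = math.log2((fxy / N) / ((fx / N) * (fy / N)))
        # (2) t ~ (f(x, y) - f(x) f(y) / N) / sqrt(f(x, y))
        t = (fxy - fx * fy / N) / math.sqrt(fxy)
        scores[(x, y)] = (mi, t)
    return scores

# Pairs with t < 1.65 would be discarded, as in the paper's experiments.
for pair, (mi, t) in bigram_stats("the cat sat on the mat the cat ran".split()).items():
    print(pair, round(mi, 2), round(t, 2))
```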
{
"text": "b=k(--AB) c = k(A -B) d k(-A -B)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(x, y)",
"sec_num": null
},
{
"text": "Language, hiormation and Computation (PACLIC12), 18-20 Feb, 1998, 206-211 Several alternatives to MI and t-score have been proposed. These methods require the contingency table in (3).",
"cite_spans": [
{
"start": 49,
"end": 73,
"text": "18-20 Feb, 1998, 206-211",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "P(x, y)",
"sec_num": null
},
{
"text": "where A and B are the words in question, and k is the count of the bigrams. The -sign means not; so for example c is the count of the bigram where A is followed by a word other than B. One of the alternatives to MI is the association measure IM, which is very similar to MI. IM is calculated as in (4) (cf. Daille (1996) ). In addition, Gale and Church (1991) introduce the 0.2 coefficient using the formula in (5). Dunning (1993) notes that MI is subject to overestimation when the counts are small and thus proposes using log likelihood ratio G2 as a significance test for estimating surprise and coincidence of a rare event. G 2 is computed by the formula in (6). We conducted experiments testing all the statistical measures described above with a Chinese text of 5155 words. The Chinese text was preprocessed by the Chinese word segmentation program reported in Chen and Liu (1992) . The results of the tests are shown in Table 1 . Bigrams with a t-score lower than 1.65 have been left out. As can be seen in Table 1 , the performance of MI and t-score is not satisfactory, for many uninteresting bigrams containing pronouns and determiners are incorrectly extracted. It is obvious in Table 1 that (1: 12 and G2 outperform MI and t-score. Nevertheless, (112 gives a zero value for word pairs which always co-occur with each other, since b + d in (6) is zero if word pairs always co-occur. Therefore, bigrams consisting of proper nouns such as .1. I `Hsiao Hung', Ni le 'Ho Te', 'Te Fen' in Table 1 are given zero value, which is counterintuitive, because high values for rigid pairs are expected. Besides, icrs2 and G2 do not seem to be able to distinguish bigrams containing two content words from those containing one function word. For instance, G 2 gives a larger value to -13. 7. yi wei 'one CLASSIFIER' than the more interesting proper names /j\\ I lisiao Hung' and g . 0E7 `Lu Anni'. IM seems to outperform all the other statistical measures in small texts. By setting the threshold to -3, all the proper names together with some interesting terms such as tat feminism', tt 'equal right', fiec AA 'administrative staffcan be extracted. However, IM has a serious defect: its threshold value is difficult to determine. Fung and Church (1994) propose a simple algorithm to find word correspondences from unaligned parallel texts. The basic idea is that a true word pair should have similar distributions in terms of the position of its occurrence in the text. To estimate the similarity of co-occurrence, the parallel texts are split into the same number of segments (K) and the distributions of each word are represented in a 1...K binary vector. For instance, suppose the Chinese and English texts are divided into ten segments. Suppose further that the Chinese word tig 4t. daxue occurs ten times, with the first 3 occurrences in the fourth segment and the remaining 7 occurrences in the seventh segment and that the English word university appears twelve times, with the first 4 occurrences in the fourth segment and the remaining 8 occurrences in the seventh segment. Using the K binary vectors, the distributions of both the Chinese and English words in question can be represented as <0,0,0,1,0,0,1,0,0,0>. Mutual information (MI) and t-score are then used to estimate the correlation of a proposed word correspondence. Mutual information and t-score are computed using the formulas in (8) and (9).",
"cite_spans": [
{
"start": 307,
"end": 320,
"text": "Daille (1996)",
"ref_id": "BIBREF3"
},
{
"start": 337,
"end": 359,
"text": "Gale and Church (1991)",
"ref_id": "BIBREF6"
},
{
"start": 424,
"end": 430,
"text": "(1993)",
"ref_id": null
},
{
"start": 867,
"end": 886,
"text": "Chen and Liu (1992)",
"ref_id": "BIBREF0"
},
{
"start": 2228,
"end": 2250,
"text": "Fung and Church (1994)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 927,
"end": 934,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1014,
"end": 1021,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1190,
"end": 1197,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1495,
"end": 1502,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "P(x, y)",
"sec_num": null
},
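To make the contingency-based measures concrete, here is a minimal sketch of (3), (5), and (6) under our own naming (contingency, phi2, g2 are hypothetical helper names); the 0 log 0 = 0 convention in g2 is a standard assumption rather than something stated in the paper.

```python
import math

def contingency(tokens, A, B):
    """The counts a, b, c, d of table (3) for the bigram A B."""
    pairs = list(zip(tokens, tokens[1:]))
    a = sum(1 for x, y in pairs if x == A and y == B)   # k(A B)
    b = sum(1 for x, y in pairs if x != A and y == B)   # k(-A B)
    c = sum(1 for x, y in pairs if x == A and y != B)   # k(A -B)
    d = sum(1 for x, y in pairs if x != A and y != B)   # k(-A -B)
    return a, b, c, d

def phi2(a, b, c, d):
    """(5): (ad - bc)^2 / ((a+b)(a+c)(b+d)(c+d))."""
    denom = (a + b) * (a + c) * (b + d) * (c + d)
    return (a * d - b * c) ** 2 / denom if denom else 0.0

def g2(a, b, c, d):
    """(6): the log likelihood ratio, taking 0 log 0 = 0."""
    def xlx(v):
        return v * math.log(v) if v else 0.0
    return (xlx(a) + xlx(b) + xlx(c) + xlx(d)
            - xlx(a + b) - xlx(a + c) - xlx(b + d) - xlx(c + d)
            + xlx(a + b + c + d))
```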
{
"text": "P(Vc,Ve) MI(Vc ,Ve )= log g2 PV P(Vc)= a+b P(Ve )= a -11-cc* P (V c , V e ) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying Fung and Church's (1994) K-vec Algorithm to Extract Recurrent Monolingual Terms",
"sec_num": "3."
},
{
"text": "where a is the number of pieces of segments in which both the Chinese and the English word occur; b is the number of pieces of segment where only the Chinese word is found; c is the number of pieces of segment where only the English word is found. The t-score in (8) is introduced to filter out word pairs with low frequency which happen to co-occur in the same segment by chance.' Fung and Church set the threshold value of MI to be 0 and t-score to be 1.65. Only word pairs with both MI and t-score higher than the predetermined threshold values and in the frequency range 3-10 are considered to be potential mutual translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying Fung and Church's (1994) K-vec Algorithm to Extract Recurrent Monolingual Terms",
"sec_num": "3."
},
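Reading (7) and (8) off the segment counts a, b, and c just defined gives the short sketch below; kvec_scores is a hypothetical helper name, and P(Vc, Ve) = a/K follows the reconstruction of (7) above.

```python
import math

def kvec_scores(vc, ve):
    """MI (7) and t-score (8) for two K-long binary occurrence vectors."""
    K = len(vc)
    a = sum(1 for x, y in zip(vc, ve) if x and y)       # both words occur
    b = sum(1 for x, y in zip(vc, ve) if x and not y)   # first word only
    c = sum(1 for x, y in zip(vc, ve) if y and not x)   # second word only
    p_c, p_e, p_ce = (a + b) / K, (a + c) / K, a / K
    mi = math.log2(p_ce / (p_c * p_e)) if p_ce else float("-inf")
    # (8) t(Vc, Ve) = (P(Vc, Ve) - P(Vc) P(Ve)) / sqrt(P(Vc, Ve) / K)
    t = (p_ce - p_c * p_e) / math.sqrt(p_ce / K) if p_ce else 0.0
    return mi, t

# The daxue/university example: occurrences fall in segments 4 and 7 of K = 10.
v = [0, 0, 0, 1, 0, 0, 1, 0, 0, 0]
print(kvec_scores(v, v))  # identical distributions give the strongest scores
```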
{
"text": "The rationale behind the K-vec algorithm is that two words in parallel text associate strongly with each other if they co-occur more often than by chance in some text segments. The statistics of co-occurrence K-vec employs is actually grounded on a linguistic generalisation about lexical patterning in the text. Research by Halliday and Hasan (1976) and Hoey (1991) suggest that cohesion plays a very important role in the organisation of texts. They point out that the most straightforward form of cohesion is repetition. In addition, as each text has a topic, words or phrases closely related to the topic tend to recur in the text (cf. Salton and McGill (1983) , Phillips (1985) ).",
"cite_spans": [
{
"start": 325,
"end": 350,
"text": "Halliday and Hasan (1976)",
"ref_id": "BIBREF7"
},
{
"start": 355,
"end": 366,
"text": "Hoey (1991)",
"ref_id": "BIBREF8"
},
{
"start": 640,
"end": 664,
"text": "Salton and McGill (1983)",
"ref_id": "BIBREF10"
},
{
"start": 667,
"end": 682,
"text": "Phillips (1985)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying Fung and Church's (1994) K-vec Algorithm to Extract Recurrent Monolingual Terms",
"sec_num": "3."
},
{
"text": "K-vec can be easily applied to monolingual texts to identify recurrent noun phrases, collocations, or words not listed in the dictionary. The only necessary adaptation is that the source is the same as the target text. In addition, since sentences are the basic building blocks of a text, they are better units of a discourse segment than an ad hoc number of words as proposed by the original K-vec. As a result, a Word-Sentence Index (WSI) is required which records the position and the index of the sentence in which each word occurs. Based on WSI, adjacent word pairs that co-occur more often than by chance can be extracted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying Fung and Church's (1994) K-vec Algorithm to Extract Recurrent Monolingual Terms",
"sec_num": "3."
},
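A minimal sketch of this monolingual adaptation, reusing kvec_scores from the K-vec sketch above; build_wsi and recurrent_bigrams are hypothetical names, and the input is assumed to be a list of pre-segmented sentences (lists of words).

```python
from collections import defaultdict

def build_wsi(sentences):
    """Word-Sentence Index: for each word, the set of sentence indices in which it occurs."""
    wsi = defaultdict(set)
    for i, sentence in enumerate(sentences):
        for word in sentence:
            wsi[word].add(i)
    return wsi

def recurrent_bigrams(sentences, min_mi=0.0, min_t=1.65):
    """Adjacent word pairs whose K-vec MI and t-score pass Fung and
    Church's thresholds, with sentences serving as the K segments."""
    K = len(sentences)
    wsi = build_wsi(sentences)
    pairs = {(x, y) for s in sentences for x, y in zip(s, s[1:])}
    kept = []
    for x, y in sorted(pairs):
        vx = [1 if i in wsi[x] else 0 for i in range(K)]
        vy = [1 if i in wsi[y] else 0 for i in range(K)]
        mi, t = kvec_scores(vx, vy)  # from the K-vec sketch above
        if mi >= min_mi and t >= min_t:
            kept.append((x, y, mi, t))
    return kept
```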
{
"text": "Comparing Table 1 with Table 2 we can see that K-vec is better than MI, t-score, 0 2, and G2 in identifying collocations, recurrent proper names and phrases in a small text in terms of precision and recall. Like IM, K-vec can distinguish interesting bigrams from uninteresting ones. But unlike IM, the threshold value of K-value is predetermined (i.e. MI >=0 and t-score >= 1.65). K-value is thus more convenient than IM. In contrast with Smadja's (1993) Xtract, which was designed to extract collocations from large corpora, our proposed method is suitable for extracting recurrent rigid collocations in relatively small texts.",
"cite_spans": [
{
"start": 439,
"end": 454,
"text": "Smadja's (1993)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 23,
"end": 30,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modifying Fung and Church's (1994) K-vec Algorithm to Extract Recurrent Monolingual Terms",
"sec_num": "3."
},
{
"text": "If two extracted bigrams are adjacent to each other, they are mostly likely to be phrases or proper nouns, as shown in Table 3 . The proximity relation between two bigrams can be easily identified in the light of the WSI. It is interesting to note that many of the word pairs identified in Table 2 are key phrases that suggest topicality of the text, e.g. rstif:mf Lu Anni Incident', tcltIa 'feminism',01M1,* leacher-student relationship', VanrIN. 'Campus Affairs Committee Conference'. ' The approximation .of t-score used by Fung and Church (1994) in (8) is slightly different from (2). ",
"cite_spans": [
{
"start": 527,
"end": 549,
"text": "Fung and Church (1994)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 3",
"ref_id": null
},
{
"start": 290,
"end": 297,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modifying Fung and Church's (1994) K-vec Algorithm to Extract Recurrent Monolingual Terms",
"sec_num": "3."
},
{
"text": "This paper reconfirms the importance of selecting an appropriate unit of text in lexical knowledge acquisition, as emphasized by Church et al. (1991) . The proposed method, a simple variant of MI, t-score, and K-vec, has a higher precision than most current statistical algorithms in extracting recurrent word sequences from relatively small texts. The algorithm can be used to identify Chinese unknown words or key phrases in any language. Table 3 . Proper Names Extracted On the Basis of Table 2 and Word-Sentence Index",
"cite_spans": [
{
"start": 129,
"end": 149,
"text": "Church et al. (1991)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 441,
"end": 448,
"text": "Table 3",
"ref_id": null
},
{
"start": 490,
"end": 497,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4."
}
],
"back_matter": [
{
"text": "The first author would like to thank Prof. C.-R. Huang, Prof. K.-J. Chen, Dr. L.-F. Chien at Academia Sinica and anonymous PACLIC reviewers for their comments on an earlier draft of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word Identification for Mandarin Chinese Sentences",
"authors": [
{
"first": "K.-J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "S.-H",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "101--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, K.-J. and Liu, S.-H. (1992) \"Word Identification for Mandarin Chinese Sentences.\" In Proceedings of the International Conference on Computational Linguistics, pp. 101-107.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Word Association Norms, Mutual Information, and Lexicography",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K. and Hanks, P. (1990) \"Word Association Norms, Mutual Information, and Lexicography.\" Computational Linguistics, Vol. 16, No. 1, pp. 22-29.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using Statistics in Lexical Analysis",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hindle",
"suffix": ""
}
],
"year": 1991,
"venue": "Lexical Acquisition: Exploiting On-Line Resources to Build a Lexicon",
"volume": "",
"issue": "",
"pages": "115--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K, W. Gale, P. Hanks, and D. Hindle. (1991) 'Using Statistics in Lexical Analysis,' in Zernik (ed.) Lexical Acquisition: Exploiting On-Line Resources to Build a Lexicon, pp. 115 -164, Lawrence Erlbaum Associates Publishers.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Study and Implementation of Combined Techniques for Automatic Extraction of Terminology",
"authors": [
{
"first": "B",
"middle": [],
"last": "Daille",
"suffix": ""
}
],
"year": 1996,
"venue": "The Balancing Act: Combining Symbolic and Statistical Approaches to Language",
"volume": "",
"issue": "",
"pages": "49--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daille, B. (1996) \"Study and Implementation of Combined Techniques for Automatic Extraction of Terminology.\" In Klavans and Resnik (eds.) The Balancing Act: Combining Symbolic and Statistical Approaches to Language, MIT Press, pp. 49-66.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Accurate Methods for the Statistics of Surprise and Coincidences",
"authors": [
{
"first": "T",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunning, T. (1993) 'Accurate Methods for the Statistics of Surprise and Coincidences,' Computational Linguistics, Vol. 19, No. 1, pp. 61-74.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "K-vec: A New Approach for Aligning Parallel Texts",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the International Conference of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1096--1102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fung, P. and Church, K. (1994) \"K-vec: A New Approach for Aligning Parallel Texts.\" Proceedings of the International Conference of Computational Linguistics, pp. 1096-1102, Kyoto.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Concordances for Parallel Texts",
"authors": [
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Seventh Annual Conference of the UW Centre for the New OED and Text Research, Using Corpora",
"volume": "",
"issue": "",
"pages": "40--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, W. and Church, K. (1991) \"Concordances for Parallel Texts.\" In Proceedings of the Seventh Annual Conference of the UW Centre for the New OED and Text Research, Using Corpora, pp. 40-62, Oxford.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Cohesion in English",
"authors": [
{
"first": "M",
"middle": [],
"last": "Halliday",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Hasan",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Halliday, M. and Hasan, R. (1976) Cohesion in English. Longman Publishers.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Patterns of Lexis in Text",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hoey",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoey, M. (1991). Patterns of Lexis in Text. Oxford University Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Aspects of Text Structure: An Investigation of the Lexical Organisation of Text",
"authors": [
{
"first": "M",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phillips, M. (1985) Aspects of Text Structure: An Investigation of the Lexical Organisation of Text. Elsevier Science Publishers.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Introduction to Modern Information Retrieval",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mcgill",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salton, G. and McGill, M. (1983) Introduction to Modern Information Retrieval. McGraw-Hill.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Retrieving Collocations from Text: Xtract",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "143--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja, F. (1993) 'Retrieving Collocations from Text: Xtract', Computational Linguistics, Vol. 19, No. 1, pp. 143 -177.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "b)(a + c)(b + d)(c + d)"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "(6). G2 = a log a + b log b + c log c + d log d \u2022 (a+b)log(a+b) -(a+c)log(a+c) -(b+d)log(b+d) -(c+d)log(c+d) + (a+b+c+d)log(a+b+c+d)"
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "(8). t(Vc,Ve)= P(Vc ,Ve )-P(Vc)P(Ve) P(Vc,Ve) K"
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": ""
},
"TABREF0": {
"text": "Output of Different Statistical Measures for Identifying Interesting Bigrams",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Cl</td><td>C2</td><td>MI</td><td>t-score</td><td>IM</td><td>(1) 2</td><td>G2</td></tr><tr><td/><td>TI</td><td>6.23</td><td>3.95</td><td>-5.90</td><td>33.19</td><td>57.20</td></tr><tr><td/><td>flE</td><td>4.46</td><td>1.90</td><td>-7.67</td><td>1.45</td><td>8.03</td></tr><tr><td/><td/><td>3.33</td><td>1.80</td><td>-8.80</td><td>0.43</td><td>5.24</td></tr><tr><td/><td>1M</td><td>5.91</td><td>1.70</td><td>-6.22</td><td>4.01</td><td>9.21</td></tr><tr><td/><td>EER</td><td>7.18</td><td>1.72</td><td>-4.95</td><td>0.09</td><td>13.12</td></tr><tr><td/><td>43ZU</td><td>4.82</td><td>2.72</td><td>-7.31</td><td>3.85</td><td>18.24</td></tr><tr><td/><td>ft.</td><td>4.83</td><td>2.36</td><td>-7.30</td><td>1.25</td><td>16.04</td></tr><tr><td>tt</td><td>E4%</td><td>9.81</td><td>1.73</td><td>-2.32</td><td>0.59</td><td>18.74</td></tr><tr><td>/.1\\</td><td>tE</td><td>9.14</td><td>2.82</td><td>-3.00</td><td>0.00</td><td>50.97</td></tr><tr><td/><td>Zit</td><td>6.41</td><td>1.97</td><td>-5.72</td><td>0.07</td><td>15.61</td></tr><tr><td>aF</td><td>11</td><td>3.55</td><td>3.03</td><td>-8.59</td><td>1.01</td><td>16.49</td></tr><tr><td>aF</td><td>1M</td><td>4.99</td><td>1.67</td><td>-7.14</td><td>1.67</td><td>7.14</td></tr><tr><td>T 4 )</td><td colspan=\"6\">NA go 5.84 2.40 -6.29 6.13 19.05 6.41 1.71 -5.72 0.05 11.68</td></tr><tr><td/><td/><td>4.97</td><td>1.67</td><td>-7.16</td><td>1.75</td><td>7.02</td></tr><tr><td>ft</td><td/><td>4.85</td><td>2.15</td><td>-7.29</td><td>2.97</td><td>11.22</td></tr></table>"
}
}
}
}