|
{ |
|
"paper_id": "H05-1032", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:34:37.863372Z" |
|
}, |
|
"title": "Bayesian Learning in Text Summarization", |
|
"authors": [ |
|
{ |
|
"first": "Tadashi", |
|
"middle": [], |
|
"last": "Nomoto", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Institute of Japanese Literature", |
|
"location": { |
|
"addrLine": "1-16-10 Yutaka Shinagawa", |
|
"postCode": "142-8585", |
|
"settlement": "Tokyo", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "nomoto@acm.org" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The paper presents a Bayesian model for text summarization, which explicitly encodes and exploits information on how human judgments are distributed over the text. Comparison is made against non Bayesian summarizers, using test data from Japanese news texts. It is found that the Bayesian approach generally leverages performance of a summarizer, at times giving it a significant lead over non-Bayesian models.", |
|
"pdf_parse": { |
|
"paper_id": "H05-1032", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The paper presents a Bayesian model for text summarization, which explicitly encodes and exploits information on how human judgments are distributed over the text. Comparison is made against non Bayesian summarizers, using test data from Japanese news texts. It is found that the Bayesian approach generally leverages performance of a summarizer, at times giving it a significant lead over non-Bayesian models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Consider figure 1. What is shown there is the proportion of the times that sentences at particular locations are judged as relevant to summarization, or worthy of inclusion in a summary. Each panel shows judgment results on 25 Japanese texts of a particular genre; columns (G1K3), editorials (G2K3) and news stories (G3K3). All the documents are from a single Japanese news paper, and judgments are elicited from some 100 undergraduate students. While more will be given on the details of the data later (Section 3.2), we can safely ignore them here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Each panel has the horizontal axis representing location or order of sentence in a document, and the vertical axis the proportion of the times sentences at particular locations are picked as relevant to summarization. Thus in G1K3, we see that the first sentence (to appear in a document) gets voted for about 12% of the time, while the 26th sentence is voted for less than 2% of the time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Curiously enough, each of the panels exhibits a distinct pattern in the way votes are spread across a document: G1K3 has the distribution of votes (DOV) with sharp peaks around 1 and 14; in G2K3, the distribution is peaked around 1, with a small bump around 19; in G3K3, the distribution is sharply skewed to the left, indicating that the majority of votes went to the initial section of a document. What is interesting about the DOV is that we could take it as indicating a collective preference for what to extract for a summary. A question is then, can we somehow exploit the DOV in summarization? To our knowledge, no prior work seems to exist that addresses the question. The paper discusses how we could do this under a Bayesian modeling framework, where we explicitly represent and make use of the DOV by way of Dirichlet posterior (Congdon, 2003) . 1", |
|
"cite_spans": [ |
|
{ |
|
"start": 839, |
|
"end": 854, |
|
"text": "(Congdon, 2003)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Since the business of extractive summarization, such as one we are concerned with here, is about ranking sentences according to how useful/important they are as part of summary, we will consider here a particular ranking scheme based on the probability of a sentence being part of summary under a given DOV, i.e.,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (y|v v v),", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where y denotes a given sentence, and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "v v v = (v 1 , . . . , v n )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "stands for a DOV, an array of observed vote counts for sentences in the text; v 1 refers to the count of votes for a sentence at the text initial position, v 2 to that for a sentence occurring at the second place, etc. Thus given a four sentence long text, if we have three people in favor of a lead sentence, two in favor ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "v v v = (3, 2, 1, 0).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Now suppose that each sentence y i (i.e., a sentence at the i-th place in the order of appearance) is associated with what we might call a prior preference factor \u03b8 i , representing how much a sentence at a particular position is favored as part of a summary in general. Then the probability that y i finds itself in a summary is given as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c6(y i |\u03b8 i )P (\u03b8 i ),", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where \u03c6 denotes some likelihood function, and P (\u03b8 i ) a prior probability of \u03b8 i . Since the DOV is something we could actually observe about \u03b8 i , we might as well couple", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u03b8 i with v v v by making a probability of \u03b8 i conditioned on v v v.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Formally, this would be written as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c6(y i |\u03b8 i )P (\u03b8 i |v v v).", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The problem, however, is that we know nothing about what each \u03b8 i looks like, except that it should somehow be informed by v v v. A typical Bayesian solution to this is to 'erase' \u03b8 i by marginalizing (summing) over it, which brings us to this:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (y i |v v v) = \u03c6(y i |\u03b8 i )P (\u03b8 i |v v v) d\u03b8 i .", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Note that equation 4 no longer talks about the probability of y i under a particular \u03b8 i ; rather it talks about the expected probability for y i with respect to a preference factor dictated by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "v v v. All we need to know v v v / / \u03b8 \u03b8 \u03b8 / / y i Figure 2: A graphical view about P (\u03b8 i |v v v)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "to compute the expectation is v v v and a probability distribution P , and not \u03b8 i 's, anymore. We know something about v v v, and this would leave us P . So what is it? In principle it could be any probability distribution. However largely for the sake of technical convenience, we assume it is one component of a multinomial distribution known as the Dirichlet distribution. In particular, we talk about Dirichlet(\u03b8 \u03b8 \u03b8|v v v), namely a Dirichlet posterior of \u03b8, given observations v v v, where \u03b8 \u03b8 \u03b8 = (\u03b8 1 , . . . , \u03b8 i , . . . , \u03b8 n ), and n i \u03b8 i = 1 (\u03b8 i > 0). (Remarkably, if P (\u03b8) is a Dirichlet, so is P (\u03b8|v v v).) \u03b8 \u03b8 \u03b8 here represents a vector of preference factors for n sentences -which constitute the text. 2 Accordingly, equation 4 could be rewritten as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 723, |
|
"end": 724, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
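Below is a minimal sketch (ours, not from the paper) of drawing preference-factor vectors θ from a Dirichlet posterior given observed vote counts v, using NumPy; the pseudo-count of 1 added to each position is an assumption made only to keep every Dirichlet parameter strictly positive when some position received no votes, since the paper does not say how such positions are handled.

```python
import numpy as np

def draw_preference_factors(votes, n_samples=5, seed=0):
    """Draw preference-factor vectors theta ~ Dirichlet(theta | v).

    `votes` is the DOV, i.e. observed vote counts per sentence position.
    A pseudo-count of 1 is added to each position (an assumption, not
    from the paper) so that every Dirichlet parameter stays positive.
    """
    rng = np.random.default_rng(seed)
    alpha = np.asarray(votes, dtype=float) + 1.0
    return rng.dirichlet(alpha, size=n_samples)

# The four-sentence example from the text: v = (3, 2, 1, 0).
thetas = draw_preference_factors([3, 2, 1, 0])
print(thetas)              # each row is one theta, summing to 1
print(thetas.sum(axis=1))  # -> [1. 1. 1. 1. 1.]
```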
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (y i |v v v) = \u03c6(y i |\u03b8 \u03b8 \u03b8)P (\u03b8 \u03b8 \u03b8 |v v v) d\u03b8 \u03b8 \u03b8.", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "An interesting way to look at the model is by way of a graphical model (GM), which gives some intuitive idea of what the model looks like. In a GM perspective, our model is represented as a simple tripartite structure (figure 2), in which each node corresponds to a variable (parameter), and arcs represent dependencies among them. x \u2192 y reads 'y depends on x.' An arc linkage between v v v and y i is meant to represent marginalization over \u03b8 \u03b8 \u03b8.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Moreover, we will make use of a scale parameter \u03bb \u2265 1 to have some control over the shape of the distribution, so we will be working with Dirichlet(\u03b8|\u03bb v v v) rather than Dirichlet(\u03b8|v v v). Intuitively, we might take \u03bb as representing a degree of confidence we have in a set of empirical observations we call v v v, as increasing the value of \u03bb has the effect of reducing variance over each \u03b8 i in \u03b8.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The expectation and variance of Dirichlet", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "(\u03b8 \u03b8 \u03b8|v v v) are given as follows. 3 E[\u03b8 i ] = v i v 0 (6) V ar[\u03b8 i ] = v i (v 0 \u2212 v i ) v 2 0 (v 0 + 1) ,", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "v 0 = n i v i . Therefore the variance of a scaled Dirichlet is: V ar[\u03b8 i |\u03bbv v v] = v i (v 0 \u2212 v i ) v 2 0 (\u03bbv 0 + 1) .", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "See how \u03bb is stuck in the denominator. Another obvious fact about the scaling is that it does not affect the expectation, which remains the same.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
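As a numerical check (ours, with a made-up count vector, since this is only an illustration), the sketch below draws many samples from Dirichlet(λv) and compares the empirical mean and variance against the closed forms in equations 6 through 8; the expectation should be unaffected by λ, while the variance shrinks as λ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

v = np.array([4.0, 3.0, 2.0, 1.0])   # hypothetical vote counts (all positive)
v0 = v.sum()

for lam in (1.0, 1000.0):
    samples = rng.dirichlet(lam * v, size=50_000)

    # Closed forms from equations (6)-(8): the mean does not depend on
    # lambda, while the variance carries lambda in the denominator.
    expected_mean = v / v0
    expected_var = v * (v0 - v) / (v0 ** 2 * (lam * v0 + 1.0))

    print(f"lambda = {lam:g}")
    print("  empirical mean :", samples.mean(axis=0).round(4))
    print("  closed form    :", expected_mean.round(4))
    print("  empirical var  :", samples.var(axis=0).round(6))
    print("  closed form    :", expected_var.round(6))
```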
|
{ |
|
"text": "To get a feel for the significance of \u03bb, consider figure 3; the left panel shows a histogram of 50,000 variates of p 1 randomly drawn from Dirichlet(p 1 , p 2 |\u03bbc 1 , \u03bbc 2 ), with \u03bb = 1, and both c 1 and c 2 set to 1. The graph shows only the p 1 part but things are no different for p 2 . (The x-dimension represents a particular value p 1 takes (which ranges between 0 and 1) and the y-dimension records the number of the times p 1 takes that value.) We see that points are spread rather evenly over the probability space. Now the right panel shows what happens if you increase \u03bb by a factor of 1,000 (which will give you P (p 1 , p 2 |1000, 1000)); points take a bell shaped form, concentrating in a small region around the expectation of p 1 . In the experiments section, we will return to the issue of \u03bb and discuss how it affects performance of summarization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Let us turn to the question of how to find a solution to the integral in equation 5. We will be concerned here with two standard approaches to the issue: one is based on MAP (maximum a posteriori)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 http://www.cis.hut.fi/ahonkela/dippa/dippa.html and another on numerical integration. We start off with a MAP based approach known as Bayesian Information Criterion or BIC.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For a given model m, BIC seeks an analytical approximation for equation 4, which looks like the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "ln P (y i |m) = ln \u03c6(y i |\u03b8 \u03b8 \u03b8, m) \u2212 k 2 ln N,", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where k denotes the number of free parameters in m, and N that of observations.\u03b8 \u03b8 \u03b8 is a MAP estimate of \u03b8 \u03b8 \u03b8 under m, which is E[\u03b8 \u03b8 \u03b8]. It is interesting to note that BIC makes no reference to prior. Also worthy of note is that a minus of BIC equals MDL (Minimum Description Length). Alternatively, one might take a more straightforward (and fully Bayesian) approach known as the Monte Carlo integration method (MacKay, 1998) (MC, hereafter) where the integral is approximated by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (y i |v v v) \u2248 1 n n j=1 \u03c6(y i |x (j) ),", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
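The following toy sketch (ours) illustrates the averaging step of equation 10; the likelihood φ here is a deliberately trivial stand-in that simply returns the sampled preference factor of a sentence position, whereas in the paper φ is a trained classifier such as C4.5, and the vote counts below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

votes = np.array([3.0, 2.0, 1.0, 1.0])   # hypothetical DOV (all positive)
n_draws = 10_000

def phi(sentence_index, theta):
    """Stand-in likelihood: probability that the sentence at this position
    is summary-worthy under preference factors theta.  (In the paper this
    role is played by a classifier such as C4.5.)"""
    return theta[sentence_index]

# Equation (10): P(y_i | v) ~= (1/n) * sum_j phi(y_i | theta^(j)),
# with each theta^(j) drawn from Dirichlet(theta | v).
thetas = rng.dirichlet(votes, size=n_draws)
for i in range(len(votes)):
    estimate = np.mean([phi(i, t) for t in thetas])
    print(f"sentence {i}: P(pick | v) ~= {estimate:.3f}")
```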
|
{ |
|
"text": "where we draw each sample x (j) randomly from the distribution P (\u03b8 \u03b8 \u03b8|v v v), and n is the number of x (i) 's so collected. Note that MC gives an expectation of", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "P (y i |v v v) with respect to P (\u03b8 \u03b8 \u03b8|v v v).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Furthermore, \u03c6 could be any probabilistic function. Indeed any discriminative classifier (such as C4.5) will do as long as it generates some kind of probability. Given \u03c6, what remains to do is essentially training it on samples bootstrapped (i.e., resampled) from the training data based on \u03b8 \u03b8 \u03b8 -which we draw from Dirichlet(\u03b8 \u03b8 \u03b8|v v v). 4 To be more specific, suppose that we have a four sentence long text and an array of probabilities \u03b8 \u03b8 \u03b8 = (0.4, 0.3, 0.2, 0.1) drawn from a Dirichlet distribution: which is to say, we have a preference factor of 0.4 for the lead sentence, 0.3 for the second sentence, etc. Then we resample with replacement lead sentences from training data with the probability of 0.4, the second with the probability of 0.3, and so forth. Obviously, a high preference factor causes the associated sentence to be chosen more often than those with a low preference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 342, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Thus given a text T = (a, b, c, d) with \u03b8 \u03b8 \u03b8 = (0.4, 0.3, 0.2, 0.1), we could end up with a data set dominated by a few sentence types, such as T = (a, a, a, b), which we proceed to train a classifier on in place of T . Intuitively, this amounts to inducing the classifier to attend to or focus on a particular region or area of a text, and dismiss the rest. Note an interesting parallel to boosting (Freund and Schapire, 1996) and the alternating decision tree (Freund and Mason, 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 401, |
|
"end": 428, |
|
"text": "(Freund and Schapire, 1996)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 463, |
|
"end": 487, |
|
"text": "(Freund and Mason, 1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
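A minimal sketch (ours) of the resampling step just described, using the toy text T = (a, b, c, d) and θ = (0.4, 0.3, 0.2, 0.1): positions with high preference factors are drawn more often, so a bootstrapped copy of the text can end up dominated by a few sentence types, such as (a, a, a, b).

```python
import numpy as np

rng = np.random.default_rng(2)

T = ["a", "b", "c", "d"]                 # a four-sentence toy text
theta = np.array([0.4, 0.3, 0.2, 0.1])   # preference factors drawn from the Dirichlet

def bootstrap_text(sentences, theta, rng):
    """Resample sentences (with replacement) according to theta."""
    idx = rng.choice(len(sentences), size=len(sentences), replace=True, p=theta)
    return [sentences[i] for i in idx]

print(bootstrap_text(T, theta, rng))     # e.g. ['a', 'a', 'b', 'a']
```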
|
{ |
|
"text": "In MC, for each \u03b8 \u03b8 \u03b8 (k) drawn from Dirichlet(\u03b8 \u03b8 \u03b8|v v v), we resample sentences from the training data using probabilities specified by \u03b8 \u03b8 \u03b8 (k) , use them for training a classifier, and run it on a test document d to find, for each sentence in d, its probability of being a 'pick' (summary-worthy) sentence,i.e., P (y i |\u03b8 \u03b8 \u03b8 (k) ), which we average across \u03b8 \u03b8 \u03b8's. In experiments later described, we apply the procedure for 20,000 runs (meaning we run a classifier on each of 20,000 \u03b8 \u03b8 \u03b8's we draw), and average over them to find an estimate for P", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 25, |
|
"text": "(k)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 145, |
|
"end": 148, |
|
"text": "(k)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 335, |
|
"text": "(k)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(y i |v v v).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
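The sketch below (ours) strings the pieces together along the lines just described: for each θ^(k) drawn from the Dirichlet, bootstrap the training sentences by position, train a classifier, score the test sentences by their probability of being a 'pick' sentence, and average the scores over the draws. scikit-learn's DecisionTreeClassifier stands in for the Weka C4.5 learner used in the paper, and all names and feature matrices are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in for the C4.5 learner

def mc_pick_probabilities(X_train, y_train, positions, X_test, votes,
                          n_draws=200, seed=0):
    """Monte Carlo estimate of P(pick | v) for each test sentence.

    X_train   : numpy feature matrix of training sentences
    y_train   : numpy labels, 1 = 'pick' (summary-worthy), 0 = not
    positions : numpy array with the textual position of each training sentence
    votes     : DOV with one strictly positive (pseudo-)count per position
    """
    rng = np.random.default_rng(seed)
    votes = np.asarray(votes, dtype=float)
    scores = np.zeros(len(X_test))
    n = len(X_train)
    for _ in range(n_draws):
        theta = rng.dirichlet(votes)          # theta^(k) ~ Dirichlet(theta | v)
        p = theta[positions]                  # preference of each training sentence
        idx = rng.choice(n, size=n, replace=True, p=p / p.sum())
        clf = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
        proba = clf.predict_proba(X_test)
        if 1 in clf.classes_:                 # a bootstrap may contain a single class
            scores += proba[:, list(clf.classes_).index(1)]
    return scores / n_draws
```

One would then rank the sentences of a test document by these averaged scores, as the following paragraphs describe.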
|
{ |
|
"text": "As for BIC, we generally operate along the lines of MC, except that we bootstrap sentences using only E[\u03b8 \u03b8 \u03b8], and the model complexity term, namely, \u2212 k 2 ln N is dropped as it has no effect on ranking sentences. As with MC, we train a classifier on the bootstrapped samples and run it on a test document. Though we work with a set of fixed parameters, a bootstrapping based on them still fluctuates, produc-ing a slightly different set of samples each time we run the operation. To get a reasonable convergence in experiments, we took the procedure to 5,000 iterations and averaged over the results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
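The BIC/MAP variant thus differs from the Monte Carlo sketch above only in that every iteration bootstraps with the fixed expectation E[θ] = v/v_0 rather than a fresh Dirichlet draw; the complexity term is left out because, as noted above, it does not affect the ranking. A small sketch (ours):

```python
import numpy as np

def map_preference_factors(votes):
    """Expectation-based preference factors used by the BIC/MAP variant:
    E[theta_i] = v_i / v_0 (equation 6)."""
    v = np.asarray(votes, dtype=float)
    return v / v.sum()

# In the hypothetical mc_pick_probabilities sketch above, one would simply
# replace `theta = rng.dirichlet(votes)` with the factors below; the bootstrap
# itself still fluctuates from run to run, hence the averaging over runs.
print(map_preference_factors([3, 2, 1, 0]))  # approximately [0.5, 0.33, 0.17, 0.0]
```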
|
{ |
|
"text": "Either with BIC or with MC, building a summarizer on it is a fairly straightforward matter. In what follows, we will look at whether and how the Bayesian approach, when applied for the C4.5 decision tree learner (Quinlan, 1993) , leverages its performance on real world data. This means our model now operates either by", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 227, |
|
"text": "(Quinlan, 1993)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (y i |v v v) \u2248 1 n n j=1 \u03c6 c4.5 (y i |x (j) ),", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "or by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "ln P (y i |m) = ln \u03c6 c4.5 (y i |\u03b8 \u03b8 \u03b8, m) \u2212 k 2 ln N,", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "with the likelihood function \u03c6 filled out by C4.5. Moreover, we compare two versions of the classifier; one with BIC/MC and one without. We used Weka implementations of the algorithm (with default settings) in experiments described below (Witten and Frank, 2000).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "While C4.5 here is configured to work in a binary (positive/negative) classification scheme, we run it in a 'distributional' mode, and use a particular class membership probability it produces, namely, the probability of a sentence being positive, i.e., a pick (summary-worthy) sentence, instead of a category label.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Attributes for C4.5 are broadly intended to represent some aspects of a sentence in a document, an object of interest here. Thus for each sentence \u03c8, its encoding involves reference to the following set of attributes or features. 'LocSen' gives a normalized location of \u03c8 in the text, i.e., a normalized distance from the top of the text; likewise, 'LocPar' gives a normalized location of the paragraph in which \u03c8 occurs, and 'LocWithinPar' records its normalized location within a paragraph. Also included are a few length-related features such as the length of text and sentence. Furthermore we brought in some language specific feature which we call 'EndCue.' It records the morphology of a linguistic element that ends \u03c8, such as inflection, part of speech, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
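A rough sketch (ours) of the positional and length features named above, for a document represented as a list of paragraphs, each a list of sentence strings; the exact normalizations are not spelled out in the text, so the choices below (division by the maximum index) are assumptions, and the language-specific 'EndCue' and the tf.idf 'Weight' features are left out here.

```python
def positional_features(doc):
    """doc: list of paragraphs, each a list of sentence strings.
    Yields one feature dict per sentence with the positional and
    length attributes named in the text."""
    sentences = [s for par in doc for s in par]
    n_sent = len(sentences)
    text_len = sum(len(s) for s in sentences)
    pos = 0
    for p_idx, par in enumerate(doc):
        for s_idx, sent in enumerate(par):
            yield {
                "Pos": pos,                                    # 0-based textual order
                "LocSen": pos / max(n_sent - 1, 1),            # normalized location in the text
                "LocPar": p_idx / max(len(doc) - 1, 1),        # normalized paragraph location
                "LocWithinPar": s_idx / max(len(par) - 1, 1),  # normalized location in the paragraph
                "TextLen": text_len,                           # length of the text (characters)
                "SentLen": len(sent),                          # length of the sentence (characters)
            }
            pos += 1
```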
|
{ |
|
"text": "In addition, we make use of the weight feature ('Weight') for a record on the importance of \u03c8 based on tf.idf. Let \u03c8 = w 1 , . . . , w n , for some word w i . Then the weight W (\u03c8) is given as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "W (\u03c8) = w (1 + log(tf(w))) \u2022 log(N/df(w)).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
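A small sketch (ours) of the Weight feature: it sums, over the words of the sentence, (1 + log tf(w)) · log(N / df(w)). The tokenization and the base of the logarithm are assumptions, as the paper does not state them.

```python
import math
from collections import Counter

def sentence_weight(sentence_tokens, document_tokens, df, n_docs):
    """W(psi) = sum over words w of psi of (1 + log tf(w)) * log(N / df(w)).

    sentence_tokens : words of the sentence psi
    document_tokens : words of the document containing psi (used for tf)
    df              : dict mapping word -> document frequency
    n_docs          : N, the total number of documents
    """
    tf = Counter(document_tokens)
    weight = 0.0
    for w in sentence_tokens:
        if tf[w] > 0 and df.get(w, 0) > 0:
            weight += (1.0 + math.log(tf[w])) * math.log(n_docs / df[w])
    return weight
```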
|
{ |
|
"text": "Here 'tf(w)' denotes the frequency of word w in a given document, 'df(w)' denotes the 'document frequency' of w, or the number of documents which contain an occurrence of w. N represents the total number of documents. 5 Also among the features used here is 'Pos,' a feature intended to record the position or textual order of \u03c8, given by how many sentences away it occurs from the top of text, starting with 0.", |
|
"cite_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 219, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "While we do believe that the attributes discussed above have a lot to do with the likelihood that a given sentence becomes part of summary, we choose not to consider them parameters of the Bayesian model, just to keep it from getting unduly complex. Recall the graphical model in figure 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bayesian Model of Summaries", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Here is how we created test data. We collected three pools of texts from different genres, columns, editorials and news stories, from a Japanese financial paper (Nihon Keizai Shinbun) published in 1995, each with 25 articles. Then we asked 112 Japanese students to go over each article and identify 10% worth of sentences they find most important in creating a summary for that article. For each sentence, we recorded how many of the subjects are in favor of its inclusion in summary. On average, we had about seven people working on each text. In the following, we say sentences are 'positive' if there are three or more people who like to see them in a summary, and 'negative' otherwise. For convenience, let us call the corpus of columns G1K3, that of editorials G2K3 and that of news stories G3K3. Additional details are found in table 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Tables 2 through 4 show how the Bayesian summarist performs on G1K3, G2K3, and G3K3. The tables list results in precision at compression rates (r) of interest (0 < r < 1). The figures thereof indicate performance averaged over leave-one-out cross validation folds. What this means is that you leave out one text for testing and use the rest for training, which you repeat for each one of the texts in the data. Since we have 25 texts for each data set, this leads to a 25-fold cross validation. Precision is defined by the ratio of hits (positive sentences) to the number of sentences retrieved, i.e., r-percent of sentences in the text. 6 In each table, figures to the left of the vertical line indicate performance of summarizers with BIC/MC and those to the right that of summarizers without them. Parenthetical figures like '(5K)' and '(20K)' indicate the number of iterations we took them to: thus BIC(5K) refers to a summarizer based on C4.5/BIC with scores averaged over 5,000 runs. BSE denotes a reference summarizer based on a regular C4.5, which it involves no resampling of training data. LEAD refers to a summarizer which works Table 1 : N represents the number of sentences in G1K3 to G3K3. Sentences with three or more votes in their favor are marked positive, that is, for each sentence marked positive, at least three people are in favor of including it in a summary. by selecting sentences from the top of the text. It is generally considered a hard-to-beat approach in the summarization literature. Table 4 shows results for G3K3 (a news story domain). There we find a significantly improvement to performance of C4.5, whether it operates with BIC or MC. The effect is clearly visible across a whole range of compression rates, and more so at smaller rates. Table 3 demonstrates that the Bayesian approach is also effective for G2K3 (an editorial domain), outperforming both BSE and LEAD by a large margin.", |
|
"cite_spans": [ |
|
{ |
|
"start": 638, |
|
"end": 639, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1140, |
|
"end": 1147, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1517, |
|
"end": 1524, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1776, |
|
"end": 1783, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4" |
|
}, |
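A minimal sketch (ours) of the evaluation measure as defined above: rank the sentences of a text by their estimated pick probability, keep the top r portion, and report the fraction of the retrieved sentences that are positive; rounding the cutoff up to at least one sentence is our assumption.

```python
import math

def precision_at_rate(scores, is_positive, r):
    """Precision at compression rate r for one text.

    scores      : estimated P(pick | v) for each sentence
    is_positive : True if the sentence received three or more votes
    r           : compression rate, 0 < r < 1
    """
    k = max(1, math.ceil(r * len(scores)))   # number of sentences retrieved
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    hits = sum(1 for i in ranked[:k] if is_positive[i])
    return hits / k
```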
|
{ |
|
"text": "Similarly, we find that our approach comfortably beats LEAD in G1K3 (a column domain). Note the dashes for BSE. What we mean by these, is that we obtained no meaningful results for it, because we were unable to rank sentences based on predictions by BSE. To get an idea of how this happens, let us look at a decision tree BSE builds for G1K3, which is shown in figure 4. What we have there is a decision tree consisting of a single leaf. 7 Thus for whatever sentence we feed to the tree, it throws back the same membership probability, which is 65/411. But then this would make a BSE based summarizer utterly useless, as it reduces to generating a summary by picking at random, a particular portion of text. 8 7 This is not at all surprising as over 80% of sentences in a non resampled text are negative for the most of the time.", |
|
"cite_spans": [ |
|
{ |
|
"start": 438, |
|
"end": 439, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "8 Its expected performance (averaged over 10 6 runs) comes Now Figure 5 shows what happens with the Bayesian model (MC), for the same data. There we see a tree of a considerable complexity, with 24 leaves and 18 split nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 71, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Let us now turn to the issues with \u03bb. As we might recall, \u03bb influences the shape of a Dirichlet distribution: a large value of \u03bb causes the distribution to have less variance and therefore to have a more acute peak around the expectation. What this means is that increasing the value of \u03bb makes it more likely to have us drawing samples closer to the expectation. As a consequence, we would have the MC model acting more like the BIC model, which is based on MAP estimates. That this is indeed the case is demonstrated by table 5, which gives results for the MC model on G1K3 to G3K3 at \u03bb = 1. We see that the MC behaves less like the BIC at \u03bb = 1 than at \u03bb = 5 (table 2 through 4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Of a particular interest in table 5 is G1K3, where the MC suffers a considerable degradation in performance, compared to when it works with \u03bb = 5. G2K3 and G3K3, again, witness some degradation in performance, though not as extensive as in G1K3. It is interesting that at times the MC even works better with \u03bb = 1 than \u03bb = 5 in G2K3 and G3K3. 9 to: 0.1466 (r = 0.05), 0.1453 (r = 0.1), 0.1508 (r = 0.15), 0.1530 (r = 0.2), 0.1534 (r = 0.25), and 0.1544 (r = 0.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "9 The results suggest that if one like to have some improvement, it is probably a good idea to set \u03bb to a large value. But All in all, the Bayesian model proves more effective in leveraging performance of the summarizer on a DOV exhibiting a complex, multiply peaked form as in G1K3 and G2K3, and less on a DOV which has a simple, single-peak structure as in G3K3 (cf. figure 1). 10", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The paper showed how it is possible to incorporate information on human judgments for text summarization in a principled manner through Bayesian modeling, and also demonstrated how the approach leverages performance of a summarizer, using data collected from human subjects.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The present study is motivated by the view that that summarization is a particular form of collaborative filtering (CF), wherein we view a summary as a particular set of sentences favored by a particular user or a group of users just like any other things people would normally have preference for, such as CDs, books, paintings, emails, news articles, etc. Importantly, under CF, we would not be asking, what is the 'correct' or gold standard summary for document X? -the question that consumed much of the past research on summarization. Rather, what we are asking is, what summary is popularly favored for X?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Indeed the fact that there could be as many summaries as angles to look at the text from may favor in general how to best set \u03bb requires some experimenting with data and the optimal value may vary from domain to domain. An interesting approach would be to empirically optimize \u03bb using methods suggested in MacKay and Peto (1994) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 306, |
|
"end": 328, |
|
"text": "MacKay and Peto (1994)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "10 Incidentally, summarizers, Bayesian or not, perform considerably better on G3K3 than on G1K3 or G2K3. This happens presumably because a large portion of votes concentrate in a rather small region of text there, a property any classifier should pick up easily. the CF view of summary: the idea of what constitutes a good summary may vary from person to person, and may well be influenced by particular interests and concerns of people we elicit data from.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Among some recent work with similar concerns, one notable is the Pyramid scheme (Nenkova and Passonneau, 2004) where one does not declare a particular human summary a absolute reference to compare summaries against, but rather makes every one of multiple human summaries at hand bear on evaluation; Rouge (Lin and Hovy, 2003) represents another such effort. The Bayesian summarist represents yet another, whereby one seeks a summary most typical of those created by humans.", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 110, |
|
"text": "(Nenkova and Passonneau, 2004)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 325, |
|
"text": "(Lin and Hovy, 2003)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "See Yu et al. (2004) andCowans (2004) for its use in IR.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since texts generally vary in length, we may set n to a sufficiently large number so that none of texts of interest may exceed it in length. For texts shorter than n, we simply add empty sentences to make them as long as n.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It is fairly straightforward to sample from a Dirichlet posterior by resorting to a gamma distribution, which is what is happening here. In case one is working with a distribution it is hard to sample from, one would usually rely on Markov chain Monte Carlo (MCMC) or variational methods to do the job.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Although one could reasonably argue for normalizing W (\u03c8) by sentence length, it is not entirely clear at the moment whether it helps in the way of improving performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We do not use recall for a evaluation measure, as the number of positive instances varies from text to text, and may indeed exceed the length of a summary under a particular compression rate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Bayesian Statistical Modelling", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Congdon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Congdon. 2003. Bayesian Statistical Modelling. John Wiley and Sons.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Information Retrieval Using Hierarchical Dirichlet Processes", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Cowans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. 27th ACM SIGIR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip J. Cowans. 2004. Information Retrieval Using Hierarchical Dirichlet Processes. In Proc. 27th ACM SIGIR.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The alternating decision tree learning algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Freund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llew", |
|
"middle": [], |
|
"last": "Mason", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proc. 16th ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Freund and Llew Mason. 1999. The alternating decision tree learning algorithm,. In Proc. 16th ICML.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Experiments with a new boosting algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Freund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Schapire", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proc. 13th ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Freund and Robert E. Schapire. 1996. Experiments with a new boosting algorithm. In Proc. 13th ICML.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatic evaluation of summaries using n-gram co-occurance statistics", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin and Eduard Hovy. 2003. Automatic eval- uation of summaries using n-gram co-occurance statis- tics. In Proc. HLT-NAACL 2003.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A Hierarchical Dirichlet Language Model. Natural Language Engineering", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linda", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Mackay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bauman Peto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David J. C. MacKay and Linda C. Bauman Peto. 1994. A Hierarchical Dirichlet Language Model. Natural Lan- guage Engineering.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Introduction to Monte Carlo methods", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"J C" |
|
], |
|
"last": "Mackay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Learning in Graphical Models", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. J. C. MacKay. 1998. Introduction to Monte Carlo methods. In M. I. Jordan, editor, Learning in Graphi- cal Models, Kluwer Academic Press.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Evaluation Content Selection in Summarization: The Pyramid Method", |
|
"authors": [ |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Passonneau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ani Nenkova and Rebecca Passonneau. 2004. Evalua- tion Content Selection in Summarization: The Pyra- mid Method. In Proc. HLT-NAACL 2004.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "C4.5: Programs for Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ross", |
|
"middle": [], |
|
"last": "Quinlan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eibe", |
|
"middle": [], |
|
"last": "Witten", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian H. Witten and Eibe Frank. 2000. Data Mining: Prac- tical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A Nonparametric Hierarchical Bayesian Framework for Information Filtering", |
|
"authors": [ |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shipeng", |
|
"middle": [], |
|
"last": "Volker Tresp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. 27th ACM SIGIR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Yu, Volker Tresp, and Shipeng Yu. 2004. A Non- parametric Hierarchical Bayesian Framework for In- formation Filtering. In Proc. 27th ACM SIGIR.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Genre-by-genre vote distribution of the second, one for the third, and none for the fourth, then we would have", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Histograms of random draws from Dirichlet(p 1 , p 2 |\u03bbc 1 , \u03bbc 2 ) with \u03bb = 1 (left panel), and \u03bb = 1000 (right panel).", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Given a document d and a compression rate r, what a summarizer would do is simply rank sentences in d based on P (y i |v v v) and pick an r portion of highest ranking sentences.", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "A non Bayesian C4.5 trained on G1K3. A Bayesian (MC) C4.5 trained on G1K3.", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>r</td><td colspan=\"4\">BIC (5K) MC (20K) BSE LEAD</td></tr><tr><td>0.05</td><td>0.4583</td><td>0.4583</td><td>\u2212</td><td>0.3333</td></tr><tr><td>0.10</td><td>0.4167</td><td>0.4167</td><td>\u2212</td><td>0.3472</td></tr><tr><td>0.15</td><td>0.3333</td><td>0.3472</td><td>\u2212</td><td>0.2604</td></tr><tr><td>0.20</td><td>0.2757</td><td>0.2861</td><td>\u2212</td><td>0.2306</td></tr><tr><td>0.25</td><td>0.2525</td><td>0.2772</td><td>\u2212</td><td>0.2233</td></tr><tr><td>0.30</td><td>0.2368</td><td>0.2535</td><td>\u2212</td><td>0.2066</td></tr></table>", |
|
"html": null, |
|
"text": "G1K3. \u03bb = 5. Dashes indicate no meaningful results." |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td colspan=\"2\">: G2K3. \u03bb = 5.</td></tr><tr><td>r</td><td colspan=\"2\">BIC (5K) MC (20K)</td><td>BSE</td><td>LEAD</td></tr><tr><td>0.05</td><td>0.6000</td><td>0.5800</td><td colspan=\"2\">0.4200 0.5400</td></tr><tr><td>0.10</td><td>0.4200</td><td>0.4200</td><td colspan=\"2\">0.3533 0.3933</td></tr><tr><td>0.15</td><td>0.3427</td><td>0.3560</td><td colspan=\"2\">0.2980 0.3147</td></tr><tr><td>0.20</td><td>0.3033</td><td>0.3213</td><td colspan=\"2\">0.2780 0.2767</td></tr><tr><td>0.25</td><td>0.2993</td><td>0.2776</td><td colspan=\"2\">0.2421 0.2397</td></tr><tr><td>0.30</td><td>0.2743</td><td>0.2750</td><td colspan=\"2\">0.2170 0.2054</td></tr></table>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td colspan=\"2\">: G3K3. \u03bb = 5.</td></tr><tr><td>r</td><td colspan=\"2\">BIC (5K) MC (20K)</td><td>BSE</td><td>LEAD</td></tr><tr><td>0.05</td><td>0.9600</td><td>0.9600</td><td colspan=\"2\">0.8400 0.9600</td></tr><tr><td>0.10</td><td>0.7600</td><td>0.7600</td><td colspan=\"2\">0.6800 0.7000</td></tr><tr><td>0.15</td><td>0.6133</td><td>0.6000</td><td colspan=\"2\">0.5867 0.5133</td></tr><tr><td>0.20</td><td>0.5233</td><td>0.5233</td><td colspan=\"2\">0.4967 0.4533</td></tr><tr><td>0.25</td><td>0.4367</td><td>0.4367</td><td colspan=\"2\">0.3960 0.3840</td></tr><tr><td>0.30</td><td>0.4033</td><td>0.4033</td><td colspan=\"2\">0.3640 0.3673</td></tr><tr><td/><td/><td>0 (411.0/65.0)</td><td/></tr></table>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>r</td><td>G1K3 G2K3 G3K3</td></tr><tr><td colspan=\"2\">0.05 0.3333 0.5400 0.9600</td></tr><tr><td colspan=\"2\">0.10 0.3333 0.3867 0.7800</td></tr><tr><td colspan=\"2\">0.15 0.2917 0.3960 0.5867</td></tr><tr><td colspan=\"2\">0.20 0.2549 0.3373 0.5200</td></tr><tr><td colspan=\"2\">0.25 0.2480 0.2910 0.4347</td></tr><tr><td colspan=\"2\">0.30 0.2594 0.2652 0.4100</td></tr></table>", |
|
"html": null, |
|
"text": "MC (20K). \u03bb = 1." |
|
} |
|
} |
|
} |
|
} |