|
{ |
|
"paper_id": "N07-1041", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:48:00.050004Z" |
|
}, |
|
"title": "Combining Probability-Based Rankers for Action-Item Detection", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Bennett", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Microsoft Research * One Microsoft Way Redmond", |
|
"location": { |
|
"postCode": "98052", |
|
"region": "WA" |
|
} |
|
}, |
|
"email": "paul.n.bennett@microsoft.com" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Carbonell", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper studies methods that automatically detect action-items in e-mail, an important category for assisting users in identifying new tasks, tracking ongoing ones, and searching for completed ones. Since action-items consist of a short span of text, classifiers that detect action-items can be built from a document-level or a sentence-level view. Rather than commit to either view, we adapt a contextsensitive metaclassification framework to this problem to combine the rankings produced by different algorithms as well as different views. While this framework is known to work well for standard classification, its suitability for fusing rankers has not been studied. In an empirical evaluation, the resulting approach yields improved rankings that are less sensitive to training set variation, and furthermore, the theoretically-motivated reliability indicators we introduce enable the metaclassifier to now be applicable in any problem where the base classifiers are used.", |
|
"pdf_parse": { |
|
"paper_id": "N07-1041", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper studies methods that automatically detect action-items in e-mail, an important category for assisting users in identifying new tasks, tracking ongoing ones, and searching for completed ones. Since action-items consist of a short span of text, classifiers that detect action-items can be built from a document-level or a sentence-level view. Rather than commit to either view, we adapt a contextsensitive metaclassification framework to this problem to combine the rankings produced by different algorithms as well as different views. While this framework is known to work well for standard classification, its suitability for fusing rankers has not been studied. In an empirical evaluation, the resulting approach yields improved rankings that are less sensitive to training set variation, and furthermore, the theoretically-motivated reliability indicators we introduce enable the metaclassifier to now be applicable in any problem where the base classifiers are used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "From business people to the everyday person, email plays an increasingly central role in a modern lifestyle. With this shift, e-mail users desire improved tools to help process, search, and organize the information present in their ever-expanding inboxes. A system that ranks e-mails according to the From: Henry Hutchins <hhutchins@innovative.company.com> To: Sara Smith; Joe Johnson; William Woolings Subject: meeting with prospective customers Hi All, I'd like to remind all of you that the group from GRTY will be visiting us next Friday at 4:30 p.m. The schedule is: + 9:30 a.m. Informal Breakfast and Discussion in Cafeteria + 10:30 a.m. Company Overview + 11:00 a.m. Individual Meetings (Continue Over Lunch) + 2:00 p.m. Tour of Facilities + 3:00 p.m. Sales Pitch In order to have this go off smoothly, I would like to practice the presentation well in advance. As a result, I will need each of your parts by Wednesday. Keep up the good work! -Henry Figure 1 : An E-mail with Action-Item (italics added). likelihood of containing \"to-do\" or action-items can alleviate a user's time burden and is the subject of ongoing research throughout the literature.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 957, |
|
"end": 965, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In particular, an e-mail user may not always process all e-mails, but even when one does, some emails are likely to be of greater response urgency than others. These messages often contain actionitems. Thus, while importance and urgency are not equal to action-item content, an effective action-item detection system can form one prominent subcomponent in a larger prioritization system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Action-item detection differs from standard text classification in two important ways. First, the user is interested both in detecting whether an email contains action-items and in locating exactly where these action-item requests are contained within the email body. Second, action-item detection attempts to recover the sender's intent -whether she means to elicit response or action on the part of the receiver.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we focus on the primary problem of presenting e-mails in a ranked order according to their likelihood of containing an action-item. Since action-items typically consist of a short text spana phrase, sentence, or small passage -supervised input to a learning system can either come at the document-level where an e-mail is labeled yes/no as to whether it contains an action-item or at the sentence-level where each span that is an actionitem is explicitly identified. Then, a corresponding document-level classifier or aggregated predictions from a sentence-level classifier can be used to estimate the overall likelihood for the e-mail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Rather than commit to either view, we use a combination technique to capture the information each viewpoint has to offer on the current example. The STRIVE approach has been shown to provide robust combinations of heterogeneous models for standard topic classification by capturing areas of high and low reliability via the use of reliability indicators.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, using STRIVE in order to produce improved rankings has not been previously studied. Furthermore, while they introduce some reliability indicators that are general for text classification problems as well as ones specifically tied to na\u00efve Bayes models, they do not address other classification models. We introduce a series of reliability indicators connected to areas of high/low reliability in kNN, SVMs, and decision trees to allow the combination model to include such factors as the sparseness of training example neighbors around the current example being classified. In addition, we provide a more formal motivation for the role these variables play in the resulting metaclassification model. Empirical evidence demonstrates that the resulting approach yields a context-sensitive combination model that improves the quality of rankings generated as well as reducing the variance of the ranking quality across training splits.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In contrast to related combination work, we focus on improving rankings through the use of a metaclassification framework. In addition, rather than simply focusing on combining models from different classification algorithms, we also examine combining models that have different views, in that both the qualitative nature of the labeled data and the application of the learned base models differ. Furthermore, we improve upon work on context-sensitive combination by introducing reliability indicators which model the sensitivity of a classifier's output around the current prediction point. Finally, we focus on the application of these methods to action-item dataa growing area of interest which has been demonstrated to behave differently than more standard text classification problems (e.g. topic) in the literature (Bennett and Carbonell, 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 821, |
|
"end": 850, |
|
"text": "(Bennett and Carbonell, 2005)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There are three basic problems for action-item detection. (1) Document detection: Classify an e-mail as to whether or not it contains an action-item. (2) Document ranking: Rank the e-mails such that all e-mail containing action-items occur as high as possible in the ranking. (3) Sentence detection: Classify each sentence in an e-mail as to whether or not it is an action-item.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action-Item Detection", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Here we focus on the document ranking problem. Improving the overall ranking not only helps users find e-mails with action-items quicker (Bennett and Carbonell, 2005) but can decrease response times and help ensure that key e-mails are not overlooked.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 166, |
|
"text": "(Bennett and Carbonell, 2005)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action-Item Detection", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Since a typical user will eventually process all received mail, we assume that producing a quality ranking will more directly measure the impact on the user than accuracy or F1. Therefore, we focus on ROC curves and area under the curve (AUC) since both reflect the quality of the ranking produced.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action-Item Detection", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "One of the most common approaches to classifier combination is stacking (Wolpert, 1992) . In this approach, a metaclassifier observes a past history of classifier predictions to learn how to weight the classifiers according to their demonstrated accuracies and interactions. To build the history, cross-validation over the training set is used to obtain predictions from each base classifier. Next, a metalevel representation of the training set is constructed where each example consists of the class label and the predictions of the base classifiers. Finally, a metaclassifier is trained on the metalevel representation to learn a model of how to combine the base classifiers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 87, |
|
"text": "(Wolpert, 1992)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Classifiers with Metaclassifiers", |
|
"sec_num": "2.2" |
|
}, |
|
{

"text": "However, it might be useful to augment the history with information other than the predicted probabilities. For example, during peer review, reviewers typically provide both a 1-5 acceptance rating and a 1-5 confidence. The first of these is related to an estimate of class membership, P(\"accept\" | paper), but the second is closer to a measure of expertise or a self-assessment of the reviewer's reliability on an example-by-example basis.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Combining Classifiers with Metaclassifiers",

"sec_num": "2.2"

},

{

"text": "[Figure 2: Architecture of STRIVE. In STRIVE, an additional layer of learning is added where the metaclassifier can use the context established by the reliability indicators and the output of the base classifiers to make an improved decision.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Combining Classifiers with Metaclassifiers",

"sec_num": "2.2"

},
|
{ |
|
"text": "Automatically deriving such self-assessments for classification algorithms is non-trivial. The Stacked Reliability Indicator Variable Ensemble framework, or STRIVE, demonstrates how to extend stacking by incorporating such self-assessments as a layer of reliability indicators and introduces a candidate set of functions .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Classifiers with Metaclassifiers", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The STRIVE architecture is depicted in Figure 2 . From left to right: (1) a bag-of-words representation of the document is extracted and used by the base classifiers to predict class probabilities; (2) reliability indicator functions use the predicted probabilities and the features of the document to characterize whether this document falls within the \"expertise\" of the classifiers; (3) a metalevel classifier uses the base classifier predictions and the reliability indicators to make a more reliable combined prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 47, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combining Classifiers with Metaclassifiers", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "From the perspective of improving action-item rankings, we are interested in whether stacking or striving can improve the quality of rankings. However, we hypothesize that striving will perform better since it can learn a model that varies the combination rule based on the current example and thus, better capture when a particular classifier at the documentlevel or sentence-level, bag-of-words or n-gram representation, etc. will produce a reliable prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Classifiers with Metaclassifiers", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "While STRIVE has been shown to provide robust combination for topic classification, a formal motivation is lacking for the type of reliability indicators that are the most useful in classifier combination.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formally Motivating Reliability Indicators", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Assume we restrict our choice of metaclassifier to a linear model. One natural choice is to rank the e-mails according to the estimated posterior probability,P (class = action item | x), but in a linear combination framework it is actually more convenient to work with the estimated log-odds or logit transform which is monotone in the posterior,\u03bb = logP (class=action item|x) 1\u2212P (class=action item|x) (Kahn, 2004) . Now, consider applying a metaclassifier to a single base classifier. Given only a classifier's probability estimates, a metaclassifier cannot improve on the estimates if they are well-calibrated (DeGroot and Fienberg, 1986) . Thus a metaclassifier applied to a single base classifier corresponds to recalibration (Kahn, 2004) .", |
|
"cite_spans": [ |
|
|
{ |
|
"start": 403, |
|
"end": 415, |
|
"text": "(Kahn, 2004)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 641, |
|
"text": "(DeGroot and Fienberg, 1986)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 731, |
|
"end": 743, |
|
"text": "(Kahn, 2004)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formally Motivating Reliability Indicators", |
|
"sec_num": "2.3" |
|
}, |
|
{

"text": "Assume each of the n base models gives an uncalibrated log-odds estimate \u03bb\u0302_i. Then the combination model has the form \u03bb\u0302*(x) = W_0(x) + \u2211_{i=1}^{n} W_i(x) \u03bb\u0302_i(x), where the W_i are example-dependent weight functions that the combination model learns. The obvious implication is that our reliability indicators can be informed by the optimal values for the weighting functions.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Formally Motivating Reliability Indicators",

"sec_num": "2.3"

},
|
{

"text": "We can determine the optimal weights in a simplified case with a single base classifier by assuming we are given \"true\" log-odds values, \u03bb, and a family of distributions \u2206_x such that \u2206_x = p(z | x) encodes what is local to x by giving the probability of drawing a point z near to x. We use \u2206 instead of \u2206_x for notational simplicity. Since \u2206 encodes the example-dependent nature of the weights, we can drop x from the weight functions. To find weights that minimize the squared difference between the true log-odds and the estimated log-odds in the \u2206 vicinity of x, we can solve the standard regression problem argmin_{w_0, w_1} E_\u2206[(w_1 \u03bb\u0302 + w_0 \u2212 \u03bb)^2].",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Formally Motivating Reliability Indicators",

"sec_num": "2.3"

},
|
{ |
|
"text": "Under the assumption VAR \u2206 \u03bb = 0, this yields:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formally Motivating Reliability Indicators", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "w 0 = E \u2206 [\u03bb] \u2212 w 1 E \u2206 \u03bb (1) w 1 = COV \u2206 \u03bb , \u03bb VAR \u2206 \u03bb = \u03c3 \u03bb \u03c3\u03bb \u03c1 \u03bb,\u03bb", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Formally Motivating Reliability Indicators", |
|
"sec_num": "2.3" |
|
}, |
|
{

"text": "where \u03c3 and \u03c1 denote the standard deviation and correlation coefficient under \u2206, respectively. The first parameter is a measure of calibration that addresses the question, \"How far off on average is the estimated log-odds from the true log-odds in the local context?\" The second is a measure of correlation: \"How closely does the estimated log-odds vary with the true log-odds?\" Note that the second parameter depends on the local sensitivity of the base classifier, VAR_\u2206^{1/2}[\u03bb\u0302] = \u03c3_\u03bb\u0302. Although we do not have true log-odds, we can introduce local density models to estimate the local sensitivity of the model. In particular, we introduce a series of reliability indicators by first defining a \u2206 distribution and computing either VAR_\u2206[\u03bb\u0302] and E_\u2206[\u03bb\u0302] or the closely related terms VAR_\u2206[\u03bb\u0302(z) \u2212 \u03bb\u0302(x)] and E_\u2206[\u03bb\u0302(z) \u2212 \u03bb\u0302(x)]. We use the resulting values for an example as features for a linear metaclassifier. Thus we use a context-dependent bias term but leave the more general model for future work.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Formally Motivating Reliability Indicators",

"sec_num": "2.3"

},
|
{

"text": "As discussed in Section 2.3, we wish to define local distributions in order to compute the local sensitivity and similar terms for the base classification models. To do so, we define local distributions that have the same \"flavor\" as the base classification model. First, consider the kNN classifier. Since we are concerned with how the decision function would change as we move locally around the current prediction point, it is natural to consider the set of shifts defined by the k neighbors. In particular, let d_i denote the document that has been shifted by a factor \u03b2_i toward the ith neighbor, i.e., d_i = d + \u03b2_i (n_i \u2212 d).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model-Based Reliability Indicators",

"sec_num": "2.4"

},
|
{

"text": "We use the largest \u03b2_i such that the closest neighbor to the new point is the original document, i.e., the boundary of the Voronoi cell (see Figure 3). Clearly, \u03b2_i will not exceed 0.5, and we can find it efficiently using a simple bisection algorithm. Now, let \u2206 be a uniform point-mass distribution over the shifted points and \u03bb\u0302_kNN the output score of the kNN model. Given this definition of \u2206, it is now straightforward to compute the kNN-based reliability indicators E_\u2206[\u03bb\u0302_kNN(z) \u2212 \u03bb\u0302_kNN(x)] and VAR_\u2206^{1/2}[\u03bb\u0302_kNN(z) \u2212 \u03bb\u0302_kNN(x)].",

"cite_spans": [],

"ref_spans": [

{

"start": 141,

"end": 150,

"text": "Figure 3)",

"ref_id": "FIGREF0"

}

],

"eq_spans": [],

"section": "Model-Based Reliability Indicators",

"sec_num": "2.4"

},
|
{

"text": "Similarly, we define variables for the SVM classifier by considering a document's locality in terms of nearby support vectors from the set of support vectors, V. To determine \u03b2_i, we define it in terms of the closest support vector in V to d. Let \u03b5 be half the distance to the nearest point in V, i.e., \u03b5 = (1/2) min_{v \u2208 V} ||v \u2212 d||. Then \u03b2_i = \u03b5 / ||v_i \u2212 d||.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model-Based Reliability Indicators",

"sec_num": "2.4"

},
|
{

"text": "Thus, the shift vectors are all rescaled to have the same length. Now, we must define a probability for each shift. We use a simple exponential based on \u03b5 and the relative distance from the document to the support vector defining the shift: let d_i \u223c \u2206, where P_\u2206(d_i) \u221d exp(\u2212||v_i \u2212 d|| + 2\u03b5) and \u2211_{i=1}^{|V|} P_\u2206(d_i) = 1. Given this definition of \u2206, we compute the SVM-based reliability indicators: E_\u2206[\u03bb\u0302_SVM(z) \u2212 \u03bb\u0302_SVM(x)] and VAR_\u2206^{1/2}[\u03bb\u0302_SVM(z) \u2212 \u03bb\u0302_SVM(x)].",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model-Based Reliability Indicators",

"sec_num": "2.4"

},
|
{ |
|
"text": "Space prevents us from presenting all the derivations here. However, we also define decision-tree based variables where the locality distribution gives high probability to documents that would land in nearby leaves. For a multinomial na\u00efve Bayes model (NB), we define a distribution of documents identical to the prediction document except having an occurrence of a single feature deleted. For the multivariate Bernoulli na\u00efve Bayes (MBNB) model that models feature presence/absence, we use a distribution over all documents that has one presence/absence bit flipped from the prediction document. It is interesting to note that the variables from the na\u00efve Bayes models can be shown to be equivalent to variables introduced by -although those were derived in a different fashion by analyzing the weight a single feature carries with respect to the overall prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model-Based Reliability Indicators", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Furthermore, from this starting point, we go on to define similar variables of possible interest. Including the two for each model described here, we define 10 kNN variables, 5 SVM variables, 2 decision-tree variables, 6 NB model based variables, and 6 MBNB variables. We describe these variables as well as implementation details and computational complexity results in (Bennett, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 386, |
|
"text": "(Bennett, 2006)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model-Based Reliability Indicators", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Our corpus consists of e-mails obtained from volunteers at an educational institution and covers subjects such as: organizing a research workshop, arranging for job-candidate interviews, publishing proceedings, and talk announcements. After eliminating duplicate e-mails, the corpus contains 744 messages with a total of 6301 automatically segmented sentences. A human panel labeled each phrase or sentence that contained an explicit request for information or action. 416 emails have no action-items and 328 e-mails contain action-items. Additional information such as annotator agreement, distribution of message length, etc. can be found in (Bennett and Carbonell, 2005 ). An anonymized corpus is available at http://www.cs.cmu.edu/\u02dcpbennett/action-item-dataset.html.", |
|
"cite_spans": [ |
|
{ |
|
"start": 644, |
|
"end": 672, |
|
"text": "(Bennett and Carbonell, 2005", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We use two types of feature representation: a bagof-words representation which uses all unigram tokens as the feature pool; and a bag-of-n-grams where n includes all n-grams where n \u2264 4. For both representations at both the document-level and sentence-level, we used only the top 300 features by the chi-squared statistic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Representation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We used a s-cut variant of kNN common in text classification (Yang, 1999) and a tfidf-weighting of the terms with a distance-weighted vote of the neighbors to compute the output. k was set to be 2( log 2 N + 1) where N is the number of training points. 3 The score used as the uncalibrated logodds estimate of being an action-item is:", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 73, |
|
"text": "(Yang, 1999)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 253, |
|
"end": 254, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document-Level Classifiers kNN", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u03bb kNN (x) = n\u2208kNN(x)|c(n)= action item cos(x, n) \u2212 n\u2208kNN(x)|c(n) = action item cos(x, n).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document-Level Classifiers kNN", |
|
"sec_num": "3.3" |
|
}, |
|
{

"text": "We used a linear SVM as implemented in the SVMlight package v6.01 (Joachims, 1999) with a tfidf feature representation and L2-norm. All default settings were used. The SVM's margin score, \u2211_i \u03b1_i y_i K(x_i, x), has been shown empirically to behave like an uncalibrated log-odds estimate (Platt, 1999).",

"cite_spans": [

{

"start": 66,

"end": 82,

"text": "(Joachims, 1999)",

"ref_id": "BIBREF11"

},

{

"start": 287,

"end": 300,

"text": "(Platt, 1999)",

"ref_id": "BIBREF16"

}

],

"ref_spans": [],

"eq_spans": [],

"section": "SVM",

"sec_num": null

},
|
{ |
|
"text": "For the decision-tree implementation, we used the WinMine toolkit and refer to this as Dnet below (Microsoft Corporation, 2001) . Dnet builds decision trees using a Bayesian machine learning algorithm (Chickering et al., 1997; Heckerman et al., 2000) . The estimated log-odds is computed from a Laplace correction to the empirical probability at a leaf node.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 127, |
|
"text": "(Microsoft Corporation, 2001)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 226, |
|
"text": "(Chickering et al., 1997;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 250, |
|
"text": "Heckerman et al., 2000)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decision Trees", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use a multinomial na\u00efve Bayes (NB) and a multivariate Bernoulli na\u00efve Bayes classifier (MBNB) (McCallum and Nigam, 1998) . For these classifiers, we smoothed word and class probabilities using a Bayesian estimate (with the word prior) and a Laplace m-estimate, respectively. Since these are probabilistic, they issue log-odds estimates directly.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 123, |
|
"text": "(McCallum and Nigam, 1998)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Na\u00efve Bayes", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Each e-mail is automatically segmented into sentences using RASP (Carroll, 2002) . Since the corpus has fine grained labels, we can train classifiers to classify a sentence. Each classifier in Section 3.3 is also used to learn a sentence classifier. However, we then must make a document-level prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 80, |
|
"text": "(Carroll, 2002)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence-Level Classifiers", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "In order to produce a ranking score, the confidence that the document contains an action-item is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence-Level Classifiers", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "\u03bb(d) = 1 n(d) s\u2208d|\u03c0(s)=1\u03bb (s), \u2203s\u2208d|\u03c0(s) = 1 1 n(d) max s\u2208d\u03bb (s) o.w.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence-Level Classifiers", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where s is a sentence in document d, \u03c0 is the classifier's 1/0 prediction,\u03bb is the score the classifier assigns as its confidence that \u03c0(s) = 1, and n(d) is the greater of 1 and the number of (unigram) tokens in the document. In other words, when any sentence is predicted positive, the document score is the length normalized sum of the sentence scores above threshold. When no sentence is predicted positive, the document score is the maximum sentence score normalized by length. The length normalization compensates for the fact that we are more likely to emit a false positive the longer a document is.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence-Level Classifiers", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "To examine the hypothesis that the reliability indicators provide utility beyond the information present in the output of the 20 base classifiers (2 representations * 2 views * 5 classifiers), we construct a linear stacking model which uses only the base classifier outputs and no reliability indicators as a baseline. For the implementation, we use SVM light with default settings. The inputs to this classifier are normalized to have zero mean and a scaled variance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stacking", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "Since we are constructing base classifiers for both the bag-of-words and bag-of-n-grams representations, this gives 58 reliability indicators from computing the variables in Section 2.4 for the documentlevel classifiers (58 = 2 * [6 + 6 + 10 + 5 + 2]).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Striving", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "Although the model-based indicators are defined for each sentence prediction, to use them at the document-level we must somehow combine the reliability indicators over each sentence. The simplest method is to average each classifier-based indicator across the sentences in the document. We do so and thus obtain another 58 reliability indicators. Furthermore, our model might benefit from some of the structure a sentence-level classifier offers when combining document predictions. Analogous to the sensitivity of each base model, we can consider such indicators as the mean and standard deviation of the classifier confidences across the sentences within a document. For each sentence-level base classifier, these become two more indicators which we can benefit from when combining document predictions. This introduces 20 more variables (20 = 2 representations * 2 * 5 classifiers).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Striving", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "Finally, we include the 2 basic voting statistic reliability-indicators (PercentPredictingPositive and PercentAgreeWBest) that found useful for topic classification. This yields a total of 138 reliability-indicators (138 = 58 + 20 + 58 + 2). With the 20 classifier outputs, there are a total of 158 input features for striving to handle.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Striving", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "As with stacking, we use SVM light with default settings and normalize the inputs to this classifier to have zero mean and a scaled variance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Striving", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "We wish to improve the rankings of the e-mails in the inbox such that action-item e-mails occur higher in the inbox. Therefore, we use the area under the curve (AUC) of an ROC curve as a measure of ranking performance. AUC is a measure of overall model and ranking quality that has gained wider adoption recently and is equivalent to the Mann-Whitney-Wilcoxon sum of ranks test (Hanley and McNeil, 1982) . To put improvement in perspective, we can write our relative reduction in residual area (RRA) as 1\u2212AUC 1\u2212AUC baseline . We present gains relative to the best AUC performer (bRRA), and relative to perfect dynamic selection performance, (dRRA), which assumes we could accurately dynamically choose the best classifier per cross-validation run.", |
|
"cite_spans": [ |
|
{ |
|
"start": 378, |
|
"end": 403, |
|
"text": "(Hanley and McNeil, 1982)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Measures", |
|
"sec_num": "3.7" |
|
}, |
|
{ |
|
"text": "The F1 measure is the harmonic mean of precision and recall and is common throughout text classification (Yang and Liu, 1999) . Although we are not concerned with F1 performance here, some users of the system might be interested in improving ranking while having negligible negative effect on F1. Therefore, we examine F1 to ensure that an improvement in ranking will not come at the cost of a statistically significant decrease in F1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 125, |
|
"text": "(Yang and Liu, 1999)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Measures", |
|
"sec_num": "3.7" |
|
}, |
|
{ |
|
"text": "To evaluate performance of the combination systems, we perform 10-fold cross-validation and compute the average performance. For significance tests, we use a two-tailed t-test (Yang and Liu, 1999) to compare the values obtained during each crossvalidation fold with a p-value of 0.05.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 196, |
|
"text": "(Yang and Liu, 1999)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Methodology", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "We examine two hypotheses: Stacking will outperform all of the base classifiers; Striving will outperform all the base classifiers and stacking. Table 1 presents the summary of results. The best performer in each column is in bold. If a combination method statistically significantly outperforms all base classifiers, it is underlined. Table 1 : Base classifier and combiner performance Now, we turn to the issue of whether combination improves the ranking of the documents. Examining the results in Table 1 , we see that STRIVE statistically significantly beats every other classifier according to AUC. Stacking outperforms the base classifiers with respect to AUC but not statistically significantly.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 152, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 343, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 500, |
|
"end": 507, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Methodology", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "Examining F1, we see that neither combination method outperforms the best base classifier, NB (sent,ngram). If we examine the hypothesis of whether this base classifier significantly outperforms either combination method, the hypothesis is rejected. Thus, STRIVE improves the overall ranking with a negligible effect on F1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results & Discussion", |
|
"sec_num": "3.9" |
|
}, |
|
{ |
|
"text": "Finally, we compare the ROC curves of striving, stacking, and two of the most competitive base classifiers in Figure 4 . We see that striving loses by a slight amount to stacking early in the curve but still beats the base classifiers. Later in the curve, it dominates all the classifiers. If we examine the curves using error bars, we see that the variance of STRIVE drops faster than the other classifiers as we move further along the x-axis. Thus, STRIVE's ranking quality varies less with changes to the training set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 118, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results & Discussion", |
|
"sec_num": "3.9" |
|
}, |
|
{ |
|
"text": "Several researchers have considered text classification tasks similar to action-item detection. Cohen et al. (2004) describe an ontology of \"speech acts\", such as \"Propose a Meeting\", and attempt to predict when an e-mail contains one of these speech acts. Corston-Oliver et al. (2004) consider detecting items in e-mail to \"Put on a To-Do List\" using a sentence-level classifier. In earlier work (Bennett and Carbonell, 2005) , we demonstrated that sentence-level classifiers typically outperform document-level classifiers on this problem and examined the underlying reasons why this was the case. Furthermore, we presented user studies demonstrating that users identify action-items more rapidly when using the system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 115, |
|
"text": "Cohen et al. (2004)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 285, |
|
"text": "Corston-Oliver et al. (2004)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 426, |
|
"text": "(Bennett and Carbonell, 2005)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In terms of classifier combination, a wide variety of work has been done in the arena. The STRIVE metaclassification approach extended Wolpert's stacking framework (Wolpert, 1992) to use reliability indicators. In recent work, Lee et al. (2006) derive variance estimates for na\u00efve Bayes and tree-augmented na\u00efve Bayes and use them in the combination model. Our work complements theirs by laying groundwork for how to compute variance estimates for models such as kNN that have no obvious probabilistic component.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 179, |
|
"text": "(Wolpert, 1992)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 244, |
|
"text": "Lee et al. (2006)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "While there are many interesting directions for future work, the most interesting is to directly integrate the sensitivity and calibration quantities derived into the more general model discussed in Section 2.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work and Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, we took an existing approach to context-dependent combination, STRIVE, that used many ad hoc reliability indicators and derived a formal motivation for classifier model-based local sensitivity indicators. These new reliability indicators are efficiently computable, and the resulting combination outperformed a vast array of alternative base classifiers for ranking in an action-item detection task. Furthermore, the combination results yielded a more robust performance relative to variation in the training sets. Finally, we demonstrated that the STRIVE method could be successfully applied to ranking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work and Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We assume that the minimum distance is not zero. If it is zero, then we return zero for all of the variables.2 As is standard to handle different document lengths, we take the distance between documents after they have been normalized to the unit sphere.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This rule is not guaranteed be optimal for a particular value of N but is motivated by theoretical results which show such a rule converges to the optimal classifier as the number of training points increases(Devroye et al., 1996).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. NBCHD030010. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA), or the Department of Interior-National Business Center (DOI-NBC).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Feature representation for effective action-item detection", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Bennett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "SIGIR '05, Beyond Bag-of-Words Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul N. Bennett and Jaime Carbonell. 2005. Feature repre- sentation for effective action-item detection. In SIGIR '05, Beyond Bag-of-Words Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The combination of text classifiers using reliability indicators", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Bennett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Dumais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Horvitz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Information Retrieval", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "67--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul N. Bennett, Susan T. Dumais, and Eric Horvitz. 2005. The combination of text classifiers using reliability indica- tors. Information Retrieval, 8:67-100.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Building Reliable Metaclassifiers for Text Learning", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bennett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul N. Bennett. 2006. Building Reliable Metaclassifiers for Text Learning. Ph.D. thesis, CMU. CMU-CS-06-121.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "High precision extraction of grammatical relations", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "COLING '02", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Carroll. 2002. High precision extraction of grammatical relations. In COLING '02.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A Bayesian approach to learning Bayesian networks with local structure", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Chickering", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Heckerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Meek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "UAI '97", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D.M. Chickering, D. Heckerman, and C. Meek. 1997. A Bayesian approach to learning Bayesian networks with lo- cal structure. In UAI '97.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Learning to classify email into \"speech acts", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Vitor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Carvalho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "EMNLP '04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William W. Cohen, Vitor R. Carvalho, and Tom M. Mitchell. 2004. Learning to classify email into \"speech acts\". In EMNLP '04.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Task-focused summarization of email", |
|
"authors": [ |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Corston-Oliver", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Ringger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Gamon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Campbell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Text Summarization Branches Out: Proceedings of the ACL '04 Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Simon Corston-Oliver, Eric Ringger, Michael Gamon, and Richard Campbell. 2004. Task-focused summarization of email. In Text Summarization Branches Out: Proceedings of the ACL '04 Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Comparing probability forecasters: Basic binary concepts and multivariate extensions", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Morris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Degroot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Fienberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Bayesian Inference and Decision Techniques", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Morris H. DeGroot and Stephen E. Fienberg. 1986. Comparing probability forecasters: Basic binary concepts and multivari- ate extensions. In P. Goel and A. Zellner, editors, Bayesian Inference and Decision Techniques. Elsevier.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A Probabilistic Theory of Pattern Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Luc", |
|
"middle": [], |
|
"last": "Devroye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L\u00e1szl\u00f3", |
|
"middle": [], |
|
"last": "Gy\u00f6rfi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e1bor", |
|
"middle": [], |
|
"last": "Lugosi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luc Devroye, L\u00e1szl\u00f3 Gy\u00f6rfi, and G\u00e1bor Lugosi. 1996. A Prob- abilistic Theory of Pattern Recognition. Springer-Verlag, New York, NY.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The meaning and use of the area under a recever operating characteristic (roc) curve", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hanley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mcneil", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1982, |
|
"venue": "Radiology", |
|
"volume": "143", |
|
"issue": "1", |
|
"pages": "29--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James A. Hanley and Barbara J. McNeil. 1982. The meaning and use of the area under a recever operating characteristic (roc) curve. Radiology, 143(1):29-36.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Dependency networks for inference, collaborative filtering, and data visualization", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Heckerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Chickering", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Meek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rounthwaite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Kadie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "JMLR", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "49--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Heckerman, D.M. Chickering, C. Meek, R. Rounthwaite, and C. Kadie. 2000. Dependency networks for inference, collaborative filtering, and data visualization. JMLR, 1:49- 75.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Making large-scale svm learning practical", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Advances in Kernel Methods -Support Vector Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsten Joachims. 1999. Making large-scale svm learning practical. In Bernhard Sch\u00f6lkopf, Christopher J. Burges, and Alexander J. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Bayesian Aggregation of Probability Forecasts on Categorical Events", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kahn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph M. Kahn. 2004. Bayesian Aggregation of Probabil- ity Forecasts on Categorical Events. Ph.D. thesis, Stanford University, June.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Using query-specific variance estimates to combine bayesian classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Chi-Hoon", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Russ", |
|
"middle": [], |
|
"last": "Greiner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaojun", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ICML '06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chi-Hoon Lee, Russ Greiner, and Shaojun Wang. 2006. Using query-specific variance estimates to combine bayesian class- ifiers. In ICML '06.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A comparison of event models for naive bayes text classification", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kamal", |
|
"middle": [], |
|
"last": "Nigam", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "AAAI '98", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew McCallum and Kamal Nigam. 1998. A comparison of event models for naive bayes text classification. In AAAI '98, Workshops. TR WS-98-05.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Microsoft Corporation", |
|
"authors": [], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Microsoft Corporation. 2001. WinMine Toolkit v1.0. http://research.microsoft.com/ dmax/WinMine/ContactInfo.html.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Platt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Advances in Large Margin Classifiers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John C. Platt. 1999. Probabilistic outputs for support vec- tor machines and comparisons to regularized likelihood methods. In Alexander J. Smola, Peter Bartlett, Bern- hard Scholkopf, and Dale Schuurmans, editors, Advances in Large Margin Classifiers. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Stacked generalization", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wolpert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Neural Networks", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "241--259", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David H. Wolpert. 1992. Stacked generalization. Neural Net- works, 5:241-259.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A re-examination of text categorization methods", |
|
"authors": [ |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "SIGIR '99", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yiming Yang and Xin Liu. 1999. A re-examination of text categorization methods. In SIGIR '99.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "An evaluation of statistical approaches to text categorization", |
|
"authors": [ |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Information Retrieval", |
|
"volume": "1", |
|
"issue": "1/2", |
|
"pages": "67--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yiming Yang. 1999. An evaluation of statistical approaches to text categorization. Information Retrieval, 1(1/2):67-88.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Illustration of the kNN shifts produced for a prediction point x using the numbered points as its neighborhood.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Figure 4: ROC curves (rotated).", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
} |
|
} |
|
} |
|
} |