{
"paper_id": "H05-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:34:50.959581Z"
},
"title": "Enhanced Answer Type Inference from Questions using Sequential Models",
"authors": [
{
"first": "Vijay",
"middle": [],
"last": "Krishnan",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sujatha",
"middle": [],
"last": "Das",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Soumen",
"middle": [],
"last": "Chakrabarti",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Question classification is an important step in factual question answering (QA) and other dialog systems. Several attempts have been made to apply statistical machine learning approaches, including Support Vector Machines (SVMs) with sophisticated features and kernels. Curiously, the payoff beyond a simple bag-ofwords representation has been small. We show that most questions reveal their class through a short contiguous token subsequence, which we call its informer span. Perfect knowledge of informer spans can enhance accuracy from 79.4% to 88% using linear SVMs on standard benchmarks. In contrast, standard heuristics based on shallow pattern-matching give only a 3% improvement, showing that the notion of an informer is non-trivial. Using a novel multi-resolution encoding of the question's parse tree, we induce a Conditional Random Field (CRF) to identify informer spans with about 85% accuracy. Then we build a meta-classifier using a linear SVM on the CRF output, enhancing accuracy to 86.2%, which is better than all published numbers.",
"pdf_parse": {
"paper_id": "H05-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "Question classification is an important step in factual question answering (QA) and other dialog systems. Several attempts have been made to apply statistical machine learning approaches, including Support Vector Machines (SVMs) with sophisticated features and kernels. Curiously, the payoff beyond a simple bag-ofwords representation has been small. We show that most questions reveal their class through a short contiguous token subsequence, which we call its informer span. Perfect knowledge of informer spans can enhance accuracy from 79.4% to 88% using linear SVMs on standard benchmarks. In contrast, standard heuristics based on shallow pattern-matching give only a 3% improvement, showing that the notion of an informer is non-trivial. Using a novel multi-resolution encoding of the question's parse tree, we induce a Conditional Random Field (CRF) to identify informer spans with about 85% accuracy. Then we build a meta-classifier using a linear SVM on the CRF output, enhancing accuracy to 86.2%, which is better than all published numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "An important step in factual question answering (QA) and other dialog systems is to classify the question (e.g., Who painted Olympia?) to the anticipated type of the answer (e.g., person). This step is called \"question classification\" or \"answer type identification\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The answer type is picked from a hand-built taxonomy having dozens to hundreds of answer types (Harabagiu et al., 2000; Hovy et al., 2001 ; Kwok et al., 2001; Zheng, 2002; Dumais et al., 2002) . QA * soumen@cse.iitb.ac.in systems can use the answer type to short-list answer tokens from passages retrieved by an information retrieval (IR) subsystem, or use the type together with other question words to inject IR queries.",
"cite_spans": [
{
"start": 95,
"end": 119,
"text": "(Harabagiu et al., 2000;",
"ref_id": "BIBREF4"
},
{
"start": 120,
"end": 137,
"text": "Hovy et al., 2001",
"ref_id": "BIBREF5"
},
{
"start": 140,
"end": 158,
"text": "Kwok et al., 2001;",
"ref_id": "BIBREF8"
},
{
"start": 159,
"end": 171,
"text": "Zheng, 2002;",
"ref_id": "BIBREF15"
},
{
"start": 172,
"end": 192,
"text": "Dumais et al., 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Early successful QA systems used manuallyconstructed sets of rules to map a question to a type, exploiting clues such as the wh-word (who, where, when, how many) and the head of noun phrases associated with the main verb (what is the tallest mountain in . . .).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With the increasing popularity of statistical NLP, Li and Roth (2002) , Hacioglu and Ward (2003) and Zhang and Lee (2003) used supervised learning for question classification on a data set from UIUC that is now standard 1 . It has 6 coarse and 50 fine answer types in a two-level taxonomy, together with 5500 training and 500 test questions. Webclopedia (Hovy et al., 2001 ) has also published its taxonomy with over 140 types.",
"cite_spans": [
{
"start": 51,
"end": 69,
"text": "Li and Roth (2002)",
"ref_id": "BIBREF10"
},
{
"start": 72,
"end": 96,
"text": "Hacioglu and Ward (2003)",
"ref_id": "BIBREF3"
},
{
"start": 101,
"end": 121,
"text": "Zhang and Lee (2003)",
"ref_id": "BIBREF14"
},
{
"start": 354,
"end": 372,
"text": "(Hovy et al., 2001",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The promise of a machine learning approach is that the QA system builder can now focus on designing features and providing labeled data, rather than coding and maintaining complex heuristic rulebases. The data sets and learning systems quoted above have made question classification a welldefined and non-trivial subtask of QA for which algorithms can be evaluated precisely, isolating more complex factors at work in a complete QA system. Prior work: Compared to human performance, the accuracy of question classifiers is not high. In all studies, surprisingly slim gains have resulted from sophisticated design of features and kernels. Li and Roth (2002) used a Sparse Network of Winnows (SNoW) (Khardon et al., 1999) . Their features included tokens, parts of speech (POS), chunks (non-overlapping phrases) and named entity (NE) tags. They achieved 78.8% accuracy for 50 classes, which improved to 84.2% on using an (unpublished, to our knowledge) hand-built dictionary of \"semantically related words\". Hacioglu and Ward (2003) used linear support vector machines (SVMs) with question word 2grams and error-correcting output codes (ECOC)but no NE tagger or related word dictionary-to get 80.2-82% accuracy. Zhang and Lee (2003) used linear SVMs with all possible question word q-grams, and obtained 79.2% accuracy. They went on to design an ingenious kernel on question parse trees, which yielded visible gains for the 6 coarse labels, but only \"slight\" gains for the 50 fine classes, because \"the syntactic tree does not normally contain the information required to distinguish between the various fine categories within a coarse category\". Table 1 : Summary of % accuracy for UIUC data.",
"cite_spans": [
{
"start": 638,
"end": 656,
"text": "Li and Roth (2002)",
"ref_id": "BIBREF10"
},
{
"start": 697,
"end": 719,
"text": "(Khardon et al., 1999)",
"ref_id": "BIBREF6"
},
{
"start": 1006,
"end": 1030,
"text": "Hacioglu and Ward (2003)",
"ref_id": "BIBREF3"
},
{
"start": 1210,
"end": 1230,
"text": "Zhang and Lee (2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 1645,
"end": 1652,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) SNoW accuracy without the related word dictionary was not reported. With the related-word dictionary, it achieved 91%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) SNoW with a relatedword dictionary achieved 84.2% but the other algorithms did not use it. Our results are summarized in the last two rows, see text for details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce the notion of the answer type informer span of the question (in \u00a72): a short (typically 1-3 word) subsequence of question tokens that are adequate clues for question classification; e.g.: How much does an adult elephant weigh?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions:",
"sec_num": null
},
{
"text": "We show (in \u00a73.2) that a simple linear SVM using features derived from human-annotated informer spans beats all known learning approaches. This confirms our suspicion that the earlier approaches suffered from a feature localization problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions:",
"sec_num": null
},
{
"text": "Of course, informers are useful only if we can find ways to automatically identify informer spans. Surprisingly, syntactic pattern-matching and heuristics widely used in QA systems are not very good at capturing informer spans ( \u00a73.3). Therefore, the notion of an informer is non-trivial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions:",
"sec_num": null
},
{
"text": "Using a parse of the question sentence, we derive a novel set of multi-resolution features suitable for training a conditional random field (CRF) (Lafferty et al., 2001; Sha and Pereira, 2003) . Our feature design paradigm may be of independent interest ( \u00a74). Our informer tagger is about 85-87% accurate.",
"cite_spans": [
{
"start": 146,
"end": 169,
"text": "(Lafferty et al., 2001;",
"ref_id": "BIBREF9"
},
{
"start": 170,
"end": 192,
"text": "Sha and Pereira, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions:",
"sec_num": null
},
{
"text": "We use a meta-learning framework (Chan and Stolfo, 1993) in which a linear SVM predicts the answer type based on features derived from the original question as well as the output of the CRF. This meta-classifier beats all published numbers on standard question classification benchmarks ( \u00a74.4). Table 1 (last two rows) summarizes our main results.",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "(Chan and Stolfo, 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions:",
"sec_num": null
},
{
"text": "Our key insight is that a human can classify a question based on very few tokens gleaned from skeletal syntactic information. This is certainly true of the most trivial classes (Who wrote Hamlet? or How many dogs pull a sled at Iditarod?) but is also true of more subtle clues (How much does a rhino weigh?).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informer overview",
"sec_num": "2"
},
{
"text": "In fact, informal experiments revealed the surprising property that only one contiguous span of tokens is adequate for a human to classify a question. E.g., in the above question, a human does not even need the how much clue once the word weigh is available. In fact, \"How much does a rhino cost?\" has an identical syntax but a completely different answer type, not revealed by how much alone. The only exceptions to the single-span hypothesis are multifunction questions like \"What is the name and age of . . .\", which should be assigned to multiple answer types. In this paper we consider questions where one type suffices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informer overview",
"sec_num": "2"
},
{
"text": "Consider another question with multiple clues: Who is the CEO of IBM? In isolation, the clue who merely tells us that the answer might be a person or country or organization, while CEO is perfectly precise, rendering who unnecessary. All of the above applies a forteriori to what and which clues, which are essentially uninformative on their own, as in \"What is the distance between Pisa and Rome?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informer overview",
"sec_num": "2"
},
{
"text": "Conventional QA systems use mild analysis on the wh-clues, and need much more sophistication on the rest of the question (e.g. inferring author from wrote, and even verb subcategorization). We submit that a single, minimal, suitably-chosen contiguous span of question token/s, defined as the informer span of the question, is adequate for question classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informer overview",
"sec_num": "2"
},
{
"text": "The informer span is very sensitive to the structure of clauses, phrases and possessives in the question, as is clear from these examples (informers italicized): \"What is Bill Clinton's wife's profession\", and \"What country's president was shot at Ford's Theater\". The choice of informer spans also depends on the target classification system. Initially we wished to handle definition questions separately, and marked no informer tokens in \"What is digitalis\". However, what is is an excellent informer for the UIUC class DESC:def (description, definition).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informer overview",
"sec_num": "2"
},
{
"text": "We propose a meta-learning approach ( \u00a73.1) in which the SVM can use features from the original question as well as its informer span. We show ( \u00a73.2) that human-annotated informer spans lead to large improvements in accuracy. However, we show ( \u00a73.3) that simple heuristic extraction rules commonly used in QA systems (e.g. head of noun phrase following wh-word) cannot provide informers that are nearly as useful. This naturally leads us to designing an informer tagger in \u00a74. Figure 1 shows our meta-learning (Chan and Stolfo, 1993) framework. The combiner is a linear multi-class one-vs-one SVM 2 , as in the Zhang and Lee (2003) baseline. We did not use ECOC (Hacioglu and Ward, 2003) because the reported gain is less than 1%.",
"cite_spans": [
{
"start": 512,
"end": 535,
"text": "(Chan and Stolfo, 1993)",
"ref_id": "BIBREF0"
},
{
"start": 613,
"end": 633,
"text": "Zhang and Lee (2003)",
"ref_id": "BIBREF14"
},
{
"start": 664,
"end": 689,
"text": "(Hacioglu and Ward, 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 479,
"end": 487,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The meta-learning approach",
"sec_num": "3"
},
{
"text": "The word feature extractor selects unigrams and q-grams from the question. In our experience, q = 1 or q = 2 were best; if unspecified, all possible qgrams were used. Through tuning, we also found that the SVM \"C\" parameter (used to trade between training data fit and model complexity) must be set to 300 to achieve their published baseline numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The meta-learning approach",
"sec_num": "3"
},
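{
"text": "A minimal illustrative sketch (not part of the original paper) of the word q-gram SVM baseline just described, using scikit-learn in place of the authors' LIBSVM setup; the data variables are assumed to be Python lists of question strings and class labels.\n\n# Unigram/bigram one-vs-one linear SVM baseline (assumes scikit-learn is installed).\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.multiclass import OneVsOneClassifier\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.svm import LinearSVC\n\ndef build_question_classifier(train_questions, train_labels):\n    # Boolean word unigram and bigram features, mirroring the 'Question bigrams' setup.\n    vectorizer = CountVectorizer(ngram_range=(1, 2), binary=True)\n    # C = 300 follows the tuning reported above; one-vs-one mirrors the paper's combiner.\n    model = make_pipeline(vectorizer, OneVsOneClassifier(LinearSVC(C=300)))\n    model.fit(train_questions, train_labels)\n    return model\n\n# Hypothetical usage:\n# clf = build_question_classifier(uiuc_train_questions, uiuc_train_labels)\n# predicted = clf.predict(uiuc_test_questions)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The meta-learning approach",
"sec_num": "3"
},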
{
"text": "We propose two very simple ways to derive features from informers for use with SVMs. Initially, assume that perfect informers are known for all questions; Figure 1 : The meta-learning approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adding informer features",
"sec_num": "3.1"
},
{
"text": "later ( \u00a74) we study how to predict informers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding informer features",
"sec_num": "3.1"
},
{
"text": "Informer q-grams: This comprises of all word qgrams within the informer span, for all possible q. E.g., such features enable effective exploitation of informers like length or height to classify to the NUMBER:distance class in the UIUC data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding informer features",
"sec_num": "3.1"
},
{
"text": "Informer q-gram hypernyms: For each word or compound within the informer span that is a Word-Net noun, we add all hypernyms of all senses. The intuition is that the informer (e.g. author, cricketer, CEO) is often narrower than a broad question class (HUMAN:individual). Following hypernym links up to person via WordNet produces a more reliably correlated feature. Given informers, other question words might seem useless to the classifier. However, retaining regular features from other question words is an excellent idea for the following reasons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding informer features",
"sec_num": "3.1"
},
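{
"text": "A minimal sketch (not from the paper) of the informer hypernym features, assuming NLTK's WordNet interface; the paper does not specify which WordNet API was used, and compounds are handled here only as single tokens.\n\n# Hypernym features for informer tokens via WordNet (assumes nltk with the wordnet corpus).\nfrom nltk.corpus import wordnet as wn\n\ndef informer_hypernym_features(informer_tokens):\n    # For every noun sense of every informer token, emit the synset and all of its\n    # hypernyms, so that e.g. 'author' or 'CEO' also fires a feature for person.n.01.\n    features = set()\n    for token in informer_tokens:\n        for synset in wn.synsets(token, pos=wn.NOUN):\n            features.add('HYPER_' + synset.name())\n            for hyper in synset.closure(lambda s: s.hypernyms()):\n                features.add('HYPER_' + hyper.name())\n    return sorted(features)\n\n# Example: informer_hypernym_features(['author']) includes 'HYPER_person.n.01'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding informer features",
"sec_num": "3.1"
},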
{
"text": "First, we kept word sense disambiguation (WSD) outside the scope of this work because WSD entails computation costs, and is unlikely to be reliable on short single-sentence questions. Questions like How long . . . or Which bank . . . can thus become ambiguous and corrupt the informer hypernym features. Additional question words can often help nail the correct class despite the feature corruption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding informer features",
"sec_num": "3.1"
},
{
"text": "Second, while our CRF-based approach to informer span tagging is better than obvious alternatives, it still has a 15% error rate. For the questions where the CRF prediction is wrong, features from non-informer words give the SVM an opportunity to still pick the correct question class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding informer features",
"sec_num": "3.1"
},
{
"text": "Word features: Based on the above discussion, one boolean SVM feature is created for every word q-gram over all question tokens. In experiments, we found bigrams (q = 2) to be most effective, closely followed by unigrams (q = 1). As with informers, we can also use hypernyms of regular words as SVM features (marked \"Question bigrams + hypernyms\" in Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 350,
"end": 357,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adding informer features",
"sec_num": "3.1"
},
{
"text": "We first wished to test the hypothesis that identifying informer spans to an SVM learner can improve classification accuracy. Over and above the class labels, we had two volunteers tag the 6000 UIUC questions with informer spans (which we call \"perfect\"-agreement was near-perfect).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benefits from \"perfect\" informers",
"sec_num": "3.2"
},
{
"text": "Coarse Table 2 : Percent accuracy with linear SVMs, \"perfect\" informer spans, and various feature encodings.",
"cite_spans": [],
"ref_spans": [
{
"start": 7,
"end": 14,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "Observe in Table 2 that the unigram baseline is already quite competitive with the best prior numbers, and exploiting perfect informer spans beats all known numbers. It is clear that both informer qgrams and informer hypernyms are very valuable features for question classification. The fact that no improvement was obtained with over Question bigrams using Question hypernyms highlights the importance of choosing a few relevant tokens as informers and designing suitable features on them. Table 3 (columns b and e) shows the benefits from perfect informers broken down into broad question types. Questions with what as the trigger are the biggest beneficiaries, and they also form by far the most frequent category.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 2",
"ref_id": null
},
{
"start": 491,
"end": 498,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "The remaining question, one that we address in the rest of the paper, is whether we can effectively and accurately automate the process of providing informer spans to the question classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "In \u00a74 we will propose a non-trivial solution to the informer-tagging problem. Before that, we must jus-tify that such machinery is indeed required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informers provided by heuristics",
"sec_num": "3.3"
},
{
"text": "Some leading QA systems extract words very similar in function to informers from the parse tree of the question. Some (Singhal et al., 2000) pick the head of the first noun phrase detected by a shallow parser, while others use the head of the noun phrase adjoining the main verb (Ramakrishnan et al., 2004 ). Yet others (Harabagiu et al., 2000; Hovy et al., 2001 ) use hundreds of (unpublished to our knowledge) hand-built pattern-matching rules on the output of a full-scale parser.",
"cite_spans": [
{
"start": 118,
"end": 140,
"text": "(Singhal et al., 2000)",
"ref_id": "BIBREF13"
},
{
"start": 279,
"end": 305,
"text": "(Ramakrishnan et al., 2004",
"ref_id": "BIBREF11"
},
{
"start": 320,
"end": 344,
"text": "(Harabagiu et al., 2000;",
"ref_id": "BIBREF4"
},
{
"start": 345,
"end": 362,
"text": "Hovy et al., 2001",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Informers provided by heuristics",
"sec_num": "3.3"
},
{
"text": "A natural baseline is to use these extracted words, which we call \"heuristic informers\", with an SVM just like we used \"perfect\" informers. All that remains is to make the heuristics precise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informers provided by heuristics",
"sec_num": "3.3"
},
{
"text": "How: For questions starting with how, we use the bigram starting with how unless the next word is a verb. Wh: If the wh-word is not how, what or which, use the wh-word in the question as a separate feature. WhNP: For questions having what and which, use the WHNP if it encloses a noun. WHNP is the Noun Phrase corresponding to the Wh-word, given by a sentence parser (see \u00a74.2). NP1: Otherwise, for what and which questions, the first (leftmost) noun phrase is added to yet another feature subspace. Table 3 (columns c and f) shows that these already-messy heuristic informers do not capture the same signal quality as \"perfect\" informers. Our findings corroborate Li and Roth (2002) , who report little benefit from adding head chunk features for the fine classification task.",
"cite_spans": [
{
"start": 665,
"end": 683,
"text": "Li and Roth (2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 500,
"end": 507,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Informers provided by heuristics",
"sec_num": "3.3"
},
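{
"text": "A rough sketch (not the original implementation) of the How / Wh / WhNP / NP1 heuristics above, assuming the question parse is available as an nltk.Tree built from the parser output; the behaviour for cases the rules leave unspecified is an assumption.\n\nfrom nltk.tree import Tree\n\ndef first_subtree(tree, label):\n    # Leftmost subtree with the given label (preorder traversal).\n    for sub in tree.subtrees():\n        if sub.label() == label:\n            return sub\n    return None\n\ndef heuristic_informer(tree):\n    tagged = tree.pos()                            # [(word, POS tag), ...]\n    words = [w for w, t in tagged]\n    first = words[0].lower()\n    if first == 'how':\n        # How: the bigram starting with 'how', unless the next word is a verb.\n        if len(tagged) > 1 and not tagged[1][1].startswith('VB'):\n            return words[:2]\n        return []                                  # unspecified by the rules; assumed empty\n    if first not in ('what', 'which'):\n        return [words[0]]                          # Wh: who / when / where / ...\n    whnp = first_subtree(tree, 'WHNP')             # WhNP: a WHNP enclosing a noun\n    if whnp is not None and any(t.startswith('NN') for _, t in whnp.pos()):\n        return whnp.leaves()\n    np1 = first_subtree(tree, 'NP')                # NP1: leftmost noun phrase\n    return np1.leaves() if np1 is not None else []\n\n# Hypothetical usage: heuristic_informer(Tree.fromstring(parse_string))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informers provided by heuristics",
"sec_num": "3.3"
},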
{
"text": "Moreover, observe that using heuristic informer features without any word features leads to rather poor performance (column c), unlike using perfect informers (column b) or even CRF-predicted informer (column d, see \u00a74). These clearly establish that the notion of an informer is nontrivial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informers provided by heuristics",
"sec_num": "3.3"
},
{
"text": "Given informers are useful but nontrivial to recognize, the next natural question is, how can we learn to identify them automatically? From earlier sections, it is clear (and we give evidence later, see Table 5 ) that sequence and syntax information will be important. We will model informer span identification as a sequence tagging problem. An automaton makes probabilistic transitions between hidden states y, one of which is an \"informer generating state\", and emits tokens x. We observe the tokens and have to guess which were produced from the \"informer generating state\".",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 210,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Using CRFs to label informers",
"sec_num": "4"
},
{
"text": "Hidden Markov models are extremely popular for such applications, but recent work has shown that conditional random fields (CRFs) (Lafferty et al., 2001; Sha and Pereira, 2003) have a consistent advantage over traditional HMMs in the face of many redundant features. We refer the reader to the above references for a detailed treatment of CRFs. Here we will regard a CRF as largely a black box 3 .",
"cite_spans": [
{
"start": 130,
"end": 153,
"text": "(Lafferty et al., 2001;",
"ref_id": "BIBREF9"
},
{
"start": 154,
"end": 176,
"text": "Sha and Pereira, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using CRFs to label informers",
"sec_num": "4"
},
{
"text": "To train a CRF, we need a set of state nodes, a transition graph on these nodes, and tokenized text where each token is assigned a state. Once the CRF is trained, it can be applied to a token sequence, pro- 3 We used http://crf.sourceforge.net/ ducing a predicted state sequence.",
"cite_spans": [
{
"start": 207,
"end": 208,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using CRFs to label informers",
"sec_num": "4"
},
{
"text": "We started with the common 2-state \"in/out\" model used in information extraction, shown in the left half of Figure 2 . State \"1\" is the informer-generating state. Either state can be initial and final (double circle) states. The 2-state model can be myopic. Consider the question pair A: What country is the largest producer of wheat? B: Name the largest producer of wheat The i\u00b11 context of producer is identical in A and B. In B, for want of a better informer, we would want producer to be flagged as the informer, although it might refer to a country, person, animal, company, etc. But in A, country is far more precise.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "State transition models",
"sec_num": "4.1"
},
{
"text": "Any 2-state model that depends on positions i \u00b1 1 to define features will fail to distinguish between A and B, and might select both country and producer in A. As we have seen with heuristic informers, polluting the informer pool can significantly hurt SVM accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State transition models",
"sec_num": "4.1"
},
{
"text": "Therefore we also use the 3-state \"begin/in/out\" (BIO) model. The initial state cannot be \"2\" in the 3-state model; all states can be final. The 3-state model allows at most one informer span. Once the 3-state model chooses country as the informer, it is unlikely to stretch state 1 up to producer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State transition models",
"sec_num": "4.1"
},
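{
"text": "A small sketch (added for illustration) of the 3-state transition structure described above, with state 0 before the informer, state 1 generating the informer, and state 2 after it, matching the labels 0 0 0 1 1 2 2 in Table 4; whether a direct 0-to-2 transition is allowed is not stated in the text and is omitted here.\n\n# 3-state begin/in/out transition model: at most one contiguous informer span.\nALLOWED = {0: {0, 1}, 1: {1, 2}, 2: {2}}   # outgoing transitions per state\nINITIAL = {0, 1}                           # state 2 cannot be initial; all states may be final\n\ndef is_valid_state_sequence(states):\n    if not states or states[0] not in INITIAL:\n        return False\n    return all(nxt in ALLOWED[cur] for cur, nxt in zip(states, states[1:]))\n\n# is_valid_state_sequence([0, 0, 0, 1, 1, 2, 2])  -> True  (the labeling of Table 4)\n# is_valid_state_sequence([0, 1, 0, 1])           -> False (two informer spans)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State transition models",
"sec_num": "4.1"
},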
{
"text": "There is no natural significance to using four or more states. Besides, longer range syntax dependencies are already largely captured by the parser. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State transition models",
"sec_num": "4.1"
},
{
"text": "Sentences with similar parse trees are likely to have the informer in similar positions. This was the intuition behind Zhang et al.'s tree kernel, and is also our starting point. We used the Stanford Lexicalized Parser (Klein and Manning, 2003) to parse the question. (We assume familiarity with parse tree notation for lack of space.) Figure 3 shows a sample parse tree organized in levels. Our first step was to trans- i 1 2 3 4 5 6 7 yi 0 0 0 1 1 2 2 xi What is the capital city of Japan \u2193 Features for xis 1 WP,1 VBZ,1 DT,1 NN,1 NN,1 IN,1 NNP,1 2 WHNP,1 VP,1 NP,1 NP,1 NP,1 Null,1 NP,2 3 Null,1 Null,1 Null,1 Null,1 Null,1 PP,1 PP,1 4 Null,1 Null,1 NP,1 NP,1 NP,1 NP,1 NP,1 5 Null,1 SQ,1 SQ,1 SQ,1 SQ,1 SQ,1 SQ,1 6 SBARQ SBARQ SBARQ SBARQ SBARQ SBARQ SBARQ Table 4 : A multi-resolution tabular view of the question parse showing tag and num attributes. capital city is the informer span with y = 1. late the parse tree into an equivalent multi-resolution tabular format shown in Table 4 .",
"cite_spans": [
{
"start": 219,
"end": 244,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 336,
"end": 344,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 421,
"end": 656,
"text": "i 1 2 3 4 5 6 7 yi 0 0 0 1 1 2 2 xi What is the capital city of Japan \u2193 Features for xis 1 WP,1 VBZ,1 DT,1 NN,1 NN,1 IN,1 NNP,1 2 WHNP,1 VP,1 NP,1 NP,1 NP,1 Null,1 NP,2 3 Null,1 Null,1 Null,1 Null,1 Null,1 PP,1",
"ref_id": "TABREF4"
},
{
"start": 786,
"end": 793,
"text": "Table 4",
"ref_id": null
},
{
"start": 1008,
"end": 1015,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features from a parse of the question",
"sec_num": "4.2"
},
{
"text": "A labeled question comprises the token sequence x i ; i = 1, . . . and the label sequence y i , i = 1, . . . Each x i leads to a column vector of observations. Therefore we use matrix notation to write down x: A table cell is addressed as x[i, ] where i is the token position (column index) and is the level or row index, 1-6 in this example. (Although the parse tree can be arbitrarily deep, we found that using features from up to level = 2 was adequate.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cells and attributes:",
"sec_num": null
},
{
"text": "Intuitively, much of the information required for spotting an informer can be obtained from the part of speech of the tokens and phrase/clause attachment information. Conversely, specific word information is generally sparse and misleading; the same word may or may not be an informer depending on its position. E.g., \"What birds eat snakes?\" and \"What snakes eat birds?\" have the same words but different informers. Accordingly, we observe two properties at each cell: tag: The syntactic class assigned to the cell by the parser, e.g. x [4, 2] .tag = NP. It is well-known that POS and chunk information are major clues to informer-tagging, specifically, informers are often nouns or noun phrases. num: Many heuristics exploit the fact that the first NP is known to have a higher chance of containing informers than subsequent NPs. To capture this positional information, we define num of a cell at [i, ] as one plus the number of distinct contiguous chunks to the left of [i, ] with tags equal to x [4, 2] .tag. E.g., at level 2 in the table above, the capital city forms the first NP, while Japan forms the second NP. Therefore x [7, 2] .num = 2.",
"cite_spans": [
{
"start": 538,
"end": 541,
"text": "[4,",
"ref_id": null
},
{
"start": 542,
"end": 544,
"text": "2]",
"ref_id": null
},
{
"start": 899,
"end": 904,
"text": "[i, ]",
"ref_id": null
},
{
"start": 1000,
"end": 1003,
"text": "[4,",
"ref_id": null
},
{
"start": 1004,
"end": 1006,
"text": "2]",
"ref_id": null
},
{
"start": 1132,
"end": 1135,
"text": "[7,",
"ref_id": null
},
{
"start": 1136,
"end": 1138,
"text": "2]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cells and attributes:",
"sec_num": null
},
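{
"text": "A minimal sketch (not from the paper) of the num attribute for one level of the table: given the row of tags at a level (None standing in for Null cells), num is the 1-based ordinal, among chunks carrying the same tag, of the contiguous chunk a position belongs to, i.e. one plus the number of earlier same-tag chunks.\n\ndef num_attributes(tag_row):\n    # tag_row: e.g. level 2 of Table 4 -> ['WHNP', 'VP', 'NP', 'NP', 'NP', None, 'NP']\n    nums, chunk_counts = [], {}\n    prev_tag = object()                    # sentinel that never equals a real tag\n    for tag in tag_row:\n        if tag != prev_tag:                # a new contiguous chunk starts here\n            chunk_counts[tag] = chunk_counts.get(tag, 0) + 1\n        nums.append(chunk_counts[tag])\n        prev_tag = tag\n    return nums\n\n# num_attributes(['WHNP', 'VP', 'NP', 'NP', 'NP', None, 'NP']) == [1, 1, 1, 1, 1, 1, 2]\n# so x[7, 2].num = 2, as in the example above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cells and attributes:",
"sec_num": null
},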
{
"text": "In conditional models, it is notationally convenient to express features as functions on (x i , y i ). To one unfamiliar with CRFs, it may seem strange that y i is passed as an argument to features. At training time, y i is indeed known, and at testing time, the CRF algorithm efficiently finds the most probable sequence of y i s using a Viterbi search. True labels are not revealed to the CRF at testing time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cells and attributes:",
"sec_num": null
},
{
"text": "Cell features IsTag and IsNum: E.g., the observation \"y 4 = 1 and x [4, 2] .tag = NP\" is captured by the statement that \"position 4 fires the feature IsTag 1,NP,2 \" (which has a boolean value). There is an IsTag y,t, feature for each (y, t, ) triplet. Similarly, for every possible state y, every possible num value n (up to some maximum horizon), and every level , we define boolean features IsNum y,n, . E.g., position 7 fires the feature IsNum 2,2,2 in the 3-state model, capturing the statement \"x [7, 2] .num = 2 and y 7 = 2\".",
"cite_spans": [
{
"start": 68,
"end": 71,
"text": "[4,",
"ref_id": null
},
{
"start": 72,
"end": 74,
"text": "2]",
"ref_id": null
},
{
"start": 502,
"end": 505,
"text": "[7,",
"ref_id": null
},
{
"start": 506,
"end": 508,
"text": "2]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cells and attributes:",
"sec_num": null
},
{
"text": "IsNextTag: Context can be exploited by a CRF by coupling the state at position i with observations at positions adjacent to position i (extending to larger windows did not help). To capture this, we use more boolean features: position 4 fires the feature IsPrevTag 1,DT,1 because x [3, 1] .tag = DT and y 4 = 1. Position 4 also fires IsPrevTag 1,NP,2 because x [3, 2] .tag = NP and y 4 = 1. Similarly we define a IsNextTag y,t, feature for each possible (y, t, ) triple.",
"cite_spans": [
{
"start": 282,
"end": 285,
"text": "[3,",
"ref_id": null
},
{
"start": 286,
"end": 288,
"text": "1]",
"ref_id": null
},
{
"start": 361,
"end": 364,
"text": "[3,",
"ref_id": null
},
{
"start": 365,
"end": 367,
"text": "2]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adjacent cell features IsPrevTag and",
"sec_num": null
},
{
"text": "i fires feature IsEdge u,v if y i\u22121 = u and y i = v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State transition features IsEdge: Position",
"sec_num": null
},
{
"text": "There is one such feature for each state-pair (u, v) allowed by the transition graph. In addition we have sentinel features IsBegin u and IsEnd u marking the beginning and end of the token sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State transition features IsEdge: Position",
"sec_num": null
},
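{
"text": "A condensed sketch (not the authors' code, which used http://crf.sourceforge.net/) of how the IsTag, IsNum, IsPrevTag and IsNextTag observations can be fed to an off-the-shelf linear-chain CRF via sklearn-crfsuite; each question is assumed to be pre-processed into a list of per-token dicts mapping level to a (tag, num) pair, as in Table 4, and the coupling of observations with states, as well as IsEdge-style transition features, is handled internally by the library.\n\nimport sklearn_crfsuite\n\ndef token_features(cells, i, levels=(1, 2)):\n    feats = {}\n    for lvl in levels:\n        tag, num = cells[i].get(lvl, ('Null', 1))\n        feats['IsTag_%d_%s' % (lvl, tag)] = 1.0\n        feats['IsNum_%d_%d' % (lvl, num)] = 1.0\n        if i > 0:\n            feats['IsPrevTag_%d_%s' % (lvl, cells[i - 1].get(lvl, ('Null', 1))[0])] = 1.0\n        if i + 1 < len(cells):\n            feats['IsNextTag_%d_%s' % (lvl, cells[i + 1].get(lvl, ('Null', 1))[0])] = 1.0\n    if i == 0:\n        feats['IsBegin'] = 1.0                 # begin sentinel\n    if i == len(cells) - 1:\n        feats['IsEnd'] = 1.0                   # end sentinel\n    return feats\n\ndef question_features(cells):\n    return [token_features(cells, i) for i in range(len(cells))]\n\n# X: one question_features(...) list per question; y: label sequences such as\n# ['0', '0', '0', '1', '1', '2', '2'] for the 3-state model.\ncrf = sklearn_crfsuite.CRF(algorithm='lbfgs', max_iterations=100, all_possible_transitions=True)\n# crf.fit(X_train, y_train); predicted = crf.predict(X_test)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State transition features IsEdge",
"sec_num": null
},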
{
"text": "We study the accuracy of our CRF-based informer tagger wrt human informer annotations. In the next section we will see the effect of CRF tagging on question classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informer-tagging accuracy",
"sec_num": "4.3"
},
{
"text": "There are at least two useful measures of informer-tagging accuracy. Each question has a known set I k of informer tokens, and gets a set of tokens I c flagged as informers by the CRF. For each question, we can grant ourself a reward of 1 if I c = I k , and 0 otherwise. In \u00a73.1, informers were regarded as a separate (high-value) bag of words. Therefore, overlap between I c and I k would be a reasonable predictor of question classification accuracy. We use the Jaccard similarity |I k \u2229I c |/|I k \u222aI c |. Table 5 shows the effect of using diverse feature sets. Table 6 : Effect of number of CRF states, and comparison with the heuristic baseline (Jaccard accuracy expressed as %). Table 6 shows that the 3-state CRF performs much better than the 2-state CRF, especially on difficult questions with what and which. It also compares the Jaccard accuracy of informers found by the CRF vs. informers found by the heuristics described in \u00a73.3. Again we see a clear superiority of the CRF approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 508,
"end": 515,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 564,
"end": 571,
"text": "Table 6",
"ref_id": null
},
{
"start": 684,
"end": 691,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Informer-tagging accuracy",
"sec_num": "4.3"
},
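{
"text": "A small sketch (added for illustration) of the two informer-tagging accuracy measures used above, with gold and predicted informer tokens given as plain Python collections.\n\ndef exact_match(gold, predicted):\n    # Reward of 1 only if the predicted informer token set equals the annotated one.\n    return 1.0 if set(gold) == set(predicted) else 0.0\n\ndef jaccard(gold, predicted):\n    # |Ik intersect Ic| / |Ik union Ic|; two empty sets count as a perfect match.\n    gold, predicted = set(gold), set(predicted)\n    union = gold | predicted\n    return 1.0 if not union else len(gold & predicted) / len(union)\n\n# jaccard(['capital', 'city'], ['city']) == 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informer-tagging accuracy",
"sec_num": "4.3"
},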
{
"text": "Unlike the heuristic approach, the CRF approach is relatively robust to the parser emitting a somewhat incorrect parse tree, which is not uncommon. The heuristic approach picks the \"easy\" informer, who, over the better one, CEO, in \"Who is the CEO of IBM\". Its bias toward the NP-head can also be a problem, as in \"What country's president . . .\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informer-tagging accuracy",
"sec_num": "4.3"
},
{
"text": "We have already seen in \u00a73.2 that perfect knowledge of informers can be a big help. Because the CRF can make mistakes, the margin may decrease. In this section we study this issue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question classification accuracy",
"sec_num": "4.4"
},
{
"text": "We used questions with human-tagged informers ( \u00a73.2) to train a CRF. The CRF was applied back on the training questions to get informer predictions, which were used to train the 1-vs-1 SVM metalearner ( \u00a73). Using CRF-tagged and not humantagged informers may seem odd, but this lets the SVM learn and work around systematic errors in CRF outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question classification accuracy",
"sec_num": "4.4"
},
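{
"text": "A minimal sketch (not from the paper) of how CRF-predicted informer tokens can be handed to the meta-classifier: the informer tokens are re-emitted under a distinguishing prefix so that, after vectorization, they occupy their own feature subspace alongside the ordinary question q-grams; the prefix and helper name are illustrative.\n\ndef augment_with_informers(question_tokens, informer_tokens):\n    # e.g. ['Who', 'is', 'the', 'CEO', 'of', 'IBM'] + ['CEO']\n    #   -> 'Who is the CEO of IBM INF_CEO'\n    return ' '.join(list(question_tokens) + ['INF_' + t for t in informer_tokens])\n\n# The augmented strings are then vectorized and classified by the same one-vs-one\n# linear SVM as in the earlier baseline sketch, with informer hypernym features\n# (prefixed analogously) added in the same way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question classification accuracy",
"sec_num": "4.4"
},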
{
"text": "Results are shown in columns d and g of Table 3 . Despite the CRF tagger having about 15% error, we obtained 86.2% SVM accuracy which is rather close to the the SVM accuracy of 88% with perfect informers.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Question classification accuracy",
"sec_num": "4.4"
},
{
"text": "The CRF-generated tags, being on the training data, might be more accurate that would be for unseen test cases, potentially misleading the SVM. This turns out not to be a problem: clearly we are very close to the upper bound of 88%. In fact, anecdotal evidence suggests that using CRF-assigned tags actually helped the SVM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question classification accuracy",
"sec_num": "4.4"
},
{
"text": "We presented a new approach to inferring the type of the answer sought by a well-formed natural language question. We introduced the notion of a span of informer tokens and extract it using a sequential graphical model with a novel feature representation derived from the parse tree of the question. Our approach beats the accuracy of recent algorithms, even ones that used max-margin methods with sophisticated kernels defined on parse trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "An intriguing feature of our approach is that when an informer (actor) is narrower than the ques-tion class (person), we can exploit direct hypernymy connections like actor to Tom Hanks, if available. Existing knowledge bases like WordNet and Wikipedia, combined with intense recent work (Etzioni et al., 2004) on bootstrapping is-a hierarchies, can thus lead to potentially large benefits.",
"cite_spans": [
{
"start": 288,
"end": 310,
"text": "(Etzioni et al., 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://l2r.cs.uiuc.edu/\u02dccogcomp/Data/ QA/QC/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.csie.ntu.edu.tw/\u02dccjlin/ libsvm/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments: Thanks to Sunita Sarawagi for help with CRFs, and the reviewers for improving the presentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Experiments in multistrategy learning by meta-learning",
"authors": [
{
"first": "K",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Stolfo",
"suffix": ""
}
],
"year": 1993,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "314--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K Chan and S. J Stolfo. 1993. Experiments in mul- tistrategy learning by meta-learning. In CIKM, pages 314-323, Washington, DC.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Web question answering: Is more always better",
"authors": [
{
"first": "S",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2002,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "291--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S Dumais, M Banko, E Brill, J Lin, and A Ng. 2002. Web question answering: Is more always better? In SIGIR, pages 291-298.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Web-scale information extraction in KnowItAll",
"authors": [
{
"first": "",
"middle": [],
"last": "O Etzioni",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cafarella",
"suffix": ""
}
],
"year": 2004,
"venue": "WWW Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O Etzioni, M Cafarella, et al. 2004. Web-scale informa- tion extraction in KnowItAll. In WWW Conference, New York. ACM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Question classification with support vector machines and error correcting codes",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ward",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT",
"volume": "",
"issue": "",
"pages": "28--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K Hacioglu and W Ward. 2003. Question classifica- tion with support vector machines and error correcting codes. In HLT, pages 28-30.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "FALCON: Boosting knowledge for answer engines",
"authors": [
{
"first": "S",
"middle": [],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Moldovan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pasca",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Girju",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Morarescu",
"suffix": ""
}
],
"year": 2000,
"venue": "TREC 9",
"volume": "",
"issue": "",
"pages": "479--488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S Harabagiu, D Moldovan, M Pasca, R Mihalcea, M Sur- deanu, R Bunescu, R Girju, V Rus, and P Morarescu. 2000. FALCON: Boosting knowledge for answer en- gines. In TREC 9, pages 479-488. NIST.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Question answering in Webclopedia",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Gerber",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "C.-Y",
"middle": [],
"last": "Junk",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2001,
"venue": "TREC 9",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E Hovy, L Gerber, U Hermjakob, M Junk, and C.-Y Lin. 2001. Question answering in Webclopedia. In TREC 9. NIST.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Relational learning for NLP using linear threshold elements",
"authors": [
{
"first": "",
"middle": [],
"last": "R Khardon",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Valiant",
"suffix": ""
}
],
"year": 1999,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Khardon, D Roth, and L. G Valiant. 1999. Relational learning for NLP using linear threshold elements. In IJCAI.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL",
"volume": "41",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D Klein and C. D Manning. 2003. Accurate unlexical- ized parsing. In ACL, volume 41, pages 423-430.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Scaling question answering to the Web",
"authors": [
{
"first": "C",
"middle": [],
"last": "Kwok",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2001,
"venue": "WWW Conference",
"volume": "10",
"issue": "",
"pages": "150--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Kwok, O Etzioni, and D. S Weld. 2001. Scaling ques- tion answering to the Web. In WWW Conference, vol- ume 10, pages 150-161, Hong Kong.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Lafferty, A McCallum, and F Pereira. 2001. Con- ditional random fields: Probabilistic models for seg- menting and labeling sequence data. In ICML.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning question classifiers",
"authors": [
{
"first": "X",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2002,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "556--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X Li and D Roth. 2002. Learning question classifiers. In COLING, pages 556-562.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Is question answering an acquired skill",
"authors": [
{
"first": "",
"middle": [],
"last": "G Ramakrishnan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Paranjpe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2004,
"venue": "WWW Conference",
"volume": "",
"issue": "",
"pages": "111--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G Ramakrishnan, S Chakrabarti, D. A Paranjpe, and P Bhattacharyya. 2004. Is question answering an ac- quired skill? In WWW Conference, pages 111-120, New York.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Shallow parsing with conditional random fields",
"authors": [
{
"first": "F",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "134--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F Sha and F Pereira. 2003. Shallow parsing with condi- tional random fields. In HLT-NAACL, pages 134-141.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "AT&T at TREC-8",
"authors": [
{
"first": "A",
"middle": [],
"last": "Singhal",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bacchiani",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hindle",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2000,
"venue": "TREC 8",
"volume": "",
"issue": "",
"pages": "317--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Singhal, S Abney, M Bacchiani, M Collins, D Hindle, and F Pereira. 2000. AT&T at TREC-8. In TREC 8, pages 317-330. NIST.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Question classification using support vector machines",
"authors": [
{
"first": "D",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "26--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D Zhang and W Lee. 2003. Question classification using support vector machines. In SIGIR, pages 26-32.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "AnswerBus question answering system",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2002,
"venue": "HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z Zheng. 2002. AnswerBus question answering system. In HLT.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "2-and 3-state transition models."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Stanford Parser output example."
},
"TABREF4": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": ""
},
"TABREF6": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">\u2022 IsTag features are not adequate.</td><td/></tr><tr><td colspan=\"5\">\u2022 IsNum features improve accuracy 10-20%.</td></tr><tr><td colspan=\"5\">\u2022 IsPrevTag and IsNextTag (\"+Prev</td></tr><tr><td colspan=\"4\">+Next\") add over 20% of accuracy.</td><td/></tr><tr><td colspan=\"5\">\u2022 IsEdge transition features help exploit</td></tr><tr><td colspan=\"5\">Markovian dependencies and adds another</td></tr><tr><td colspan=\"5\">10-15% accuracy, showing that sequential</td></tr><tr><td colspan=\"3\">models are indeed required.</td><td/><td/></tr><tr><td>Type</td><td colspan=\"4\">#Quest. Heuristic 2-state 3-state</td></tr><tr><td/><td/><td>Informers</td><td>CRF</td><td>CRF</td></tr><tr><td>what</td><td>349</td><td>57.3</td><td>68.2</td><td>83.4</td></tr><tr><td>which</td><td>11</td><td>77.3</td><td>83.3</td><td>77.2</td></tr><tr><td>when</td><td>28</td><td>75.0</td><td>98.8</td><td>100.0</td></tr><tr><td>where</td><td>27</td><td>84.3</td><td>100.0</td><td>96.3</td></tr><tr><td>who</td><td>47</td><td>55.0</td><td>47.2</td><td>96.8</td></tr><tr><td>how *</td><td>32</td><td>90.6</td><td>88.5</td><td>93.8</td></tr><tr><td>rest</td><td>6</td><td>66.7</td><td>66.7</td><td>77.8</td></tr><tr><td>Total</td><td>500</td><td>62.4</td><td>71.2</td><td>86.7</td></tr></table>",
"html": null,
"text": "Effect of feature choices."
}
}
}
}