{ "paper_id": "2016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:04:06.897719Z" }, "title": "Word Substitution in Short Answer Extraction: A WordNet-based Approach", "authors": [ { "first": "Qingqing", "middle": [], "last": "Cai", "suffix": "", "affiliation": { "laboratory": "", "institution": "IPsoft / New York", "location": { "region": "NY", "country": "USA" } }, "email": "" }, { "first": "James", "middle": [], "last": "Gung", "suffix": "", "affiliation": { "laboratory": "", "institution": "IPsoft / New York", "location": { "region": "NY", "country": "USA" } }, "email": "" }, { "first": "Maochen", "middle": [], "last": "Guan", "suffix": "", "affiliation": { "laboratory": "", "institution": "IPsoft / New York", "location": { "region": "NY", "country": "USA" } }, "email": "" }, { "first": "Gerald", "middle": [], "last": "Kurlandski", "suffix": "", "affiliation": { "laboratory": "", "institution": "IPsoft / New York", "location": { "region": "NY", "country": "USA" } }, "email": "" }, { "first": "Adam", "middle": [], "last": "Pease", "suffix": "", "affiliation": { "laboratory": "", "institution": "IPsoft / New York", "location": { "region": "NY", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe the implementation of a short answer extraction system. It consists of a simple sentence selection front-end and a two phase approach to answer extraction from a sentence. In the first phase sentence classification is performed with a classifier trained with the passive aggressive algorithm utilizing the UIUC dataset and taxonomy and a feature set including word vectors. This phase outperforms the current best published results on that dataset. In the second phase, a sieve algorithm consisting of a series of increasingly general extraction rules is applied, using WordNet to find word types aligned with the UIUC classifications determined in the first phase. Some very preliminary performance metrics are presented.", "pdf_parse": { "paper_id": "2016", "_pdf_hash": "", "abstract": [ { "text": "We describe the implementation of a short answer extraction system. It consists of a simple sentence selection front-end and a two phase approach to answer extraction from a sentence. In the first phase sentence classification is performed with a classifier trained with the passive aggressive algorithm utilizing the UIUC dataset and taxonomy and a feature set including word vectors. This phase outperforms the current best published results on that dataset. In the second phase, a sieve algorithm consisting of a series of increasingly general extraction rules is applied, using WordNet to find word types aligned with the UIUC classifications determined in the first phase. Some very preliminary performance metrics are presented.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Short Answer Extraction refers to a set of information retrieval techniques that retrieve a short answer to a question from a sentence. For example, if we have the following question and answer sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) Q: Who was the first president of the United States? 
A: George Washington was the first president of the United States.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "we want to extract just the phrase \"George Washington\". But what if we have a mismatch in language between question and answer? What is an appropriate measure for word similarity or substitution in question answering? If we have the question answer pair", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) \"Bob walks to the store.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(3) \"Who ambles to the store?\" we probably want to answer \"Bob\", because \"walk\" and \"amble\" are similar and not inconsistent. In isolation, a human would likely judge \"walk\" and \"amble\" to be similar, and by many WordNet-based similarity measures they would be judged similar, since \"walk\" is found as WordNet synsets 201904930, 201912893, 201959776 and 201882170, and \"amble\" is 201918183, which is a direct hyponym of 201904930.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We can use Resnik's method (Resnik, 1995) to compute similarity. In particular we can use Ted Pedersen's (et al) implementation (Pedersen et al., 2004) , which gives the result of walk#n#4 amble#n#1 9.97400037941652 . Word2Vec (Mikolov et al., 2013a) using their 300-dimensional vectors trained on Google News, also gives a relatively high similarity score for the two words > model.similarity('walk', 'amble') 0.525", "cite_spans": [ { "start": 27, "end": 41, "text": "(Resnik, 1995)", "ref_id": "BIBREF17" }, { "start": 94, "end": 112, "text": "Pedersen's (et al)", "ref_id": null }, { "start": 128, "end": 151, "text": "(Pedersen et al., 2004)", "ref_id": "BIBREF15" }, { "start": 227, "end": 250, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "But what about if we have (4) \"Bob has an apple.\" (5) \"Who has a pear?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Is Similarity the Right Measure?", "sec_num": "2" }, { "text": "We find that this pair is even more similar than \"walk\" and \"amble\" > model.similarity('apple', 'pear') 0.645 and from Resnik's algorithm Concept #1: apple Concept #2: pear apple pear apple#n#1 pear#n#1 10.15 and yet clearly 4 is not a valid answer to 5. One possibility is that synset subsumption as a measure of word substitution (Kremer et al., 2014; Biemann, 2013 ) 1 2 may be the appropriate metric, rather than word similarity.", "cite_spans": [ { "start": 332, "end": 353, "text": "(Kremer et al., 2014;", "ref_id": "BIBREF7" }, { "start": 354, "end": 367, "text": "Biemann, 2013", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Is Similarity the Right Measure?", "sec_num": "2" }, { "text": "Our approach starts with the user's question and the sentence that is most likely to contain the answer, which is selected with the BM25 algorithm (Jones et al., 2000) . Then we identify the incoming question as a particular question type according to the UIUC taxonomy 3 . To this taxonomy we have added the yes/no question type. Then we pass the sentence and the question to a class written specifically to handle a particular UIUC question type. Generally, all the base question types behave differently from one another. 
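The following Python sketch illustrates this control flow of sentence selection, question classification, and per-type dispatch; the class and method names here are illustrative placeholders rather than the names used in our implementation:

# Illustrative sketch of the pipeline; names are placeholders, not the
# actual classes in our system.
class EntityExtractor:
    def extract(self, question, sentence, subtype):
        # generic or subtype-specific Entity handling goes here
        raise NotImplementedError

class HumanExtractor:
    def extract(self, question, sentence, subtype):
        raise NotImplementedError

EXTRACTORS = {
    'ENTITY': EntityExtractor(),
    'HUMAN': HumanExtractor(),
    # ... one handler per UIUC base type, plus the added yes/no type
}

def answer(question, corpus, selector, classifier):
    # 1. pick the sentence most likely to contain the answer (BM25)
    sentence = selector.top_sentence(question, corpus)
    # 2. classify the question into a UIUC type, e.g. ENTITY:animal
    base_type, subtype = classifier.classify(question)
    # 3. dispatch to the class written for that base type
    return EXTRACTORS[base_type].extract(question, sentence, subtype)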
Within a base question type, subtypes may be handled generically or with code specially targeted for that subtype. For this paper, we first discuss the approach to question classification, and then to answer extraction with a focus on the question subtypes that are amenable to a WordNet-based approach.", "cite_spans": [ { "start": 147, "end": 167, "text": "(Jones et al., 2000)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "3" }, { "text": "This section presents a question classifier with several novel semantic and syntactic features based on extraction of question foci. We use several sources of semantic information for representing features for each question focus. Our model uses a simple margin-based online algorithm. We achieve state-of-the-art performance on both finegrained and coarse-grained question classification. As the focus of this paper is on WordNet, we leave many details to a future paper and primarily report the features used, the learning algorithm and results, without further justification", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification", "sec_num": "4" }, { "text": "Question analysis is a crucial step in many successful question answering systems. Determining the expected answer type for a question can significantly constrain the search space of potential answers. For example, if the expected answer type is country, a system can rule out all documents or sentences not containing mentions of countries. Furthermore, accurately choosing the expected answer type is extremely important for systems that use type-specific strategies for answer selection. A system might, for example, have a specific unit for handling definition questions or reason questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "4.1" }, { "text": "In the last decade, many systems have been proposed for question classification (Li and Roth, 2006; Huang et al., 2008; Silva et al., 2011) . Li and Roth (Li and Roth, 2002) introduced a two-layered taxonomy of questions along with a dataset of 6000 questions divided into a training set of 5000 and test set of 500. This dataset (henceforth referred to as the UIUC dataset) has since become a standard benchmark for question classification systems.", "cite_spans": [ { "start": 80, "end": 99, "text": "(Li and Roth, 2006;", "ref_id": "BIBREF10" }, { "start": 100, "end": 119, "text": "Huang et al., 2008;", "ref_id": "BIBREF5" }, { "start": 120, "end": 139, "text": "Silva et al., 2011)", "ref_id": "BIBREF18" }, { "start": 142, "end": 173, "text": "Li and Roth (Li and Roth, 2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "4.1" }, { "text": "There have been a number of advances in word representation research. Turian et al. (Turian et al., 2010) demonstrated the usefulness of a number of different methods for representing words, including word embeddings and Brown clusters (Brown et al., 1992) , within supervised NLP application such as named entity recognition and shallow parsing. 
Since then, largely due to advances in neural language models for learning word embeddings, such as WORD2VEC (Mikolov et al., 2013b) , word vectors have become essential features in a number of NLP applications.", "cite_spans": [ { "start": 84, "end": 105, "text": "(Turian et al., 2010)", "ref_id": "BIBREF20" }, { "start": 236, "end": 256, "text": "(Brown et al., 1992)", "ref_id": "BIBREF1" }, { "start": 456, "end": 479, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "4.1" }, { "text": "In this paper, we describe a new model for question classification that takes advantage of recent work in word embedding models, beating the previous state-of-the-art by a significant margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "4.1" }, { "text": "Question foci (also known as headwords) have been shown to be an important source of information for question analysis. Therefore, their accurate identification is a crucial component of question classifiers. Unlike past approaches using phrase-structure parses, we use rules based on a dependency parse to extract each focus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "4.1.1" }, { "text": "We first extract the question word (how, what, when, where, which, who, whom, whose, or why) or imperative (name, tell, say, or give). This is done by naively choosing the first question word in the sentence, or first imperative word if no question word is found. This approach works well in practice, though a more advanced method may be beneficial in more general domains than the TREC (Voorhees, 1999) questions of the UIUC dataset.", "cite_spans": [ { "start": 388, "end": 404, "text": "(Voorhees, 1999)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "4.1.1" }, { "text": "We then define specific rules for each type of question word. For example, what/which questions are treated differently than how questions. In how questions, we identify words like much and many as question foci, while treating the heads of these words (e.g. feet or people) as a separate type known as QUANTITY (as opposed to FOCUS. Furthermore, when the focus of a how question is itself the head (e.g. how much did it cost? or how long did he swim?), we again differentiate the type using a MUCH type and a SPAN type that includes words like long and short.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "4.1.1" }, { "text": "A head chunk such as type of car contains two words, type and car, which both provide potentially useful sources of information about the question type. We refer to words such as type, kind, and brand as specifiers. We extract the argument of a specifier (car) as well as the specifier itself (type) as question foci.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "4.1.1" }, { "text": "In addition to head words of the question word, we also extract question foci linked to the root of the question when the root verb is an entailment word such as is, called, named, or known. Thus, for questions like What is the name of the tallest mountain in the world?, we extract name and mountain as question foci. 
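The sketch below gives the flavor of these rules in simplified form; the real extractor operates over a dependency parse, so the flat token lists and helper names here are illustrative assumptions only:

# Simplified sketch of question-word and specifier handling; the real
# rules operate on a dependency parse, not a flat token list.
QUESTION_WORDS = {'how', 'what', 'when', 'where', 'which',
                  'who', 'whom', 'whose', 'why'}
IMPERATIVES = {'name', 'tell', 'say', 'give'}
SPECIFIERS = {'type', 'kind', 'brand', 'name'}

def question_word(tokens):
    # naively choose the first question word, falling back to the
    # first imperative word if no question word is found
    for t in tokens:
        if t.lower() in QUESTION_WORDS:
            return t
    for t in tokens:
        if t.lower() in IMPERATIVES:
            return t
    return None

def expand_focus(head, argument):
    # for a head chunk like 'type of car', keep both the specifier
    # ('type') and its argument ('car') as question foci
    if head.lower() in SPECIFIERS and argument is not None:
        return [head, argument]
    return [head]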
This can result in many question foci in the case of a sentence like What relative of the racoon is sometimes known as the cat-bear?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "4.1.1" }, { "text": "We apply an in-house implementation of the multi-class Passive-Aggressive algorithm (Crammer et al., 2006) to learn our model's parameters. Specifically, we use PA-I, with", "cite_spans": [ { "start": 84, "end": 106, "text": "(Crammer et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "4.1.2" }, { "text": "\u03c4_t = min(C, l_t / ||x_t||^2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "4.1.2" }, { "text": "for t = 1, 2, ... where C is the aggressiveness parameter, l_t is the loss, and ||x_t||^2 is the squared norm of the feature vector for training example t. The Passive-Aggressive algorithm's name refers to its behavior: when the loss is 0, the parameters are unchanged, but when the loss is positive, the algorithm aggressively forces the loss to return to zero, regardless of step-size. \u03c4 (a Lagrange multiplier) is used to control the step-size. When C is increased, the algorithm performs a more aggressive update.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "4.1.2" }, { "text": "We replicate the evaluation framework used in (Li and Roth, 2006; Huang et al., 2008; Silva et al., 2011) . We use the full, unaltered 5500-question training set from UIUC for training, and evaluate on the 500-question test set.", "cite_spans": [ { "start": 46, "end": 65, "text": "(Li and Roth, 2006;", "ref_id": "BIBREF10" }, { "start": 66, "end": 85, "text": "Huang et al., 2008;", "ref_id": "BIBREF5" }, { "start": 86, "end": 105, "text": "Silva et al., 2011)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4.2" }, { "text": "To demonstrate the impact of our model's novel features, we performed a feature ablation test (Table 2) in which we removed groups of features from the full feature set. Table 3 : System comparison of accuracies for fine (50-class) and coarse (6-class) question labels.", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 177, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4.2" }, { "text": "Our model significantly outperforms all previous results for question classification on the UIUC dataset (Table 3) . Furthermore, we accomplished this without significant manual feature engineering or rule-writing, using a simple online-learning algorithm to determine the appropriate weights.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 114, "text": "(Table 3)", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "In this section we discuss techniques for short answer extraction once questions have been classified into a particular UIUC type. We employ a \"sieve\" approach, as in (Lee et al., 2011) , that has seen some success in tasks like coreference resolution and is contributing to a renaissance in rule-based, as opposed to machine learning, approaches in NLP. We provide in this paper one example of how, instead of taking an either/or approach, both methods can be combined into a high-performance system. 
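A minimal sketch of the sieve control structure, with placeholder rule functions standing in for the extractors described in the rest of this section, is:

# Sketch of a sieve: an ordered list of extraction rules, tried from
# most precise to most general until one yields an answer. The rule
# bodies are placeholders for the extractors described below.
def appositive_rule(question, sentence):
    return None  # return a short answer string, or None on failure

def srl_rule(question, sentence):
    return None

def capitalized_span_rule(question, sentence):
    return None

SIEVES = [appositive_rule, srl_rule, capitalized_span_rule]

def extract_short_answer(question, sentence):
    for rule in SIEVES:
        answer = rule(question, sentence)
        if answer is not None:
            return answer
    return None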
We focus below on the sieves that are specific to question types where we have been able to profitably employ WordNet for finding the right short answer. Preliminary results have been positive employing this approach.", "cite_spans": [ { "start": 167, "end": 185, "text": "(Lee et al., 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Answer Extraction", "sec_num": "5" }, { "text": "We have two strategies that are used across the base question types: employing semantic role labels and recognizing appositives. The third test set is TREC-8 (Voorhees, 1999) .", "cite_spans": [ { "start": 158, "end": 174, "text": "(Voorhees, 1999)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Answer Extraction", "sec_num": "5" }, { "text": "We employ the semantic role labeling of ClearNLP (Choi, 2012) 5 . While the labels are consistent with PropBank (Palmer et al., 2005) , ClearNLP fixes the definition of several of the labels (A2-A5) that are left undefined in PropBank. A0 is the \"Agent\" relation, which is often the subject of the sentence. A1 is the \"Patient\" or object of the sentence. The remainder can be found in (Choi, 2012). Let's look at an example and the list the steps followed in the code to analyse the question and answer.", "cite_spans": [ { "start": 112, "end": 133, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Role Labels", "sec_num": "5.2" }, { "text": "A: As a boy, Abraham Lincoln loved books.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(6) Q: What did Lincoln love?", "sec_num": null }, { "text": "We have the following dependency graphs among the tokens in each sentence: 7What did Lincoln love ? 1. We collect basic information from the question and answer sentence (a) find the question word, e.g. \"what\", \"when\", \"where\", etc. In Example 6 it is \"what-1\" (b) Locate the verb node nearest to the question word. In Example 6 it is \"love-4\" (c) Find the semantic relations in the question. We find an Agent/A0 relationship between Lincoln-3 and the verb love-4. We find a Patient/A1 relationship between the question word What-1 and the verb love-4. (See Examples 11 and 12). (d) Find semantic relations in the answer sentence. We find an Agent/A0 relationship between Lincoln-6 and the verb loved-7. We find an ARGM-PRD relationship between As-1 and the verb loved-7. We find a Patient/A1 relationship between books-8 and the verb loved-7. (See Examples 11 and 12). (e) Perform a graph structure match between the question and answer graphs formed by the set of their semantic role labels. Find the parent graph node in the answer that matches as many nodes in the question as possible. In our example, loved-7 is the best match. (See Examples 11 and 12).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(6) Q: What did Lincoln love?", "sec_num": null }, { "text": "2. Collect and score candidate answer nodes. Score each semantic child for best parent found in the previous step, based on part of speech, named entity, dependency relations from Stanford's CoreNLP (Manning et al., 2014) , and semantic role label information. 
We initialize each child to a value of 1.0 and then penalize it by 0.01 for the presence of any out of a set of possible undesirable features, as follows:", "cite_spans": [ { "start": 199, "end": 221, "text": "(Manning et al., 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "(6) Q: What did Lincoln love?", "sec_num": null }, { "text": "\u2022 The candidate's semantic role label starts with \"ARGM\", meaning that its semantic role is something other than A0-A5. (See Examples 11 and 12). Note that this is only applied in cases where the question type has been identified as \"Human\" or \"Entity\" \u2022 The node's dependency label = \"prep*\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(6) Q: What did Lincoln love?", "sec_num": null }, { "text": "indicating that it is a prepositional relationship. Note that this is only applied in cases where the question type has been identified as \"Human\" or \"Entity\" \u2022 If the candidate node is the same form (word spelling) as in the question, or its WordNet hyponym \u2022 If the candidate node is the same root (lemma) as in the question, or its Word-Net hyponym \u2022 If the candidate node is lower case. Note that this is only applied in cases where the question type has been identified as \"Human\" or \"Entity\" \u2022 If the candidate node has a child with a different semantic role label than in the question \u2022 If the candidate node is an adverb or a Wh-quantifier as marked by its part of speech label 3. Pick the dependency node with highest confidence score as the answer node. In our example we have As-1 = 0.97, Lincoln-6 = 0.96 and books-8 = 0.99.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(6) Q: What did Lincoln love?", "sec_num": null }, { "text": "Note that the step of scoring the answer nodes enumerates a small feature set with hand-set coefficients. We expect in a future phase to enumerate a much larger set of features, and then set the coefficients based on machine learning over our corpus of question-answer pairs. One simple experiment to show the value of semantic role labeling was conducted on a portion of our testing corpus. Using semantic role labels we achieved total of 638 correct answers out of 1460 questions (which was the total number in the IPsoft internal Q&A test set at the time of the test), for a correctness score of 43.7%. Without semantic role labels the result was 462 out of 1460, or 31.6%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(6) Q: What did Lincoln love?", "sec_num": null }, { "text": "The appositive is a grammatical construction in which one phrase elaborates or restricts another. For example, (13) My cousin, Bob, is a great guy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appositives", "sec_num": "5.3" }, { "text": "\"Bob\" further restricts the identity of \"My cousin\". We use the appositive grammatical relation to identify the answers to \"What\" questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appositives", "sec_num": "5.3" }, { "text": "Short answer extraction for the Entity question type has some specialized rules for some subtypes, and some rules which are applied generally to all the other subtypes. We are also exploring using WordNet (Fellbaum, 1998) synsets to get word lists that are members of each Entity subtype (see Table 4 ). 
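A sketch of how such a word list can be built with NLTK is shown below; the system keys the lists off the synset offsets listed in Table 4, while the sketch uses a synset name for readability:

# Sketch (not the system's code): build the word list for one Entity
# subtype as all lemmas at or below a WordNet synset, using NLTK.
# Requires the WordNet corpus: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def subtype_word_list(synset_name):
    root = wn.synset(synset_name)
    synsets = {root} | set(root.closure(lambda s: s.hyponyms()))
    words = set()
    for s in synsets:
        for lemma in s.lemma_names():
            words.add(lemma.replace('_', ' ').lower())
    return words

# e.g. a word list for the Entity.animal subtype
animal_words = subtype_word_list('animal.n.01')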
This appears to have a significant effect, since 10 questions are answerable with this approach just addressing two of the 22 Entity subtypes. More work is needed to get comprehensive statistics.", "cite_spans": [ { "start": 205, "end": 221, "text": "(Fellbaum, 1998)", "ref_id": null } ], "ref_spans": [ { "start": 293, "end": 300, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Entity Question Type", "sec_num": "5.4" }, { "text": "5.4.1 Entity.animal Subtype 1. First try to find an appositive relationship. If there is one, use it as the answer. For example 14, if we ask \"Who is a great guy?\" we have a simple answer with \"Bob\" as the appositive. If that fails:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Question Type", "sec_num": "5.4" }, { "text": "2. try the approach described above in subsection 5.2 and keep the candidate with the highest confidence score 5.4.2 Entity.creative Subtype 1. First try to find an appositive relationship. If there is one, use it as the answer. If that fails:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Question Type", "sec_num": "5.4" }, { "text": "2. try the approach described above in subsection 5.2 and keep the candidate with the highest confidence score. If that fails:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Question Type", "sec_num": "5.4" }, { "text": "3. find the first capitalized sequence of words and return it 5.4.3 All Other Entity Subtypes 1. First try to find an appositive relationship. If there is one, use it as the answer. If that fails:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Question Type", "sec_num": "5.4" }, { "text": "2. try the approach described above in subsection 5.2 and keep the candidate with the highest confidence score", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Question Type", "sec_num": "5.4" }, { "text": "Take for example the following (15) Q: What shrubs can be planted that will be safe from deer? A: Three old-time charmers make the list of shrubs unpalatable to deer: lilac, potentilla, and spiraea. Short Answer: Lilac, potentilla, and spiraea.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example", "sec_num": "5.5" }, { "text": "Knowing from WordNet that 112310349:{lilac}, and 112659356:{spiraea, spirea} (although not potentilla) are hyponyms of shrub makes it easy to find the right dependency parse subtree for the short answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example", "sec_num": "5.5" }, { "text": "Similarly for (16) Q: What athletic game did dentist William Beers write a standard book of rules for? A: In 1860, Beers began to codify the first written rules of the modern game of lacrosse. Short Answer: Lacrosse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example", "sec_num": "5.5" }, { "text": "knowing that 100455599:{game} is a hypernym of 100477392:{lacrosse} makes finding the right answer in the sentence easy. Table 4 lists all the types and subtypes in the UIUC taxonomy and the WordNet (Fellbaum, 1998) synset numbers that correspond to semantic types for the UIUC types. These are used to get all words that are in the given synsets as well as all words in the synsets that are more specific in the WordNet hyponym hierarchy than those listed. Note that below we prepend to the synset numbers a number for their part of speech. 
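As an aside, the membership test used in Examples 15 and 16 can be sketched as a hypernym check with NLTK; synset names stand in for the raw offsets, and the expected outputs in the comments assume the senses cited above:

# Sketch of the hypernym test behind Examples 15 and 16; the system
# works from the synset offsets in Table 4, names are used here only
# for readability. Requires the WordNet corpus: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def is_kind_of(word, type_synset_name):
    target = wn.synset(type_synset_name)
    for s in wn.synsets(word, pos=wn.NOUN):
        if s == target or target in s.closure(lambda x: x.hypernyms()):
            return True
    return False

# expected True for the shrub sense cited in Example 15
print(is_kind_of('lilac', 'shrub.n.01'))
# expected False: a pear is not a kind of shrub
print(is_kind_of('pear', 'shrub.n.01'))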
In the current scheme all are nouns, so the first number is always a \"1\". We only elaborate subtypes of Entity, Human, and Location as the other categories do not use WordNet for matching.", "cite_spans": [], "ref_spans": [ { "start": 121, "end": 128, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Example", "sec_num": "5.5" }, { "text": "Using a WordNet-based word replacement method appears to be better for question answering than using word similarity metrics. In preliminary tests 10 questions in a portion of our corpora are answerable with this approach just addressing two of the 22 Entity subtypes with WordNet based matching. While more experimentation is needed, the results are intuitive and promising. The current approach should be validated and compared against other approaches on current data sets such as (Pe\u00f1as et al., 2015) . ", "cite_spans": [ { "start": 484, "end": 504, "text": "(Pe\u00f1as et al., 2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://dkpro-similarity-asl. googlecode.com/files/TWSI2.zip 2 http://www.anc.org/MASC/coinco.tgz", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://cogcomp.cs.illinois.edu/Data/ QA/QC/definition.html http://cogcomp.cs.illinois.edu/Data/QA/ QC/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Creating a system for lexical substitutions from scratch using crowdsourcing. Language Resources and Evaluation", "authors": [ { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2013, "venue": "", "volume": "47", "issue": "", "pages": "97--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Biemann. 2013. Creating a system for lexi- cal substitutions from scratch using crowdsourcing. Language Resources and Evaluation, 47(1):97-122.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Class-based ngram models of natural language", "authors": [ { "first": "Peter", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Desouza", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Mercer", "suffix": "" }, { "first": "Jenifer", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "", "middle": [], "last": "Lai", "suffix": "" } ], "year": 1992, "venue": "Computational linguistics", "volume": "18", "issue": "4", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Brown, Peter Desouza, Robert Mercer, Vincent dellaPietra, and Jenifer Lai. 1992. Class-based n- gram models of natural language. Computational linguistics, 18(4):467-479.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Optimization of Natural Language Processing Components for Robustness and Scalability", "authors": [ { "first": "D", "middle": [], "last": "Jinho", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinho D. Choi. 2012. Optimization of Natural Lan- guage Processing Components for Robustness and Scalability. Ph.D. thesis, University of Colorado at Boulder, Boulder, CO, USA. 
AAI3549172.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Shai Shalev-Shwartz, and Yoram Singer", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Dekel", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Keshet", "suffix": "" } ], "year": 2006, "venue": "The Journal of Machine Learning Research", "volume": "7", "issue": "", "pages": "551--585", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. The Journal of Ma- chine Learning Research, 7:551-585.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Question classification using head words and their hypernyms", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Thint", "suffix": "" }, { "first": "Zengchang", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "927--936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Marcus Thint, and Zengchang Qin. 2008. Question classification using head words and their hypernyms. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing, pages 927-936. Association for Computa- tional Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A probabilistic model of information retrieval: development and comparative experiments: Part 1", "authors": [ { "first": "K", "middle": [], "last": "Jones", "suffix": "" }, { "first": "S", "middle": [], "last": "Walker", "suffix": "" }, { "first": "S", "middle": [ "E" ], "last": "Robertson", "suffix": "" } ], "year": 2000, "venue": "formation Processing & Management", "volume": "36", "issue": "", "pages": "779--808", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Sparck Jones, S. Walker, and S.E. Robertson. 2000. A probabilistic model of information retrieval: de- velopment and comparative experiments: Part 1. In- formation Processing & Management, 36(6):779 - 808.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "What Substitutes Tell Us -Analysis of an \"All-Words\" Lexical Substitution Corpus", "authors": [ { "first": "Gerhard", "middle": [], "last": "Kremer", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Thater", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerhard Kremer, Katrin Erk, Sebastian Pad, and Stefan Thater. 2014. What Substitutes Tell Us -Analysis of an \"All-Words\" Lexical Substitution Corpus. 
In Proceedings of EACL, Gothenburg, Sweden.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Stanford's Multi-pass Sieve Coreference Resolution System at the CoNLL-2011 Shared Task", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, CONLL Shared Task '11", "volume": "", "issue": "", "pages": "28--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Ju- rafsky. 2011. Stanford's Multi-pass Sieve Corefer- ence Resolution System at the CoNLL-2011 Shared Task. In Proceedings of the Fifteenth Confer- ence on Computational Natural Language Learn- ing: Shared Task, CONLL Shared Task '11, pages 28-34, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning question classifiers", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th international conference on Computational linguistics", "volume": "1", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Li and Dan Roth. 2002. Learning question clas- sifiers. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1-7. Association for Computational Linguis- tics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning question classifiers: the role of semantic information", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2006, "venue": "Natural Language Engineering", "volume": "12", "issue": "03", "pages": "229--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Li and Dan Roth. 2006. Learning question clas- sifiers: the role of semantic information. Natural Language Engineering, 12(03):229-249.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Stanford CoreNLP Natural Language Processing Toolkit", "authors": [ { "first": "Chris", "middle": [], "last": "Manning", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Manning, John Bauer, Mihai Surdeanu, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Process- ing Toolkit. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Workshop at ICLR. Now Pub", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. In Proceedings of Workshop at ICLR. Now Pub.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013b. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The proposition bank: An annotated corpus of semantic roles", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2005, "venue": "Computational linguistics", "volume": "31", "issue": "1", "pages": "71--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated cor- pus of semantic roles. Computational linguistics, 31(1):71-106.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "WordNet::Similarity: Measuring the Relatedness of Concepts", "authors": [ { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Michelizzi", "suffix": "" } ], "year": 2004, "venue": "Demonstration Papers at HLT-NAACL 2004, HLT-NAACL-Demonstrations '04", "volume": "", "issue": "", "pages": "38--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Pedersen, Siddharth Patwardhan, and Jason Miche- lizzi. 2004. WordNet::Similarity: Measuring the Relatedness of Concepts. In Demonstra- tion Papers at HLT-NAACL 2004, HLT-NAACL- Demonstrations '04, pages 38-41, Stroudsburg, PA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Overview of the CLEF question answering track", "authors": [ { "first": "Anselmo", "middle": [], "last": "Pe\u00f1as", "suffix": "" }, { "first": "Christina", "middle": [], "last": "Unger", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Paliouras", "suffix": "" }, { "first": "Ioannis", "middle": [ "A" ], "last": "Kakadiaris", "suffix": "" } ], "year": 2015, "venue": "Experimental IR Meets Multilinguality, Multimodality, and Interaction -6th International Conference of the CLEF Association", "volume": "", "issue": "", "pages": "539--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anselmo Pe\u00f1as, Christina Unger, Georgios Paliouras, and Ioannis A. Kakadiaris. 2015. Overview of the CLEF question answering track 2015. In Ex- perimental IR Meets Multilinguality, Multimodality, and Interaction -6th International Conference of the CLEF Association, CLEF 2015, Toulouse, France, September 8-11, 2015, Proceedings, pages 539-544.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Using information content to evaluate semantic similarity in a taxonomy", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95", "volume": "", "issue": "", "pages": "448--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In In Proceedings of the 14th International Joint Con- ference on Artificial Intelligence (IJCAI-95, pages 448-453. Morgan Kaufmann.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "From symbolic to subsymbolic information in question classification", "authors": [ { "first": "Joao", "middle": [], "last": "Silva", "suffix": "" }, { "first": "Lu\u00edsa", "middle": [], "last": "Coheur", "suffix": "" }, { "first": "Ana", "middle": [ "Cristina" ], "last": "Mendes", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Wichert", "suffix": "" } ], "year": 2011, "venue": "Artificial Intelligence Review", "volume": "35", "issue": "2", "pages": "137--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joao Silva, Lu\u00edsa Coheur, Ana Cristina Mendes, and Andreas Wichert. 2011. From symbolic to sub- symbolic information in question classification. Ar- tificial Intelligence Review, 35(2):137-154.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Question Generation as a Competitive Undergraduate Course Project", "authors": [ { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 2008, "venue": "NSF Workshop on the Question Generation Shared Task and Evaluation Challenge", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noah A. Smith, Michael Heilman, , and Rebecca Hwa. 2008. Question Generation as a Competitive Un- dergraduate Course Project. 
In NSF Workshop on the Question Generation Shared Task and Evalua- tion Challenge, Arlington, VA, September.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Word representations: a simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th annual meeting of the association for computational linguistics", "volume": "", "issue": "", "pages": "384--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for compu- tational linguistics, pages 384-394. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Overview of the TREC 2002 Question Answering Track", "authors": [ { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 11th Text Retrieval Conference (TREC)", "volume": "", "issue": "", "pages": "115--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen M. Voorhees. 1999. Overview of the TREC 2002 Question Answering Track. In In Proceedings of the 11th Text Retrieval Conference (TREC), pages 115- 123.", "links": null } }, "ref_entries": { "TABREF1": { "text": "Feature ablation study: accuracies on coarse and fine-grained labels after removing specific features from the full feature set.", "content": "
System             Fine  Coarse
Li and Roth 2002   84.2  91.0
Huang et al. 2008  89.2  93.4
Silva et al. 2011  90.8  95.0
Our System         92.0  96.2
", "num": null, "type_str": "table", "html": null }, "TABREF5": { "text": "UIUC class to WordNet synset mappings", "content": "", "num": null, "type_str": "table", "html": null } } } }