{ "paper_id": "S14-2002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:32:10.304734Z" }, "title": "SemEval-2014 Task 2: Grammar Induction for Spoken Dialogue Systems", "authors": [ { "first": "Ioannis", "middle": [], "last": "Klasinas", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technical University of Crete", "location": { "postCode": "73100", "settlement": "Chania", "country": "Greece" } }, "email": "iklasinas@isc.tuc.gr" }, { "first": "Elias", "middle": [], "last": "Iosif", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Technical University of Athens", "location": { "postCode": "15780", "settlement": "Zografou", "country": "Greece" } }, "email": "" }, { "first": "Katerina", "middle": [], "last": "Louka", "suffix": "", "affiliation": { "laboratory": "", "institution": "Voiceweb S.A", "location": { "postCode": "15124", "settlement": "Athens", "country": "Greece" } }, "email": "klouka@voiceweb.eu" }, { "first": "Alexandros", "middle": [], "last": "Potamianos", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Technical University of Athens", "location": { "postCode": "15780", "settlement": "Zografou", "country": "Greece" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we present the SemEval-2014 Task 2 on spoken dialogue grammar induction. The task is to classify a lexical fragment to the appropriate semantic category (grammar rule) in order to construct a grammar for spoken dialogue systems. We describe four subtasks covering two languages, English and Greek, and three speech application domains, travel reservation, tourism and finance. The classification results are compared against the groundtruth. Weighted and unweighted precision, recall and fmeasure are reported. Three sites participated in the task with five systems, employing a variety of features and in some cases using external resources for training. The submissions manage to significantly beat the baseline, achieving a f-measure of 0.69 in comparison to 0.56 for the baseline, averaged across all subtasks.", "pdf_parse": { "paper_id": "S14-2002", "_pdf_hash": "", "abstract": [ { "text": "In this paper we present the SemEval-2014 Task 2 on spoken dialogue grammar induction. The task is to classify a lexical fragment to the appropriate semantic category (grammar rule) in order to construct a grammar for spoken dialogue systems. We describe four subtasks covering two languages, English and Greek, and three speech application domains, travel reservation, tourism and finance. The classification results are compared against the groundtruth. Weighted and unweighted precision, recall and fmeasure are reported. Three sites participated in the task with five systems, employing a variety of features and in some cases using external resources for training. The submissions manage to significantly beat the baseline, achieving a f-measure of 0.69 in comparison to 0.56 for the baseline, averaged across all subtasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This task aims to foster the application of computational models of lexical semantics to the field of spoken dialogue systems (SDS) for the problem of grammar induction. 
Grammars constitute a vital component of SDS, representing the semantics of the domain of interest and allowing the system to respond correctly to a user's utterance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task has been developed in tight collaboration between the research community and commercial SDS grammar developers, under the auspices of the EU-IST PortDial project 1 . Among the project aims is to help automate the grammar development and localization process. Unlike previous approaches (Wang and Acero, 2006; Cramer, 2007) that have focused on full automation, PortDial adopts a human-in-the-loop approach where a developer bootstraps each grammar rule or request type with a few examples (use cases), and machine learning algorithms are then used to propose grammar rule enhancements to the developer. The enhancements are post-edited by the developer and new grammar rule suggestions are proposed by the system, in an iterative fashion, until a grammar of sufficient quality is achieved. In this task, we focus on a snapshot of this process, where a portion of the grammar has already been induced and post-edited by the developer, and new candidate fragments arrive to be classified into an existing rule (or rejected). The goal is to develop machine learning algorithms for classifying candidate lexical fragments into the correct grammar rule (semantic category). The task is equally relevant for both finite-state machine and statistical grammar induction.", "cite_spans": [ { "start": 295, "end": 317, "text": "(Wang and Acero, 2006;", "ref_id": null }, { "start": 318, "end": 331, "text": "Cramer, 2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this task the semantic hierarchy of SDS grammars has two layers, namely, low- and high-level. Low-level rules are similar to gazetteers, referring to terminal concepts that can be represented as sets of lexical entries. For example, the concept of city name can be represented as <CITY> = (\"London\", \"Paris\", ...). High-level rules are defined on top of low-level rules and can be lexicalized as textual fragments (or chunks), e.g., <TOCITY> = (\"fly to <CITY>\", ...). Using the above examples, the sentence \"I want to fly to Paris\" will first be parsed as \"I want to fly to <CITY>\" and finally as \"I want to <TOCITY>\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
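To make the two-layer parsing above concrete, here is a minimal sketch; the two rules mirror the running example, while the string-replacement parser and all Python details are illustrative assumptions, not the task's actual implementation:

```python
# Minimal sketch of two-layer SDS grammar parsing (illustrative only).

# Low-level rules: terminal concepts represented as sets of lexical entries.
LOW_LEVEL = {
    "<CITY>": ["London", "Paris"],
}

# High-level rules: fragments lexicalized over low-level rule names.
HIGH_LEVEL = {
    "<TOCITY>": ["fly to <CITY>"],
}

def parse(sentence: str) -> str:
    # First pass: substitute lexical entries with low-level rule names.
    for rule, entries in LOW_LEVEL.items():
        for entry in entries:
            sentence = sentence.replace(entry, rule)
    # Second pass: substitute lexicalized fragments with high-level rule names.
    for rule, fragments in HIGH_LEVEL.items():
        for fragment in fragments:
            sentence = sentence.replace(fragment, rule)
    return sentence

print(parse("I want to fly to Paris"))
# "I want to fly to Paris" -> "I want to fly to <CITY>" -> "I want to <TOCITY>"
```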
{ "text": "In this task, we focus exclusively on high-level rule induction, assuming that the low-level rules are known. The problem of fragment extraction and selection is simplified by investigating the binary classification of (already extracted) fragments into valid and non-valid. The task then boils down mainly to a semantic similarity estimation problem: the assignment of valid fragments to high-level rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The manual development of grammars is a time-consuming and tedious process that requires human expertise, posing an obstacle to the rapid porting of SDS to new domains and languages. A semantically coherent workflow for SDS grammar development starts from the definition of low-level rules and proceeds to high-level ones. This process is also valid for the case of induction algorithms. Automatic or machine-aided grammar creation for spoken dialogue systems can be broadly divided into two categories (Wang and Acero, 2006): knowledge-based (or top-down) and data-driven (or bottom-up) approaches.", "cite_spans": [ { "start": 499, "end": 521, "text": "(Wang and Acero, 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" }, { "text": "Knowledge-based approaches rely on the manual or semi-automatic development of domain-specific grammars. They start from the domain ontology (or taxonomy), often in the form of semantic frames. First, terminal concepts in the ontology (that correspond to low-level grammar rules) get populated with values, e.g., <CITY>, and then high-level concepts (that correspond to high-level grammar rules) get lexicalized, creating grammar fragments. Finally, phrase headers and trailers are added to create full sentences. The resulting grammars often suffer from limited coverage (poor recall). In order to improve coverage, regular expressions and word/phrase order permutations are used, however at the cost of over-generalization (poor precision). Moreover, knowledge-based grammars are costly to create and maintain, as they require domain and engineering expertise, and they are not easily portable to new domains. This has led to the development of grammar authoring tools that aim at facilitating the creation and adaptation of grammars. SGStudio (Semantic Grammar Studio) (Wang and Acero, 2006), for example, enables 1) example-based grammar learning, 2) grammar controls, i.e., building blocks and operators for building more complex grammar fragments (regular expressions, lists of concepts), and 3) configurable grammar structures, allowing for domain-adaptation and word-spotting grammars. The Grammatical Framework Resource Grammar Library (GFRGL) (Ranta, 2004) enables the creation of multilingual grammars adopting an abstraction formalism, which aims to hide the linguistic details (e.g., morphology) from the grammar developer.", "cite_spans": [ { "start": 1067, "end": 1089, "text": "(Wang and Acero, 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" }, { "text": "Data-driven approaches rely solely on corpora of transcribed utterances (bottom-up) (Meng and Siu, 2002; Pargellis et al., 2004). The induction of low-level rules consists of two steps: 1) the identification of terms, and 2) the assignment of terms to rules. Standard tokenization techniques can be used for the first step; however, different approaches are required for the case of multi-word terms, e.g., \"New York\". In such cases, gazetteer lookup and named entity recognition can be employed (if the respective resources and tools are available), as well as corpus-based collocation metrics (Frantzi and Ananiadou, 1997). Typically, the identified terms are assigned to low-level rules via clustering algorithms operating over a feature space that is built according to term semantic similarity. The distributional hypothesis of meaning (Harris, 1954) is a widely-used approach for estimating term similarity. A comparative study of similarity metrics for the induction of SDS low-level rules is presented in (Pargellis et al., 2004), while the combination of metrics was investigated in (Iosif et al., 2006). Different clustering algorithms have been applied, including hard- (Meng and Siu, 2002) and soft-decision (Iosif and Potamianos, 2007) agglomerative clustering.", "cite_spans": [ { "start": 84, "end": 104, "text": "(Meng and Siu, 2002;", "ref_id": "BIBREF11" }, { "start": 105, "end": 128, "text": "Pargellis et al., 2004)", "ref_id": "BIBREF16" }, { "start": 605, "end": 634, "text": "(Frantzi and Ananiadou, 1997)", "ref_id": "BIBREF4" }, { "start": 857, "end": 870, "text": "(Harris, 1954", "ref_id": "BIBREF6" }, { "start": 1030, "end": 1054, "text": "(Pargellis et al., 2004)", "ref_id": "BIBREF16" }, { "start": 1110, "end": 1130, "text": "(Iosif et al., 2006)", "ref_id": "BIBREF8" }, { "start": 1199, "end": 1219, "text": "(Meng and Siu, 2002)", "ref_id": "BIBREF11" }, { "start": 1238, "end": 1266, "text": "(Iosif and Potamianos, 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" },
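To illustrate this clustering step, the sketch below represents each term by its left and right context words (following the distributional hypothesis) and groups the terms with hard-decision agglomerative clustering. The toy corpus, the term list and all parameter choices are hypothetical, not taken from the cited systems:

```python
# Sketch of low-level rule induction via context features and
# agglomerative clustering (illustrative toy data throughout).
from collections import Counter

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

corpus = [
    "i want to fly to london on monday",
    "i want to fly to paris on tuesday",
    "book a flight to boston on monday",
    "book a flight to madrid on tuesday",
]
terms = ["london", "paris", "boston", "madrid", "monday", "tuesday"]

def context_features(term):
    # Count the left and right neighbours of every occurrence of the term.
    feats = Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for i, tok in enumerate(tokens):
            if tok == term:
                feats["L:" + tokens[i - 1]] += 1
                feats["R:" + tokens[i + 1]] += 1
    return feats

features = [context_features(t) for t in terms]
keys = sorted(set().union(*features))
vectors = np.array([[f[k] for k in keys] for f in features], dtype=float)

# Hard-decision agglomerative clustering under cosine distance; each
# resulting cluster is a candidate low-level rule (here: cities vs. weekdays).
labels = fcluster(linkage(vectors, method="average", metric="cosine"),
                  t=2, criterion="maxclust")
for c in sorted(set(labels)):
    print([t for t, l in zip(terms, labels) if l == c])
```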
{ "text": "High-level rule induction is a less researched area that consists of two main sub-problems: 1) the extraction and selection of candidate fragments from a corpus, and 2) the assignment of fragments to rules. Regarding the first sub-problem, consider the fragments \"I want to depart from <CITY> on\" and \"depart from <CITY>\" for the air travel domain. Both express the meaning of departure city; however, the latter fragment is more concise and its semantics generalize better. The application of syntactic parsers for segment extraction is not straightforward, since the output is a full parse tree. Moreover, such parsers are typically trained over annotated corpora of formal language usage, while SDS corpora are often ungrammatical due to spontaneous speech. A few statistical parsing algorithms rely only on plain lexical features (Ponvert et al., 2011; Bisk and Hockenmaier, 2012); however, as with other algorithms, one needs to decide where to prune the parse tree. In (Georgiladakis et al., 2014), the explicit extraction and selection of fragments is investigated following an example-driven approach where a few rule seeds are provided by the grammar developer. The second sub-problem of high-level rule induction deals with the formulation of rules using the selected fragments. Each rule is meant to consist of semantically similar fragments. For this purpose, clustering algorithms can be employed, exploiting the semantic similarity between fragments as features. This is a challenging problem, since the fragments are multi-word structures whose overall meaning is composed according to the semantics of the individual constituents. Recently, several models have been proposed regarding phrase (Mitchell and Lapata, 2010) and sentence similarity (Agirre et al., 2012), while an approach towards addressing the issue of semantic compositionality is presented in (Milajevs and Purver, 2014).", "cite_spans": [ { "start": 985, "end": 1013, "text": "(Georgiladakis et al., 2014)", "ref_id": "BIBREF5" }, { "start": 1710, "end": 1737, "text": "(Mitchell and Lapata, 2010)", "ref_id": "BIBREF14" }, { "start": 1762, "end": 1783, "text": "(Agirre et al., 2012)", "ref_id": "BIBREF0" }, { "start": 1878, "end": 1905, "text": "(Milajevs and Purver, 2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" },
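As a toy illustration of composing fragment semantics and attaching fragments to rules, the sketch below uses vector addition over word vectors (one simple composition model from this literature) and nearest-rule assignment by cosine similarity. The three-dimensional vectors and the rule names <FROMCITY> and <TOCITY> are invented for the example; a real system would use corpus-derived representations:

```python
# Sketch: additive composition of fragment meaning plus nearest-rule
# assignment (all vectors are hypothetical toy values).
import numpy as np

word_vectors = {
    "depart": np.array([0.9, 0.1, 0.0]),
    "leave":  np.array([0.8, 0.2, 0.0]),
    "arrive": np.array([0.1, 0.9, 0.0]),
    "from":   np.array([0.3, 0.0, 0.6]),
    "at":     np.array([0.0, 0.3, 0.6]),
    "<CITY>": np.array([0.1, 0.1, 0.9]),
}

def compose(fragment):
    # Additive composition: the fragment vector is the sum of its word vectors.
    return sum(word_vectors[w] for w in fragment.split())

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each high-level rule is represented by the fragments already assigned to it.
rules = {
    "<FROMCITY>": ["depart from <CITY>"],
    "<TOCITY>":   ["arrive at <CITY>"],
}

candidate = "leave from <CITY>"
scores = {rule: max(cosine(compose(candidate), compose(f)) for f in fragments)
          for rule, fragments in rules.items()}
print(max(scores, key=scores.get))  # -> <FROMCITY>
```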
{ "text": "The main drawback of data-driven approaches is the problem of data sparseness, which may affect the coverage of the grammar. A popular solution to the data sparseness bottleneck is to harvest in-domain data from the web. Recently, this has been an active research area, both for SDS and for language modeling in general. Data harvesting is performed in two steps: (i) query formulation, and (ii) selection of relevant documents or sentences (Klasinas et al., 2013). Posing the appropriate queries is important for obtaining sentences that are both in-domain and linguistically diverse. In (Sethy et al., 2007), an in-domain language model was used to identify the most appropriate n-grams to use as web queries. An in-domain language model was also used in (Klasinas et al., 2013) for the selection of relevant sentences. A more sophisticated query formulation was proposed in (Sarikaya, 2008), where a set of queries of varying length and complexity was generated from each in-domain utterance. These approaches assume the availability of in-domain data (even if limited) for the successful formulation of queries; this dependency is not eliminated even when using a mildly lexicalized domain ontology to formulate the queries, as in (Misu and Kawahara, 2006). Selecting the most relevant sentences returned by web queries is typically done using statistical similarity metrics between in-domain data and retrieved documents, for example the BLEU metric (Papineni et al., 2002) of n-gram similarity in (Sarikaya, 2008) and a relative entropy (Kullback-Leibler) metric in (Sethy et al., 2007). In cases where in-domain data is not available, cf. (Misu and Kawahara, 2006), heuristics (pronouns, sentence length, wh-questions) and matches with out-of-domain language models can be used to identify sentences for training SDS grammars. In (Sarikaya, 2008), the produced grammar fragments are also parsed and attached to the domain ontology. Harvesting web data can produce high-quality grammars while requiring up to 10 times less in-domain data (Sarikaya, 2008).", "cite_spans": [ { "start": 444, "end": 467, "text": "(Klasinas et al., 2013)", "ref_id": "BIBREF9" }, { "start": 584, "end": 604, "text": "(Sethy et al., 2007)", "ref_id": null }, { "start": 747, "end": 770, "text": "(Klasinas et al., 2013)", "ref_id": "BIBREF9" }, { "start": 867, "end": 883, "text": "(Sarikaya, 2008)", "ref_id": null }, { "start": 1225, "end": 1250, "text": "(Misu and Kawahara, 2006)", "ref_id": "BIBREF13" }, { "start": 1457, "end": 1480, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF15" }, { "start": 1505, "end": 1521, "text": "(Sarikaya, 2008)", "ref_id": null }, { "start": 1577, "end": 1597, "text": "(Sethy et al., 2007)", "ref_id": null }, { "start": 1652, "end": 1677, "text": "(Misu and Kawahara, 2006)", "ref_id": "BIBREF13" }, { "start": 1844, "end": 1860, "text": "(Sarikaya, 2008)", "ref_id": null }, { "start": 2052, "end": 2068, "text": "(Sarikaya, 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" },
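The sentence-selection step can be illustrated with a small sketch that ranks harvested sentences by their average per-token log-probability under a unigram language model estimated from an in-domain seed corpus, a crude stand-in for the relative entropy and BLEU criteria cited above; all data shown are toy examples:

```python
# Sketch of selecting in-domain sentences from harvested web data
# (toy seed corpus and toy harvested sentences).
import math
from collections import Counter

seed = [
    "i want to fly to boston",
    "book a flight to paris",
]
harvested = [
    "i want to book a flight to madrid",
    "the weather in paris is mild in spring",
]

# Unigram model with add-one smoothing over the seed corpus.
counts = Counter(w for s in seed for w in s.split())
total = sum(counts.values())
vocab_size = len(counts) + 1  # reserve one slot for unseen words

def avg_log_prob(sentence):
    # Average log-probability per token, so sentence length cancels out.
    tokens = sentence.split()
    lp = sum(math.log((counts[w] + 1) / (total + vocab_size)) for w in tokens)
    return lp / len(tokens)

for s in sorted(harvested, key=avg_log_prob, reverse=True):
    print(round(avg_log_prob(s), 2), s)
# The flight-booking sentence scores higher and would be selected.
```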
{ "text": "Further, data-driven approaches induce syntactic grammars but do not learn the corresponding meanings; for this purpose, an additional step is required in which the grammar fragments are parsed and attached to the domain ontology (Sarikaya, 2008). Also, in many cases it has been observed that the fully automated bottom-up paradigm results in grammars of moderate quality (Wang and Acero, 2006), especially on corpora containing longer sentences and more lexical variety (Cramer, 2007). Finally, algorithms focusing on cross-lingual grammar induction, like CLIoS (Kuhn, 2004), are often even more resource-intensive, as they require training corpora of parallel text and sometimes also a grammar for one of the languages. Grammar quality can be improved by introducing a human in the loop of grammar induction (Portdial, 2014a): an expert validates the automatically created results (Meng and Siu, 2002).", "cite_spans": [ { "start": 228, "end": 244, "text": "(Sarikaya, 2008)", "ref_id": null }, { "start": 367, "end": 389, "text": "(Wang and Acero, 2006)", "ref_id": null }, { "start": 467, "end": 481, "text": "(Cramer, 2007)", "ref_id": "BIBREF3" }, { "start": 559, "end": 571, "text": "(Kuhn, 2004)", "ref_id": "BIBREF10" }, { "start": 884, "end": 904, "text": "(Meng and Siu, 2002)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Prior Work", "sec_num": "2" }, { "text": "Next we describe in detail the candidate grammar fragment classification SemEval task. This task is part of a grammar rule induction scenario for high-level rules. The evaluation focuses on spoken dialogue system grammars for multiple domains and languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "The goal of the task is to classify a number of fragments into the rules available in the grammar. For each grammar we provide a training and a development set, i.e., sets of rules with their associated fragments, and a test set which is composed of plain fragments. An excerpt of the train set for the rule \"<TOCITY>\" is \"ARRIVE AT <CITY>, ARRIVES AT <CITY>, GOING TO <CITY>\" and of the test set \"GOING INTO <CITY>, ARRIVES INTO <CITY>\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Design", "sec_num": "3.1" }, { "text": "In preliminary experiments during the task design we noticed that if the test set consists of valid fragments only, good classification performance is achieved, even when using the naive baseline system described later in this paper. To make the task more realistic we have included a set of \"junk\" fragments not corresponding to any specific rule. Junk fragments were added both in the train set, where they are annotated as such, and in the test set. For this task we have artificially created the junk fragments by removing or adding words from legitimate fragments. Example junk fragments used are \"HOLD AT AT