{ "paper_id": "C96-1033", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:50:31.412390Z" }, "title": "FeasPar -A Feature Structure Parser Learning to Parse Spoken Language", "authors": [ { "first": "Finn", "middle": [], "last": "Dag Bu\u00a2", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe and experimentally evaluate a system, FeasPar, that learns parsing spontaneous speech. To train and run FeasPar (Feature Structure Parser), only limited handmodeled knowledge is required. The FeasPar architecture consists of neural networks and a search. The networks spilt the incoming sentence into chunks, which are labeled with feature values and chunk relations. Then, the search finds the most probable and consistent feature structure. FeasPar is trained, tested and evaluated with the Spontaneous Schednling Task, and compared with a handmodeled LRparser. The handmodeling effort for Fea-sPar is 2 weeks. The handmodeling effort for the LR-parser was 4 months. FeasPar performed better than the LRparser in all six comparisons that are made.", "pdf_parse": { "paper_id": "C96-1033", "_pdf_hash": "", "abstract": [ { "text": "We describe and experimentally evaluate a system, FeasPar, that learns parsing spontaneous speech. To train and run FeasPar (Feature Structure Parser), only limited handmodeled knowledge is required. The FeasPar architecture consists of neural networks and a search. The networks spilt the incoming sentence into chunks, which are labeled with feature values and chunk relations. Then, the search finds the most probable and consistent feature structure. FeasPar is trained, tested and evaluated with the Spontaneous Schednling Task, and compared with a handmodeled LRparser. The handmodeling effort for Fea-sPar is 2 weeks. The handmodeling effort for the LR-parser was 4 months. FeasPar performed better than the LRparser in all six comparisons that are made.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "When building a speech parsing component for small domains, an important goal is to get good performance. If low hand labor is involved, then it's even better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unification based formalisms, e.g. (Gazdar et al., 1985; Kaplan and Bresnan, 1982; Pollard and Sag, ]987), have been very successful for analyzing written language, because they have provided parses with rich and detailed linguistic information. However, these approaches have two major drawbacks: first, they require hand-designed symbolic knowledge like lexica and grammar rules, and second, this knowledge is too rigid, causing problems with ungranlmaticality and other deviations from linguistic rules. These deviations are manageable and low in number, when analyzing written language, but not for spoken language. The latter also contains spontaneous effects and speech recognition errors. (On the other hand, the good thing is that spoken language tend to contain less complex structures than written language.) Several methods have been suggested compensate for these speech related problems: e.g. 
scores and penalties, probabilistic rules, and skipping words (Dowding et al., 1993; Seneff, 1992; Lavie and Tomita, 1993; Issar and Ward, 1993).", "cite_spans": [ { "start": 35, "end": 56, "text": "(Gazdar et al., 1985;", "ref_id": "BIBREF5" }, { "start": 57, "end": 82, "text": "Kaplan and Bresnan, 1982;", "ref_id": "BIBREF8" }, { "start": 83, "end": 94, "text": "Pollard and Sag, 1987)", "ref_id": "BIBREF11" }, { "start": 967, "end": 989, "text": "(Dowding et al., 1993;", "ref_id": "BIBREF4" }, { "start": 990, "end": 1003, "text": "Seneff, 1992;", "ref_id": "BIBREF12" }, { "start": 1004, "end": 1027, "text": "Lavie and Tomita, 1993;", "ref_id": "BIBREF9" }, { "start": 1028, "end": 1049, "text": "Issar and Ward, 1993)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A small community has experimented with either purely statistical approaches (Brown et al., 1990; Sch\u00fctze, 1993) or connectionist based approaches (Berg, 1991; Miikkulainen and Dyer, 1991; Jain, 1991; Wermter and Weber, 1994). The main problem when using statistical approaches for spoken language processing is the large amount of data required to train these models. All connectionist approaches, to our knowledge, have suffered from one or more of the following problems: One, the parses contain no or too few linguistic attributes to be used in translation or understanding, and/or it is not shown how to use their parse formalism in a total NLP system. Two, no clear and quantitative statement about overall performance is made. Three, the approach has not been evaluated with real-world data, but with highly regular sentences. Four, millions of training sentences are required.", "cite_spans": [ { "start": 78, "end": 98, "text": "(Brown et al., 1990;", "ref_id": "BIBREF1" }, { "start": 99, "end": 114, "text": "Sch\u00fctze, 1993)", "ref_id": "BIBREF13" }, { "start": 149, "end": 161, "text": "(Berg, 1991;", "ref_id": "BIBREF0" }, { "start": 162, "end": 190, "text": "Miikkulainen and Dyer, 1991;", "ref_id": "BIBREF10" }, { "start": 191, "end": 202, "text": "Jain, 1991;", "ref_id": "BIBREF7" }, { "start": 203, "end": 227, "text": "Wermter and Weber, 1994)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a parser that produces complex feature structures, as known from e.g. GPSG (Gazdar et al., 1985). This parser requires only minor hand labeling, and learns the parsing task itself. It generalizes well, and is robust towards spontaneous effects and speech recognition errors.", "cite_spans": [ { "start": 101, "end": 122, "text": "(Gazdar et al., 1985)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The parser is trained and evaluated with the Spontaneous Scheduling Task, which is a negotiation situation, in which two subjects have to decide on time and place for a meeting. The subjects' calendars have conflicts, so that a few suggestions have to go back and forth before finding a time slot suitable for both. The data sets are real-world data, containing spontaneous speech effects. The training set consists of 560 sentences, the development test set of 65 sentences, and the unseen evaluation set of 120 sentences. For clarity, the example sentences in this paper are among the simpler in the training set. 
The parser is trained with transcribed data only, but evaluated with transcribed and speech data (including speech recognition errors). The parser produces feature structures holding semantic information. Feature structures are used as interlingua in the JANUS speech-to-speech translation system (Woszczyna et al., 1994). Within our research team, the design of the interlingua ILT was determined by the needs of unification-based parser and generator writers. Consequently, the ILT design was not tuned towards connectionist systems. On the contrary, our parser must learn the form of the output provided by a unification-based parser.", "cite_spans": [ { "start": 918, "end": 932, "text": "(Woszczyna et al., 1994)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows: First, a short tutorial on feature structures, and how to build them. Second, we describe the parser architecture and how it works. Third, we describe the lexicon. Fourth, we describe the parser's neural aspects. Fifth, a search algorithm is motivated. Then results and conclusion follow.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Feature structures (Gazdar et al., 1985; Pollard and Sag, 1987) consist of branches and relations: sets of feature pairs with atomic values make up the branches, and the branches are connected with relations. Atomic feature pairs belonging to the same branch have the same relation to all other branches. Further, when comparing the sentence with its feature structure, it appears that there is a correspondence between fragments of the feature structure and specific chunks of the sentence. In the example feature structure of Figure 1, the following observations about feature pairs and relations apply:", "cite_spans": [ { "start": 19, "end": 40, "text": "(Gazdar et al., 1985;", "ref_id": "BIBREF5" }, { "start": 41, "end": 63, "text": "Pollard and Sag, 1987)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 411, "end": 419, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "\u2022 feature pairs: the pair (day 27) corresponds to the chunk \"the twenty seventh\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "\u2022 relations: the complex value of the feature topic corresponds to the chunk \"by monday\", and the complex value of the feature clarified corresponds to \"you mean monday the twenty seventh\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "Manually aligning the sentence with fragments of the feature structure gives a structure as shown in Figure 2. A few comments apply to this figure:", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 110, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "\u2022 The sentence is hierarchically split into chunks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "\u2022 Feature pairs are listed with their corresponding chunk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "\u2022 Relations are shown in square brackets, and express how a chunk relates to its parent chunk. Relations may contain more than one element. 
This allows several nesting levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "Once the information in Figure 2 has been obtained, producing a feature structure is straightforward, using the algorithm of Figure 3. Summing up, we can define this procedure as the chunk'n'label principle of parsing: Figure 2: Chunk parse: Sentence aligned with its feature structure (see text for explanation).", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 46, "text": "Figure", "ref_id": null }, { "start": 124, "end": 132, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "1. Split the incoming sentence into hierarchical chunks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "2. Label each chunk with feature pairs and feature relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "3. Convert this into a feature structure, using the algorithm of Figure 3. Figure 3: Algorithm for converting a parse to a feature structure.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 73, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Feature Structures", "sec_num": "2" }, { "text": "The chunk'n'label principle has a few theoretical limitations compared with the feature structure formalisms commonly used in unification-based parsing, e.g. (Gazdar et al., 1985).", "cite_spans": [ { "start": 158, "end": 179, "text": "(Gazdar et al., 1985)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Theoretical Limitations", "sec_num": "3.1" }, { "text": "With the chunk'n'label principle, the feature structure has a maximum nesting depth. One could expect this maximum nesting depth to cause limitations. However, these limitations are only theoretical, because very deep nesting is hardly ever needed in practice for spoken language. Due to the ability to model relations of more than length 1, no nesting depth problems occurred while modeling over 600 sentences from the English Spontaneous Scheduling Task (ESST).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Depth", "sec_num": "3.1.1" }, { "text": "Many unification formalisms allow feature values to be shared. The chunk'n'label principle does not incorporate any mechanism for this. However, all work with ESST and ILT showed empirically that there is no need for structure sharing. This observation suggests that for semantic analysis, structure sharing is statistically insignificant, even if it is theoretically possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structure Sharing", "sec_num": "3.1.2" }, { "text": "Baseline Parser", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "The chunk'n'label principle is the basis for the design and implementation of the FeasPar parser. FeasPar uses neural networks to learn to produce chunk parses. It has two modes: learn mode and run mode. In learn mode, manually modeled chunk parses are split into several separate training sets, one per neural network. Then, the networks are trained independently of each other, allowing for parallel training on several CPUs. 
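To make this learn-mode data flow concrete, the following minimal Python sketch shows how hand-modeled chunk parses might be split into one training set per network; the chunk interface used here (chunks(), words, boundaries, feature_pairs, relation) is a hypothetical illustration, not FeasPar's actual code.

    from collections import defaultdict

    def split_into_training_sets(chunk_parses):
        # One training set per network: chunker networks per chunk level,
        # labeler networks per (feature, level), and relation finder
        # networks per (level, relation element position). All interface
        # names below are assumptions for illustration.
        training_sets = defaultdict(list)
        for parse in chunk_parses:
            for chunk in parse.chunks():  # hypothetical accessor
                training_sets[("chunker", chunk.level)].append(
                    (chunk.words, chunk.boundaries))
                for feature, value in chunk.feature_pairs:
                    training_sets[("labeler", feature, chunk.level)].append(
                        (chunk.words, value))
                for i, element in enumerate(chunk.relation):
                    training_sets[("relation", chunk.level, i)].append(
                        (chunk.words, element))
        return training_sets

Each resulting training set then trains one network, which is what allows the networks to be trained independently and in parallel.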
In run mode, the input sentence is processed through all networks, giving a chunk parse, which is passed on to the converting algorithm shown in Figure 3. In the following, the three main modules required to produce a chunk parse are described:", "cite_spans": [], "ref_spans": [ { "start": 634, "end": 642, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "The Chunker splits an input sentence into chunks. It consists of three neural networks. The first network finds numbers. They are classified as being ordinal or cardinal numbers, and are presented as words to the following networks. The next network groups words together into phrases. The third network groups phrases together into clauses. In total, there are four levels of chunks: words/numbers, phrases, clauses and sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "The Linguistic Feature Labeler attaches features and atomic feature values (if applicable) to these chunks. For each feature, there is a network, which finds one or zero atomic values. Since there are many features, each chunk may get none, one or several pairs of features and atomic values. Since a feature normally occurs only at a certain chunk level, each network is tailored to decide on a particular feature at a particular chunk level. This specialization is there to prevent the learning task from becoming too complex. A special atomic feature value is called a lexical feature value. It is indicated by '=' and means that the neural network only detects the occurrence of a value, whereas the value itself is found by a lexicon lookup. The lexical feature values are a true hybrid mechanism, where symbolic knowledge is included when the neural network signals so. Furthermore, features may be marked as up-features (e.g. /incl-excl in Figures 4 and 5). An up-feature is propagated up to its parent branch when building the feature structure (see Figure 6).", "cite_spans": [], "ref_spans": [ { "start": 943, "end": 951, "text": "Figure 4", "ref_id": null }, { "start": 1053, "end": 1062, "text": "Figure 6)", "ref_id": null } ], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "The Chunk Relation Finder determines how a chunk relates to its parent chunk. It has one network per chunk level and chunk relation element.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "The following example illustrates in detail how the three parts work. For clarity, this example assumes that all networks perform perfectly. The parser gets the English sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "\"i have a meeting till twelve\" The Chunker segments the sentence before passing it to the Linguistic Feature Labeler, which adds semantic labels (see Figure 4). The Chunk Relation Finder then adds relations, where appropriate, and we get the chunk parse as shown in Figure 5. 
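Before the final conversion step, a minimal Python rendering of the Figure 3 algorithm may help. The Chunk triple below and the exact placement of the labels are illustrative assumptions reconstructed from Figures 4-6; only the conversion logic itself follows the pseudocode of Figure 3.

    from collections import namedtuple

    # Assumed representation: a chunk carries its relation (a path of
    # relation elements), its feature pairs, and its child chunks.
    Chunk = namedtuple("Chunk", "relation feature_pairs children")

    def chunk_parse_to_fs(top_level_chunk):
        # Corresponds to: S := empty set; assign(S, top_level_chunk); return S.
        fs = {}
        assign(fs, top_level_chunk)
        return fs

    def assign(fs, chunk):
        # Descend one nesting level per relation element of the chunk.
        for element in chunk.relation:
            fs = fs.setdefault(element, {})
        # Include the chunk's own feature pairs at the current level.
        for feature, value in chunk.feature_pairs:
            fs[feature] = value
        # Recurse into the child chunks, relative to the current level.
        for child in chunk.children:
            assign(fs, child)

    # Fragment of the chunk parse for "i have a meeting till twelve":
    who = Chunk(["who"], [("frame", "*i")], [])
    clause = Chunk([], [("frame", "*booked")], [who])
    sentence = Chunk([], [("speech-act", "*state-constraint"),
                          ("sentence-type", "*state")], [clause])
    print(chunk_parse_to_fs(sentence))
    # {'speech-act': '*state-constraint', 'sentence-type': '*state',
    #  'frame': '*booked', 'who': {'frame': '*i'}}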
Finally, processing it with the algorithm in Figure 3 gives the final parse, the feature structure, as shown in Figure 6.", "cite_spans": [], "ref_spans": [ { "start": 150, "end": 159, "text": "Figure 4)", "ref_id": null }, { "start": 267, "end": 275, "text": "Figure 5", "ref_id": null }, { "start": 321, "end": 329, "text": "Figure 3", "ref_id": null }, { "start": 390, "end": 398, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "FeasPar uses a full word form lexicon. The lexicon consists of three parts: first, a syntactic and semantic microfeature vector per word; second, lexical feature values; and third, statistical microfeatures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon", "sec_num": "4.1" }, { "text": "Syntactic and semantic microfeatures are represented for each word as a vector of binary values. These vectors are used as input to the neural networks. As the neural networks learn their tasks based on the microfeatures, and not based on distinct words, adding new words using the same microfeatures is easy and does not degrade generalization performance. The number and selection of microfeatures are domain dependent and must be decided manually. For ESST, the lexicon contains domain independent syntactic and domain dependent semantic microfeatures. Manually modeling a 600 word ESST vocabulary requires 3 full days.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon", "sec_num": "4.1" }, { "text": "Lexical feature values are stored in look-up tables, which are accessed when the Linguistic Feature Labeler indicates a lexical feature value. These tables are generated automatically from the training data, and can easily be extended by hand for more generality and new words. An automatic ambiguity checker warns if similar words or phrases map to ambiguous lexical feature values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon", "sec_num": "4.1" }, { "text": "Statistical microfeatures are represented for each word as a vector of continuous values v_stat. These microfeatures, each of them representing a feature pair, are extracted automatically: a statistical microfeature is created for every feature value at a certain chunk level for which there exists a word such that, given this word in the training data, the feature value occurs in more than 50 % of the cases. The continuous microfeature value v_stat for a word w is set automatically to the percentage of feature value occurrences given that word w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon", "sec_num": "4.1" }, { "text": "All neural networks have one hidden layer and are conventional feed-forward networks. The learning is done with standard back-propagation, combined with the constructive learning algorithm PCL (Jain, 1991), where learning starts using a small context, which is increased later in the learning process. This causes local dependencies to be learned first. Generalization performance is increased by sparse connectivity. This connection principle is based on the microfeatures in the lexicon that are relevant to a particular network. The Chunker networks are connected only to the syntactic microfeatures, because chunking is a syntactic task. With ESST, the Linguistic Feature Labeler and Chunk Relation Finder networks are connected only to the semantic microfeatures, and to relevant statistical microfeatures. All connectivity setup is automatic. 
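As an illustration of this automatic setup, here is a minimal sketch, assuming each lexicon microfeature carries a kind tag; the Microfeature type and the input_mask function are hypothetical names for illustration, not FeasPar's code.

    from collections import namedtuple

    # Hypothetical microfeature record: a name plus a kind tag
    # ("syntactic", "semantic" or "statistical").
    Microfeature = namedtuple("Microfeature", "name kind")

    def input_mask(network_kind, microfeatures, relevant_stats=frozenset()):
        # Return a 0/1 mask over the lexicon vector: Chunker networks see
        # only syntactic microfeatures; Labeler and Relation Finder networks
        # see semantic microfeatures plus their relevant statistical ones.
        mask = []
        for mf in microfeatures:
            if network_kind == "chunker":
                connected = mf.kind == "syntactic"
            else:
                connected = (mf.kind == "semantic" or
                             (mf.kind == "statistical" and
                              mf.name in relevant_stats))
            mask.append(1 if connected else 0)
        return mask

A mask of this kind would zero out the connections from irrelevant lexicon inputs, which is one plausible way to realize the sparse connectivity described above.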
Further techniques for improving performance are described in (Bu\u00f8, 1996). For the neural networks, the average test set performance is 95.4 %.", "cite_spans": [ { "start": 196, "end": 208, "text": "(Jain, 1991)", "ref_id": "BIBREF7" }, { "start": 915, "end": 926, "text": "(Bu\u00f8, 1996)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Architecture and Training", "sec_num": "4.2" }, { "text": "The complete parse depends on many neural networks. Most networks have a certain error rate; only a few networks are perfect. When building complete feature structures, these network errors multiply, with the result that many feature structures are not only erroneous, but also inconsistent and nonsensical. To compensate for this, we wrote a search algorithm. It is based on two information sources: first, scores that originate from the network output activations; second, a formal feature structure specification, stating which combinations of feature pairs are consistent. This specification was already available as an interlingua specification document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search", "sec_num": "5" }, { "text": "Using these two information sources, the search finds the feature structure with the highest score, under the constraint of being consistent. The search is described in more detail in (Bu\u00f8 and Waibel, 1996; Bu\u00f8, 1996). FeasPar is compared with a handmodeled LR-parser. The handmodeling effort for FeasPar was 2 weeks; the handmodeling effort for the LR-parser was 4 months.", "cite_spans": [ { "start": 184, "end": 206, "text": "(Bu\u00f8 and Waibel, 1996;", "ref_id": "BIBREF2" }, { "start": 207, "end": 217, "text": "Bu\u00f8, 1996)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Search", "sec_num": "5" }, { "text": "The evaluation environment is the JANUS speech translation system for the Spontaneous Scheduling Task. The system has one parser and one generator per language. All parsers and generators are written using CMU's GLR/GLR* system (Lavie and Tomita, 1993). They all share the same interlingua, ILT, which is a special case of LFG or feature structures.", "cite_spans": [ { "start": 229, "end": 253, "text": "(Lavie and Tomita, 1993)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Search", "sec_num": "5" }, { "text": "All performance measures are run with transcribed (T) sentences and with speech (S) sentences containing speech recognition errors. Performance measure 1 is the feature accuracy, where all features of a parser-made feature structure are compared with the features of the correct handmodeled feature structure. Performance measures 2 and 3 are the end-to-end translation ratios for acceptable nontrivial sentences achieved when LR-generators are used as back-ends of the parsers. Performance measure 2 uses an English LR-generator (handmodeled for 2 years), providing results for English-to-English translation, whereas performance measure 3 uses a German LR-generator (handmodeled for 6 months), hence providing results for English-to-German translation. 
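The paper does not spell out the exact feature accuracy metric; one plausible reading, sketched in Python under the assumption that feature structures are nested dicts, is to compare the sets of path-qualified feature pairs:

    def flatten(fs, path=()):
        # Yield (path, feature, value) triples for every atomic feature
        # pair in a nested feature structure (a dict of dicts).
        for feature, value in fs.items():
            if isinstance(value, dict):
                yield from flatten(value, path + (feature,))
            else:
                yield (path, feature, value)

    def feature_accuracy(parsed, reference):
        # Fraction of the reference's feature pairs the parser recovered;
        # an illustrative assumption, not necessarily the measure of Figure 7.
        hyp, ref = set(flatten(parsed)), set(flatten(reference))
        return len(hyp & ref) / len(ref) if ref else 1.0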
Results for an unseen, independent evaluation set are shown in Figure 7.", "cite_spans": [], "ref_spans": [ { "start": 803, "end": 811, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Search", "sec_num": "5" }, { "text": "As we see, FeasPar is better than the LR-parser in all six comparison performance measures made.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search", "sec_num": "5" }, { "text": "We described and experimentally evaluated a system, FeasPar, that learns to parse spontaneous speech. To train and run FeasPar (Feature Structure Parser), only limited handmodeled knowledge is required (chunk parses and a lexicon).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "FeasPar is based on a principle of chunks, their features and relations. The FeasPar architecture consists of two major parts: a neural network collection and a search. The neural networks first split the incoming sentence into chunks. Then each chunk is labeled with feature values and chunk relations. Finally, the search uses a formal feature structure specification as constraint, and outputs the most probable and consistent feature structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "FeasPar was trained, tested and evaluated with the Spontaneous Scheduling Task, and compared with a handmodeled LR-parser. FeasPar performed better than the LR-parser in all six comparison performance measures that were made.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning Recursive Phrase Structure: Combining the Strengths of PDP and X-Bar Syntax", "authors": [ { "first": "George", "middle": [], "last": "Berg", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Berg. 1991. Learning Recursive Phrase Structure: Combining the Strengths of PDP and X-Bar Syntax. Technical report TR 91-5, Dept. of Computer Science, University at Albany, State University of New York.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Statistical Approach To Machine Translation", "authors": [ { "first": "Peter", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "John", "middle": [], "last": "Cocke", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "Fredrick", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" }, { "first": "Paul", "middle": [ "S" ], "last": "Roossin", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "2", "pages": "79--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Fredrick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A Statistical Approach To Machine Translation. 
Computational Linguistics, 16(2):79-85, June.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Search in a Learnable Spoken Language Parser", "authors": [ { "first": "Finn", "middle": [ "Dag" ], "last": "Bu\u00f8", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 12th European Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finn Dag Bu\u00f8 and Alex Waibel. 1996. Search in a Learnable Spoken Language Parser. In Proceedings of the 12th European Conference on Artificial Intelligence, August.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "FeasPar - A Feature Structure Parser Learning to Parse Spontaneous Speech", "authors": [ { "first": "Finn", "middle": [ "Dag" ], "last": "Bu\u00f8", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finn Dag Bu\u00f8. 1996. FeasPar - A Feature Structure Parser Learning to Parse Spontaneous Speech. Ph.D. thesis, University of Karlsruhe, upcoming.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Gemini: A Natural Language System for Spoken-Language Understanding", "authors": [ { "first": "J", "middle": [], "last": "Dowding", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Gawron", "suffix": "" }, { "first": "D", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "J", "middle": [], "last": "Bear", "suffix": "" }, { "first": "L", "middle": [], "last": "Cherny", "suffix": "" }, { "first": "R", "middle": [], "last": "Moore", "suffix": "" }, { "first": "D", "middle": [], "last": "Moran", "suffix": "" } ], "year": 1993, "venue": "Proceedings ARPA Workshop on Human Language Technology", "volume": "", "issue": "", "pages": "43--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Dowding, J. M. Gawron, D. Appelt, J. Bear, L. Cherny, R. Moore, and D. Moran. 1993. Gemini: A Natural Language System for Spoken-Language Understanding. In Proceedings ARPA Workshop on Human Language Technology, pages 43-48, Princeton, New Jersey, March. Morgan Kaufmann Publisher.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A theory of syntactic features", "authors": [ { "first": "G", "middle": [], "last": "Gazdar", "suffix": "" }, { "first": "E", "middle": [], "last": "Klein", "suffix": "" }, { "first": "G", "middle": [ "K" ], "last": "Pullum", "suffix": "" }, { "first": "I", "middle": [ "A" ], "last": "Sag", "suffix": "" } ], "year": 1985, "venue": "Generalized Phrase Structure Grammar", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Gazdar, E. Klein, G. K. Pullum, and I. A. Sag. 1985. A theory of syntactic features. In Generalized Phrase Structure Grammar, chapter 2. Blackwell Publishing, Oxford, England and Harvard University Press, Cambridge, MA, USA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "CMU's robust spoken language understanding system", "authors": [ { "first": "Sunil", "middle": [], "last": "Issar", "suffix": "" }, { "first": "Wayne", "middle": [], "last": "Ward", "suffix": "" } ], "year": 1993, "venue": "Proceedings of Eurospeech", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunil Issar and Wayne Ward. 1993. CMU's robust spoken language understanding system. 
In Proceedings of Eurospeech.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A Connectionist Learning Architecture for Parsing Spoken Language", "authors": [ { "first": "Ajay", "middle": [ "N" ], "last": "Jain", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ajay N. Jain. 1991. A Connectionist Learning Architecture for Parsing Spoken Language. Ph.D. thesis, School of Computer Science, Carnegie Mellon University, Dec.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Lexical-Functional Grammar: A Formal System for Grammatical Representation", "authors": [ { "first": "R", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "J", "middle": [], "last": "Bresnan", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "173--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Kaplan and J. Bresnan. 1982. Lexical-Functional Grammar: A Formal System for Grammatical Representation. In J. Bresnan, editor, The Mental Representation of Grammatical Relations, pages 173-281. The MIT Press, Cambridge, MA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "GLR* - An Efficient Noise-skipping Parsing Algorithm for Context-free Grammars", "authors": [ { "first": "A", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "M", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1993, "venue": "Proceedings of Third International Workshop on Parsing Technologies", "volume": "", "issue": "", "pages": "123--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Lavie and M. Tomita. 1993. GLR* - An Efficient Noise-skipping Parsing Algorithm for Context-free Grammars. In Proceedings of Third International Workshop on Parsing Technologies, pages 123-134.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Natural Language Processing With Modular PDP Networks and Distributed Lexicon", "authors": [ { "first": "R", "middle": [], "last": "Miikkulainen", "suffix": "" }, { "first": "M", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 1991, "venue": "Cognitive Science", "volume": "15", "issue": "", "pages": "343--399", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Miikkulainen and M. Dyer. 1991. Natural Language Processing With Modular PDP Networks and Distributed Lexicon. Cognitive Science, 15:343-399.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Formal Foundations", "authors": [ { "first": "C", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "I", "middle": [], "last": "Sag", "suffix": "" } ], "year": 1987, "venue": "An Information-Based Syntax and Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Pollard and I. Sag. 1987. Formal Foundations. In An Information-Based Syntax and Semantics, chapter 2. CSLI Lecture Notes No. 13.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "TINA: A Natural Language System for Spoken Language Applications", "authors": [ { "first": "Stephanie", "middle": [], "last": "Seneff", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephanie Seneff. 1992. TINA: A Natural Language System for Spoken Language Applications. Computational Linguistics, 18(1).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Translation by Confusion", "authors": [ { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1993, "venue": "Spring Symposium on Machine Translation. AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinrich Sch\u00fctze. 1993. Translation by Confusion. In Spring Symposium on Machine Translation. AAAI.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning Fault-tolerant Speech Parsing with SCREEN", "authors": [ { "first": "Stefan", "middle": [], "last": "Wermter", "suffix": "" }, { "first": "Volker", "middle": [], "last": "Weber", "suffix": "" } ], "year": 1994, "venue": "Proceedings of Twelfth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Wermter and Volker Weber. 1994. Learning Fault-tolerant Speech Parsing with SCREEN. In Proceedings of Twelfth National Conference on Artificial Intelligence, Seattle.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "JANUS 93: Towards Spontaneous Speech Translation", "authors": [ { "first": "M", "middle": [], "last": "Woszczyna", "suffix": "" }, { "first": "N", "middle": [], "last": "Aoki-Waibel", "suffix": "" }, { "first": "F", "middle": [ "D" ], "last": "Bu\u00f8", "suffix": "" }, { "first": "N", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "K", "middle": [], "last": "Horiguchi", "suffix": "" }, { "first": "T", "middle": [], "last": "Kemp", "suffix": "" }, { "first": "A", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "A", "middle": [], "last": "McNair", "suffix": "" }, { "first": "T", "middle": [], "last": "Polzin", "suffix": "" }, { "first": "I", "middle": [], "last": "Rogina", "suffix": "" }, { "first": "C", "middle": [ "P" ], "last": "Rose", "suffix": "" }, { "first": "T", "middle": [], "last": "Schultz", "suffix": "" }, { "first": "B", "middle": [], "last": "Suhm", "suffix": "" }, { "first": "M", "middle": [], "last": "Tomita", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 1994, "venue": "International Conference on Acoustics, Speech & Signal Processing", "volume": "1", "issue": "", "pages": "345--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Woszczyna, N. Aoki-Waibel, F. D. Bu\u00f8, N. Coccaro, K. Horiguchi, T. Kemp, A. Lavie, A. McNair, T. Polzin, I. Rogina, C. P. Rose, T. Schultz, B. Suhm, M. Tomita, and A. Waibel. 1994. JANUS 93: Towards Spontaneous Speech Translation. In International Conference on Acoustics, Speech & Signal Processing, pages 345-348, vol. 1, Adelaide, Australia, April. IEEE.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Feature structure with the meaning \"by monday i assume you mean monday the twenty seventh\"", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "S := empty set; assign(S, top_level_chunk); return(S); END; PROCEDURE assign(VAR S: set; C: chunk); BEGIN P := chunk_relation(C); FOR each relation element PE in P BEGIN S' := empty set; include (PE, S') in S; S := S'; END; FOR each feature pair FP in C include FP in S; FOR each chunk C' in C assign(S, C'); END;", "uris": null, "type_str": "figure" }, "FIGREF3": { "num": null, "text": "Figure 4: Chunked and labeled sentence (labels shown in boldface), e.g. ((speech-act *state-constraint) (sentence-type *state)) ((frame *booked)) ([who] ((frame *i))). Figure 5: Chunk parse (chunk relations shown in boldface). Figure 6: Feature structure parse, e.g. (frame *interval) (end ((frame *simple-time) (hour 12))).", "uris": null, "type_str": "figure" } } } }