{ "paper_id": "C98-1021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:28:47.078648Z" }, "title": "Spoken Dialogue Interpretation with the DOP Model", "authors": [ { "first": "Rens", "middle": [], "last": "Bod", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam", "location": { "addrLine": "Spuistraat 134", "postCode": "1012 VB", "settlement": "Amsterdam" } }, "email": "rens.bod@let.uva.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We show how the DOP model can be used for fast and robust processing of spoken input in a practical spoken dialogue system called OVIS. OVIS, Openbaar Vervoer Informatie Systeem (\"Public Transport Information System\"), is a Dutch spoken language information system which operates over ordinary telephone lines. The prototype system is the immediate goal of the NWO 1 Priority Programme \"Language and Speech Technology\". In this paper, we extend the original DOP model to context-sensitive interpretation of spoken input. The system we describe uses the OVIS corpus (10,000 trees enriched with compositional semantics) to compute from an input word-graph the best utterance together with its meaning. Dialogue context is taken into account by dividing up the OVIS corpus into context-dependent subcorpora. Each system question triggers a subcorpus by which the user answer is analyzed and interpreted. Our experiments indicate that the context-sensitive DOP model obtains better accuracy than the original model, allowing for fast and robust processing of spoken input.", "pdf_parse": { "paper_id": "C98-1021", "_pdf_hash": "", "abstract": [ { "text": "We show how the DOP model can be used for fast and robust processing of spoken input in a practical spoken dialogue system called OVIS. OVIS, Openbaar Vervoer Informatie Systeem (\"Public Transport Information System\"), is a Dutch spoken language information system which operates over ordinary telephone lines. The prototype system is the immediate goal of the NWO 1 Priority Programme \"Language and Speech Technology\". In this paper, we extend the original DOP model to context-sensitive interpretation of spoken input. The system we describe uses the OVIS corpus (10,000 trees enriched with compositional semantics) to compute from an input word-graph the best utterance together with its meaning. Dialogue context is taken into account by dividing up the OVIS corpus into context-dependent subcorpora. Each system question triggers a subcorpus by which the user answer is analyzed and interpreted. Our experiments indicate that the context-sensitive DOP model obtains better accuracy than the original model, allowing for fast and robust processing of spoken input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The Data-Oriented Parsing (DOP) model (cf. Bod 1992 Bod , 1995 Bod & Kaplan 1998; Scha 1992; Sima'an 1995 Sima'an , 1997 Rajman 1995) is a probabilistic parsing model which does not single out a narrowly predefined set of structures as the statistically significant ones. It accomplishes this by maintaining a large corpus of analyses of previously occurring utterances. New utterances are analyzed by combining subtrees from the corpus. 
The occurrence-frequencies of the subtrees are used to estimate the most probable analysis of an utterance.", "cite_spans": [ { "start": 43, "end": 51, "text": "Bod 1992", "ref_id": "BIBREF2" }, { "start": 52, "end": 62, "text": "Bod , 1995", "ref_id": "BIBREF3" }, { "start": 63, "end": 81, "text": "Bod & Kaplan 1998;", "ref_id": "BIBREF6" }, { "start": 82, "end": 92, "text": "Scha 1992;", "ref_id": "BIBREF19" }, { "start": 93, "end": 105, "text": "Sima'an 1995", "ref_id": "BIBREF20" }, { "start": 106, "end": 120, "text": "Sima'an , 1997", "ref_id": "BIBREF21" }, { "start": 121, "end": 133, "text": "Rajman 1995)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To date, DOP has mainly been applied to corpora of trees labeled with syntactic annotations. Let us illustrate this with a very simple example. Suppose that a corpus consists of only two trees:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(1) DOP computes the probability of substituting a subtree t on a specific node as the probability of selecting t among all subtrees in the corpus that could be substituted on that node. This probability is equal to the number of occurrences of t, divided by the total number of occurrences of subtrees t' with the same root label as t. Let rl(t) return the root label of t; then: P(t) = #(t) / Σ_{t': rl(t') = rl(t)} #(t'). The probability of a derivation is computed as the product of the probabilities of the subtrees it consists of. The probability of a parse tree is computed as the sum of the probabilities of all derivations that produce that parse tree. Bod (1992) demonstrated that DOP can be implemented using conventional context-free parsing techniques. However, the computation of the most probable parse of a sentence is NP-hard (Sima'an 1996). The most probable parse can be estimated by iterative Monte Carlo sampling (Bod 1995), but efficient algorithms exist only for sub-optimal solutions such as the most likely derivation of a sentence (Bod 1995, Sima'an 1995) or the \"labelled recall parse\" of a sentence (Goodman 1996). So far, the syntactic DOP model has been tested on the ATIS corpus and the Wall Street Journal corpus, obtaining significantly better test results than other stochastic parsers (Charniak 1996). For example, Goodman (1998) compares the results of his DOP parser to a replication of Pereira & Schabes (1992) on the same training and test data. While the Pereira & Schabes method achieves 79.2% zero-crossing brackets accuracy, DOP obtains 86.1% on the same data (Goodman 1998: p. 179, table 4.4). Thus the DOP method outperforms the Pereira & Schabes method with an accuracy-increase of 6.9%, or an error reduction of 33%. 
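The counting scheme can be made concrete with a minimal Python sketch (a toy illustration under the definitions above, not the parser used in OVIS; the tree encoding and all function names are our own). It extracts every subtree of the two corpus trees of example (1), estimates P(t) by relative frequency per root label, and scores derivation (2) of Mary likes Susan:

```python
from collections import Counter
from itertools import product

# The two corpus trees of example (1). A node is (label, child, ...);
# a word is a plain string; an open substitution site is a 1-tuple (label,).
t1 = ("S", ("NP", "John"), ("VP", ("V", "likes"), ("NP", "Mary")))
t2 = ("S", ("NP", "Peter"), ("VP", ("V", "hates"), ("NP", "Susan")))

def rooted(node):
    """All fragments rooted at node: each nonterminal child is either
    left open as a substitution site or expanded further."""
    options = []
    for child in node[1:]:
        if isinstance(child, str):            # terminal word: always kept
            options.append([child])
        else:                                 # open site, or any expansion
            options.append([(child[0],)] + rooted(child))
    return [(node[0],) + combo for combo in product(*options)]

def subtrees(tree):
    """Fragments rooted at every nonterminal node of the tree."""
    found = rooted(tree)
    for child in tree[1:]:
        if not isinstance(child, str):
            found += subtrees(child)
    return found

counts = Counter(f for t in (t1, t2) for f in subtrees(t))
per_root = Counter()
for fragment, n in counts.items():
    per_root[fragment[0]] += n

def p(fragment):
    """P(t) = #(t) / sum of #(t') over subtrees t' with rl(t') = rl(t)."""
    return counts[fragment] / per_root[fragment[0]]

# Derivation (2) of "Mary likes Susan":
# [S [NP] [VP [V likes] [NP]]] o [NP Mary] o [NP Susan]
derivation = [("S", ("NP",), ("VP", ("V", "likes"), ("NP",))),
              ("NP", "Mary"), ("NP", "Susan")]
prob = 1.0
for fragment in derivation:
    prob *= p(fragment)                       # derivation prob = product
print(prob)                                   # 1/20 * 1/4 * 1/4 = 0.003125
```

Summing such products over all derivations that produce the same parse tree gives the parse probability described above.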
Goodman also performs a statistical analysis using a t-test, showing that the differences are statistically significant beyond the 98th percentile.", "cite_spans": [ { "start": 413, "end": 417, "text": "(t')", "ref_id": null }, { "start": 657, "end": 667, "text": "Bod (1992)", "ref_id": "BIBREF2" }, { "start": 830, "end": 852, "text": "NP-hard (Sima'an 1996)", "ref_id": null }, { "start": 930, "end": 940, "text": "(Bod 1995)", "ref_id": "BIBREF3" }, { "start": 1054, "end": 1063, "text": "(Bod 1995", "ref_id": "BIBREF3" }, { "start": 1064, "end": 1078, "text": ", Sima'an 1995", "ref_id": "BIBREF20" }, { "start": 1124, "end": 1138, "text": "(Goodman 1996)", "ref_id": "BIBREF12" }, { "start": 1318, "end": 1333, "text": "(Charniak 1996)", "ref_id": "BIBREF11" }, { "start": 1349, "end": 1363, "text": "Goodman (1998)", "ref_id": null }, { "start": 1423, "end": 1447, "text": "Pereira & Schabes (1992)", "ref_id": "BIBREF17" }, { "start": 1602, "end": 1635, "text": "(Goodman 1998: p. 179, table 4.4)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In Bod et al. (1996), it was shown how DOP can be generalized to semantic interpretation by using corpora annotated with compositional semantics. In the current paper, we extend the DOP model to spoken dialogue understanding, and we show how it can be used as an efficient and robust NLP component in a practical spoken dialogue system called OVIS. OVIS, Openbaar Vervoer Informatie Systeem (\"Public Transport Information System\"), is a Dutch spoken language information system which operates over ordinary telephone lines. The prototype system is the immediate goal of the NWO Priority Programme \"Language and Speech Technology\".", "cite_spans": [ { "start": 3, "end": 20, "text": "Bod et al. (1996)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The backbone of any DOP model is an annotated language corpus. In the following section, we therefore start with a description of the corpus that was developed for the OVIS system, the \"OVIS corpus\". We then show how this corpus can be used by DOP to compute the most likely meaning M of a word string W: argmaxM P(M, W). Next we demonstrate how the dialogue context C can be integrated so as to compute argmaxM P(M, W | C). Finally, we interface DOP with speech and show how the most likely meaning M of an acoustic utterance A given dialogue context C is computed: argmaxM P(M, A | C). The last section of this paper deals with the experimental evaluation of the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The OVIS corpus currently consists of 10,000 syntactically and semantically annotated user utterances that were collected on the basis of a pilot version of the OVIS system (a system based on that of Aust et al. 1995, adapted to Dutch). For the syntactic annotations, a tag set was developed. This tag set was deliberately kept small so as to improve the robustness of the DOP parser. A correlate of this robustness is that the parser will overgenerate, but as long as the probability model can accurately select the correct utterance-analysis from all possible analyses, this overgeneration is not problematic. Robustness is further achieved by a special category, called ERROR. This category is used for stutters, false starts, and repairs. 
No grammar is used to determine the correct syntactic annotation; there is a small set of guidelines that has the degree of detail necessary to avoid an \"anything goes\" attitude in the annotator, but leaves room for the annotator's perception of the structure of the utterance (see Bonnema et al. 1997).", "cite_spans": [ { "start": 176, "end": 194, "text": "(Aust et al. 1995)", "ref_id": "BIBREF0" }, { "start": 964, "end": 984, "text": "Bonnema et al. 1997)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The OVIS corpus: trees enriched with compositional frame semantics", "sec_num": "2." }, { "text": "The semantic annotations are based on the update language defined for the OVIS dialogue manager by Veldhuijzen van Zanten (1996). This language consists of a hierarchical frame structure with slots and values for the origin and destination of a train connection, for the time at which the user wants to arrive or depart, etc. The distinction between slots and values can be regarded as a special case of the ground and focus distinction (Vallduvi 1990). Updates specify the ground and focus of the user utterances. For example, the utterance Ik wil niet vandaag maar morgen naar Almere (literally: \"I want not today but tomorrow to Almere\") yields the following update: An important property of this update language is that it allows encoding of speech-act information (v. Noord et al. 1997). The \"#\" in the update means that the information between the square brackets (representing the focus of the user utterance) must be retracted, while the \"!\" denotes the corrected information. This update language is used to semantically enrich the syntactic nodes of the OVIS trees by means of the following annotation convention:", "cite_spans": [ { "start": 99, "end": 128, "text": "Veldhuijzen van Zanten (1996)", "ref_id": "BIBREF23" }, { "start": 435, "end": 450, "text": "(Vallduvi 1990)", "ref_id": "BIBREF22" }, { "start": 768, "end": 790, "text": "(v. Noord et al. 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The OVIS corpus: trees enriched with compositional frame semantics", "sec_num": "2." }, { "text": "\u2022 Every meaningful lexical node is annotated with a slot and/or value from the update language which represents the meaning of the lexical item.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The OVIS corpus: trees enriched with compositional frame semantics", "sec_num": "2." }, { "text": "\u2022 Every meaningful non-lexical node is annotated with a formula schema which indicates how its meaning representation can be put together out of the meaning representations assigned to its daughter nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The OVIS corpus: trees enriched with compositional frame semantics", "sec_num": "2." }, { "text": "In the examples below, these schemata use the variable d1 to indicate the meaning of the leftmost daughter constituent, d2 to indicate the meaning of the second daughter constituent, etc. For instance, the full (syntactic and semantic) annotation for the above sentence Ik wil niet vandaag maar morgen naar Almere is given in figure (5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The OVIS corpus: trees enriched with compositional frame semantics", "sec_num": "2." }, { "text": "Note that the top-node meaning of (5) is compositionally built up out of the meanings of its sub-constituents. 
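This bottom-up composition can be sketched in a few lines of Python (a simplified illustration assuming string-valued update expressions; the tree encoding and the helper meaning() are hypothetical, and the example tree collapses part of figure (5) to its destination expression):

```python
import re

# A node is (syntactic label, semantic annotation, children); a lexical
# node carries a slot/value, a non-lexical node a schema over d1, d2, ...
def meaning(node):
    label, sem, children = node
    if not children:
        return sem                      # lexical slot or value
    daughters = [meaning(c) for c in children]
    # substitute each daughter meaning for its variable d1, d2, ...
    return re.sub(r"d(\d+)", lambda m: daughters[int(m.group(1)) - 1], sem)

# Simplified annotated tree for "ik wil naar almere":
tree = ("S", "d1.d2",
        [("PER", "user", []),                      # ik
         ("VP", "d1.d2",
          [("V", "wants", []),                     # wil
           ("PNP", "d1.d2",
            [("P", "destination.place", []),       # naar
             ("NP", "town.almere", [])])])])       # almere

print(meaning(tree))  # -> user.wants.destination.place.town.almere
```

The schemata are substituted recursively, so the top-node update is exactly the composition of its sub-constituents' meanings.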
Substituting the meaning representations into the corresponding variables yields the update expression (4). The OVIS annotations are thus in contrast with other corpora and systems (e.g. Miller et al. 1996), in that our annotation convention exploits the Principle of Compositionality of Meaning. The manual annotation of 10,000 OVIS utterances may seem a laborious and error-prone process. In order to expedite this task, a flexible and powerful annotation workbench (SEMTAGS) was developed by Bonnema (1996). SEMTAGS is a graphical interface, written in C using the XVIEW toolkit. It offers all functionality needed for examining, evaluating, and editing syntactic and semantic analyses. SEMTAGS is mainly used for correcting the output of the DOP parser. After the first 100 OVIS utterances were annotated and checked by hand, the parser used the subtrees of these annotations to produce analyses for the next 100 OVIS utterances. These new analyses were checked and corrected by the annotator using SEMTAGS, and were added to the total set of annotations. This new set of 200 analyses was then used by the DOP parser to predict the analyses for a next subset of OVIS utterances. In this incremental, bootstrapping way, 10,000 OVIS utterances were annotated in approximately 600 hours (supervision included). For further information on OVIS and how to obtain the corpus, see http://earth.let.uva.nl/~rens.", "cite_spans": [ { "start": 293, "end": 312, "text": "Miller et al. 1996)", "ref_id": "BIBREF14" }, { "start": 511, "end": 525, "text": "Bonnema (1996)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "The OVIS corpus: trees enriched with compositional frame semantics", "sec_num": "2." }, { "text": "An important advantage of a corpus annotated according to the Principle of Compositionality of Meaning is that the subtrees can directly be used by DOP for computing syntactic/semantic representations for new utterances. The only difference is that we now have composite labels which do not only contain syntactic but also semantic information. By way of illustration, we show how a representation for the input utterance Ik wil van Venlo naar Almere (\"I want from Venlo to Almere\") can be constructed out of subtrees from the trees in figures (5) and (6); the resulting top-node update contains origin.place.town.venlo and destination.place.town.almere.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using the OVIS corpus for data-oriented semantic analysis", "sec_num": "3." }, { "text": "The probability calculations for the semantic DOP model are similar to the original DOP model. That is, the probability of a subtree t is equal to the number of occurrences of t in the corpus divided by the number of occurrences of all subtrees t' that can be substituted on the same node as t. The probability of a derivation D = t1 o ... o tn is the product of the probabilities of its subtrees ti. The probability of a parse tree T is the sum of the probabilities of all derivations D that produce T. And the probability of a meaning M and a word string W is the sum of the probabilities of all parse trees T of W whose top-node meaning is logically equivalent to M (see Bod et al. 1996).", "cite_spans": [ { "start": 680, "end": 696, "text": "Bod et al. 1996)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Using the OVIS corpus for data-oriented semantic analysis", "sec_num": "3." }, { "text": "As with the most probable parse, the most probable meaning M of a word string W cannot be computed in deterministic polynomial time. 
Although the most probable meaning can be estimated by iterative Monte Carlo sampling (see Bod 1995), the computation of a sufficiently large number of random derivations is currently not efficient enough for a practical application. To date, only the most likely derivation can be computed in near to real-time (by a best-first Viterbi optimization algorithm). We therefore assume that most of the probability mass for each top-node meaning is focused on a single derivation. Under this assumption, the most likely meaning of a string is the top-node meaning generated by the most likely derivation of that string (see also section 5).", "cite_spans": [ { "start": 227, "end": 236, "text": "Bod 1995)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Using the OVIS corpus for data-oriented semantic analysis", "sec_num": "3." }, { "text": "context-dependent subcorpora", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extending DOP to dialogue context:", "sec_num": "4." }, { "text": "We now extend the semantic DOP model to compute the most likely meaning of a sentence given the previous dialogue. In general, the probability of a top-node meaning M and a particular word string Wi given a dialogue context", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extending DOP to dialogue context:", "sec_num": "4." }, { "text": "Ci = Wi-1, Wi-2, ..., W1 is given by P(M, Wi | Wi-1, Wi-2, ..., W1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extending DOP to dialogue context:", "sec_num": "4." }, { "text": "Since the OVIS user utterances are typically answers to previous system questions, we assume that the meaning of a word string Wi does not depend on the full dialogue context but only on the previous (system) question Wi-1. Under this assumption,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extending DOP to dialogue context:", "sec_num": "4." }, { "text": "For DOP, this formula means that the update semantics of a user utterance Wi is computed on the basis of the subcorpus which contains all OVIS utterances (with their annotations) that are answers to the system question Wi-1. This gives rise to the following interesting model for dialogue processing: each system question triggers a context-dependent domain (a subcorpus) by which the user answer is analyzed and interpreted. Since the number of different system questions is a small closed set (see Veldhuijzen van Zanten 1996), we can create offline for each subcorpus the corresponding DOP parser. In OVIS, the following context-dependent subcorpora can be distinguished:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P(M, Wi | Ci) = P(M, Wi | Wi-1)", "sec_num": null }, { "text": "(1) place subcorpus: utterances following questions like From where to where do you want to travel?, What is your destination?, etc. Note that a subcorpus can contain utterances whose topic goes beyond the previous system question. 
For example, if the system asks From where to where do you want to travel?, and the user answers with: From Amsterdam to Groningen tomorrow morning, then the date-expression tomorrow morning ends up in the place-subcorpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P(M, Wi | Ci) = P(M, Wi | Wi-1)", "sec_num": null }, { "text": "It is interesting to note that this context-sensitive DOP model can easily be generalized to domain-dependent interpretation: a corpus is clustered into subcorpora, where each subcorpus corresponds to a topic-dependent domain. A new utterance is interpreted by the domain in which it gets highest probability. Since small subcorpora tend to assign higher probabilities to utterances than large subcorpora (because relative frequencies of subtrees in small corpora tend to be higher), it follows that a language user strives for the smallest, most specific domain in which the perceived utterance can be analyzed, thus establishing a most specific common ground.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P(M, Wi | Ci) = P(M, Wi | Wi-1)", "sec_num": null }, { "text": "So far, we have dealt with the estimation of the probability P(M, W | C) of a meaning M and a word string W given a dialogue context C. However, in spoken dialogue processing, the word string W is not given. The input for DOP in the OVIS system consists of word-graphs produced by the speech recognizer (these word-graphs are generated by our project partners from the University of Nijmegen).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interfacing DOP with speech", "sec_num": "5." }, { "text": "A word-graph is a compact representation for all sequences of words that the speech recognizer hypothesizes for an acoustic utterance A (see e.g. figure 10). The nodes of the graph represent points in time, and a transition between two nodes i and j represents a word w that may have been uttered between the corresponding points in time. For convenience we refer to transitions in the word-graph using the notation <i, j, w>. The word-graphs are optimized to eliminate epsilon transitions. Such transitions represent periods of time when the speech recognizer hypothesizes that no words are uttered. Each transition is associated with an acoustic score. This is the negative logarithm (of base 10) of the acoustic probability P(a | w) for a hypothesized word w normalized by the length of w. Reconverting these acoustic scores into their corresponding probabilities, the acoustic probability P(A | W) for a hypothesized word string W can be computed as the product of the probabilities associated with each transition in the corresponding word-graph path. Figure (10) shows an example of a simplified word-graph for the uttered sentence Ik wil graag vanmorgen naar Leiden (\"I'd like to go to Leiden this morning\"). The probabilistic interface between DOP and speech word-graphs thus consists of the interface between the DOP probabilities P(M, W | C) and the word-graph probabilities P(A | W) so as to compute the probability P(M, A | C) and argmaxM P(M, A | C). We start by rewriting P(M, A | C) as:", "cite_spans": [], "ref_spans": [ { "start": 146, "end": 155, "text": "figure 10", "ref_id": null }, { "start": 1055, "end": 1066, "text": "Figure (10)", "ref_id": null } ], "eq_spans": [], "section": "Interfacing DOP with speech", "sec_num": "5."
}, { "text": "P(M,A IC) = )-\"wP(M,W,A IC) = Y'w P(M, W I C) \u2022 P(A I M, W, C)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interfacing DOP with speech", "sec_num": "5." }, { "text": "The probability P(M, W IC) is computed by the dialogue-sensitive DOP model as explained in the previous section. To estimate the probability P(A I M, W, C) on the basis of the information available in the word-graphs, we must make the following independence assumption: the acoustic utterance A depends only on the word string W, and 142 not on its context C and meaning M (cf. . Under this assumption:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interfacing DOP with speech", "sec_num": "5." }, { "text": "P(m,a fC) = ~'wP(M,W[C)' P(a IW)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interfacing DOP with speech", "sec_num": "5." }, { "text": "To make fast computation feasible, we furthermore assume that most of the probability mass for each meaning and acoustic utterance is focused on a single word string W (this will allow for efficient Viterbi best first search):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interfacing DOP with speech", "sec_num": "5." }, { "text": "P(M,A IC) = P(M, WrC). P(A IW)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interfacing DOP with speech", "sec_num": "5." }, { "text": "Thus, the probability of a meaning M for an acoustic utterance A given a context C is computed by the product of the DOP probability P(M, W I C) and the word-graph probability P(A I W).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interfacing DOP with speech", "sec_num": "5." }, { "text": "As to the parsing of word-graphs, it is wellknown that parsing algorithms for word strings can easily be generalized to word-graphs (e.g. van Noord 1995). For word strings, the initialization of the chart usually consists of entering each word w i into chart entry . For word-graphs, a transition corresponds to a word w between positions i and j where j is not necessarily equal to i+1 as is the case for word strings (see figure I0) . It is thus easy to see that for word-graphs the initialization of the chart consists of entering each word w from transition into chart entry . Next, parsing proceeds with the subtrees that are triggered by the dialogue context C (provided that all subtrees are converted into equivalent rewrite rules --see Bod 1992 , Sima'an 1995 . The most likely derivation is computed by a bottom-up best-first CKY parser adapted to DOP (Sima'an 1995 (Sima'an , 1997 . This parser has a time complexity which is cubic in the number of word-graph nodes and linear in the grammar size. The top-node meaning of the tree resulting from the most likely derivation is taken as the best meaning M for an utterance A given context C.", "cite_spans": [ { "start": 778, "end": 786, "text": "Bod 1992", "ref_id": "BIBREF2" }, { "start": 787, "end": 801, "text": ", Sima'an 1995", "ref_id": "BIBREF20" }, { "start": 895, "end": 908, "text": "(Sima'an 1995", "ref_id": "BIBREF20" }, { "start": 909, "end": 924, "text": "(Sima'an , 1997", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 441, "end": 451, "text": "figure I0)", "ref_id": null } ], "eq_spans": [], "section": "Interfacing DOP with speech", "sec_num": "5." }, { "text": "In our experimental evaluation of DOP we were interested in the following questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." 
}, { "text": "(1) Is DOP fast enough for practical spoken dialogue understanding?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "(2) Can we constrain the OVIS subtrees without loosing accuracy?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "(3) What is the impact of dialogue context on the accuracy?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "For all experiments, we used a random split of the 10,000 OVIS trees into a 90% training set and a 10% test set. The training set was divided up into the four subcorpora described in section 4, which served to create the corresponding DOP parsers. The 1000 wordgraphs for the test set utterances were used as input. For each word-graph, the previous system question was known to determine the particular DOP parser, while the user utterances were kept apart. As to the complexity of the word-graphs: the average number of transitions per word is 4.2, and the average number of words per word-graph path is 4.6. All experiments were run on an SGI Indigo with a MIPS R10000 processor and 640 Mbyte of core memory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "To establish the semantic accuracy of the sy,;tem, the best meanings produced by the DOP parser were compared with the meanings in the test set. Besides an exact match metric, we also used a more fine-grained evaluation for the semantic accuracy. Following the proposals in Boros et al. (1!)96) and van Noord et al. 1997 Both the updates m the OVIS test set and the updates produced by the DOP parser were translated into semantic units of tile form given above. The semantic accuracy was then evaluated in three different ways:", "cite_spans": [ { "start": 274, "end": 286, "text": "Boros et al.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "(1) match, the percentage of updates which were exactly correct (i.e. which exactly matched the updates m the test set); (2) precision, the number of correct semantic units divided by the number of semantic units which were produced; (3) recall, the number of correct semantic units divided by the number of semantic units in tile test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "As to question (1), we already suspect that it is not efficient to use all OVIS subtrees. We therefore performed experiments with versions of DOP where the subtree collection is restricted to subtrees with a certain maximum depth. The following table shows for four dilTerent maximum depths (where the maximum number of frontier words is limited to 3), the number of subtrec types in the training set, the semantic accuracy in terms of match, precision and recall (as percentages), and the average CPU time per wordgraph m seconds. The experiments show that at subtree-depth 4 the highest accuracy is achieved, but that only for subtree-depths 1 and 2 are the processing times fast enough for practical applications. Thus there is a trade-off between efficiency and accuracy: the efficiency deteriorates if the accuracy improves. We believe that a match of 78.5% and a corresponding precision and recall of resp. 83.0% and 84.3% (for the fast processing times at depth 2) is promising enough for further research. 
Moreover, by testing DOP directly on the word strings (without the word-graphs), a match of 97.8% was achieved. This shows that linguistic ambiguities do not play a significant role in this domain. The actual problem lies in the ambiguities in the word-graphs (i.e. the multiple paths).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "Secondly, we are concerned with the question as to whether we can impose constraints on the subtrees other than their depth, in such a way that the accuracy does not deteriorate and perhaps even improves. To answer this question, we kept the maximal subtree-depth constant at 3, and employed the following constraints:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "\u2022 Eliminating once-occurring subtrees: this led to a considerable decrease for all metrics; e.g. match decreased from 79.8% to 75.5%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "\u2022 Restricting subtree lexicalization: restricting the maximum number of words in the subtree frontiers to resp. 3, 2 and 1 showed a consistent decrease in semantic accuracy similar to the restriction of the subtree depth in table 1. The match dropped from 79.8% to 76.9% if each subtree was lexicalized with only one word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "\u2022 Eliminating subtrees with only non-head words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "this led also to a decrease in accuracy; the most stringent metric decreased from 79.8% to 77.1%. Evidently, there can be important relations in OVIS that involve non-head words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "Finally, we are interested in the impact of dialogue context on semantic accuracy. To test this, we neglected the previous system questions and created one DOP parser for the whole training set. The semantic accuracy metric match dropped from 79.8%", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "to 77.4% (for depth 3). Moreover, the CPU time per sentence deteriorated by a factor of 4 (which is mainly due to the fact that larger training sets yield slower DOP parsers).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "The following result nicely illustrates how the dialogue context can contribute to better predictions for the correct meaning of an utterance. In parsing the word-graph corresponding to the acoustic utterance Donderdag acht februari (\"Thursday eight February\"), the DOP model without dialogue context assigned highest probability to a derivation yielding the word string Dordrecht acht februari and its meaning. The uttered word Donderdag was thus interpreted as the town Dordrecht, which was indeed among the other hypothesized words in the word-graph. If the DOP model took into account the dialogue context, the previous system question When do you want to leave?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." 
}, { "text": "was known and thus triggered the subtrees from the date-subcorpus only, which now correctly assigned the highest probability to Donderdag acht februari and its meaning, rather than to Dordrecht acht februari.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "We showed how the DOP model can be used for efficient and robust processing of spoken input in the OVIS spoken dialogue system. The system we described uses syntactically and semantically analyzed subtrees from the OVIS corpus to compute from an input word-graph the best utterance together with its meaning. We showed how dialogue context is integrated by dividing up the OVIS corpus into context-dependent subcorpora. Each system question triggers a suhcorpus by which the user utterance is analyzed and interpreted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7." }, { "text": "Efficiency was achieved by computing the most probable derivation rather than the most probable parse, and by restricting the depth and lexicalization of the OVIS subtrees. Robustness was achieved by the shallow syntactic/semantic annotations, including the use of the productive ERROR label for repairs and false starts. The experimental evaluation showed that DOP's blending of lexical relations with syntacticsemantic structure yields promising results. The experiments also indicated that elimination of subtrees diminishes the semantic accuracy, even when intuitively unimportant subtrees with only nonhead words are discarded. Neglecting dialogue context also diminished the accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7." }, { "text": "As future research, we want to investigate further optimization techniques for DOP, including finite-state approximations. We want to enrich the OVIS utterances with discourse annotations, such as co-reference links, in order to cope with anaphora resolution. We will also extend the annotations with feature structures and/or functional structures associated with the surface structures so as to deal with more complex linguistic phenomena (see Bod & Kaplan 1998) .", "cite_spans": [ { "start": 446, "end": 464, "text": "Bod & Kaplan 1998)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7." } ], "back_matter": [ { "text": "We are grateful to Khalil Sima'an for using his DOP parser, and to Remko Bonnema for using SEMTAGS and the relevant semantic interfaces. We also thank Remko Bonnema, Ronald Kaplan, Remko Scha and Khalil Sima'an for helpful discussions and comments. The OVIS corpus was annotated by Mike de Kreek and Sascha SchLitz. This research was supported by NWO, the Netherlands Organization for Scientific Research (Priority Programme Language and Speech Technology).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Philips automatic train timetable information system", "authors": [ { "first": "H", "middle": [], "last": "Aust", "suffix": "" }, { "first": "M", "middle": [], "last": "Oerder", "suffix": "" }, { "first": "F", "middle": [], "last": "Seide", "suffix": "" }, { "first": "V", "middle": [], "last": "Steinbiss", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "24--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Aust, M. Oerder, F. Seide and V. 
Steinbiss. 1995. \"The Philips automatic train timetable information system\", Speech Communication, 17, pp. 249-262.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Corpus-Based Approach to Semantic Interpretation", "authors": [ { "first": "M", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "R", "middle": [], "last": "Berg", "suffix": "" }, { "first": "R", "middle": [], "last": "Bod", "suffix": "" }, { "first": "", "middle": [], "last": "Scha", "suffix": "" } ], "year": 1994, "venue": "Proceedings Ninth Amsterdam Colloquium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. van den Berg, R. Bod and R. Scha, 1994. \"A Corpus-Based Approach to Semantic Interpretation\", Proceedings Ninth Amsterdam Colloquium, Amsterdam, The Netherlands.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Computational Model of Language Performance: Data Oriented Parsing", "authors": [ { "first": "R", "middle": [], "last": "Bod", "suffix": "" } ], "year": 1992, "venue": "Proceedings COLING-92", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bod, 1992. \"A Computational Model of Language Performance: Data Oriented Parsing\", Proceedings COLING-92, Nantes, France.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Enriching Linguistics with Statistics: Performance Models of Natural Language", "authors": [ { "first": "R", "middle": [], "last": "Bod", "suffix": "" } ], "year": 1995, "venue": "ILLC Dissertation Series 1995-14", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bod, 1995. Enriching Linguistics with Statistics: Performance Models of Natural Language, ILLC Dissertation Series 1995-14, University of Amsterdam.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Prediction and Disambiguation by means of Data-Oriented Parsing", "authors": [ { "first": "R", "middle": [], "last": "Bod", "suffix": "" }, { "first": "R", "middle": [], "last": "Scha", "suffix": "" } ], "year": 1994, "venue": "Proceedings Twente Workshop on Language Technology (TWLT8)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bod and R. Scha, 1994. \"Prediction and Disambiguation by means of Data-Oriented Parsing\", Proceedings Twente Workshop on Language Technology (TWLT8), Twente, The Netherlands.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Data-Oriented Approach to Semantic Interpretation", "authors": [ { "first": "R", "middle": [], "last": "Bod", "suffix": "" }, { "first": "R", "middle": [], "last": "Bonnema", "suffix": "" }, { "first": "R", "middle": [], "last": "Scha", "suffix": "" } ], "year": 1996, "venue": "Proceedings Workshop on Corpus-Oriented Semantic Analysis, ECAI-96", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bod, R. Bonnema and R. Scha, 1996. \"A Data-Oriented Approach to Semantic Interpretation\", Proceedings Workshop on Corpus-Oriented Semantic Analysis, ECAI-96, Budapest, Hungary.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Probabilistic Corpus-Driven Model for Lexical-Functional Analysis", "authors": [ { "first": "R", "middle": [], "last": "Bod", "suffix": "" }, { "first": "R", "middle": [], "last": "Kaplan", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bod and R. Kaplan, 1998. 
\"A Probabilistic Corpus-Driven Model for Lexical-Functional Analysis\", this proceedings.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Data-Oriented Semantics", "authors": [ { "first": "R", "middle": [], "last": "Bonnema", "suffix": "" } ], "year": 1996, "venue": "Master's", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bonnema, 1996. Data-Oriented Semantics, Master's", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A DOP Model for Semantic Interpretation", "authors": [ { "first": "R", "middle": [], "last": "Bonnema", "suffix": "" }, { "first": "R", "middle": [], "last": "Bod", "suffix": "" }, { "first": "R", "middle": [], "last": "Scha", "suffix": "" } ], "year": 1997, "venue": "Proceedings ACL/EACL-97", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bonnema, R. Bod and R. Scha, 1997. \"A DOP Model for Semantic Interpretation\", Proceedings ACL/EACL-97, Madrid, Spain.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Towards understanding spontaneous speech: word accuracy vs. concept accuracy", "authors": [ { "first": "M", "middle": [], "last": "Boros", "suffix": "" } ], "year": 1996, "venue": "Proceedings 1CSLP'96", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Boros et al. 1996. \"Towards understanding spontaneous speech: word accuracy vs. concept accuracy.\" Proceedings 1CSLP'96, Philadelphia (PA).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Tree-bank Grammars", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1996, "venue": "Proceedings AAAI-96", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak, 1996. \"Tree-bank Grammars\", Proceedings AAAI-96, Menlo Park (Ca).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Efficient Algorithms for Parsing the DOP Model", "authors": [ { "first": "J", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1996, "venue": "Ptvceedings Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Goodman, 1996. \"Efficient Algorithms for Parsing the DOP Model\", Ptvceedings Empirical Methods in Natural Language Processing, Philadelphia (PA).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "199& Parsing bzside-Out", "authors": [ { "first": "J", "middle": [], "last": "Goodman", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Goodman, 199& Parsing bzside-Out, Ph.D. thesis, Harvard University, Massachusetts.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A fully statistical approach to natural language interfaces", "authors": [ { "first": "S", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1996, "venue": "Proceedings A CL'96", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Miller et al. 1996. 
\"A fully statistical approach to natural language interfaces\", Proceedings A CL'96, Santa Cruz (Ca.).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The intersection of finite state automata and definite clause grammars", "authors": [ { "first": "G", "middle": [], "last": "Van Noord", "suffix": "" } ], "year": 1995, "venue": "Proceedings ACL'95", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. van Noord, 1995. \"The intersection of finite state automata and definite clause grammars\", Proceedings ACL'95, Boston, Massachusetts.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Robust Grammatical Analysis for Spoken Dialogue Systems", "authors": [ { "first": "G", "middle": [], "last": "Van Noord", "suffix": "" }, { "first": "G", "middle": [], "last": "Bouma", "suffix": "" }, { "first": "R", "middle": [], "last": "Koeling", "suffix": "" }, { "first": "M", "middle": [], "last": "Nederhof", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. van Noord, G. Bouma, R. Koeling and M. Nederhof, 1997. Robust Grammatical Analysis for Spoken Dialogue Systems, unpublished manuscript.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Inside-Outside Reestimation from Partially Bracketed Corpora", "authors": [ { "first": "F", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1992, "venue": "Proceedings ACL'92", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pereira and Y. Schabes, 1992. \"Inside-Outside Reestima- tion from Partially Bracketed Corpora\", Proceedings ACL'92, Newark, Delaware.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Approche Probabiliste de l'Analyse Syntaxique", "authors": [ { "first": "M", "middle": [], "last": "Rajman", "suffix": "" } ], "year": 1995, "venue": "Traitement Automatique des Langues", "volume": "36", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Rajman 1995. \"Approche Probabiliste de l'Analyse Syntaxique\", Traitement Automatique des Langues, 36(1-2).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Virtuele Grammatica's en Creatieve Algoritmen", "authors": [ { "first": "R", "middle": [], "last": "Scha", "suffix": "" } ], "year": 1992, "venue": "GrammaflTT", "volume": "1", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Scha 1992. \"Virtuele Grammatica's en Creatieve Algorit- men\", GrammaflTT 1 ( 1 ).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Computational Complexity of Probabilistic Disambiguation by means of Tree Grammars", "authors": [ { "first": "K", "middle": [], "last": "Sima'an", "suffix": "" } ], "year": 1995, "venue": "Linguistic Theory. John Benjamins, Amsterdam. K. Sima'an", "volume": "136", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Sima'an, 1995. \"An optimized algorithm for Data Oriented Parsing\", In: R. Mitkov and N. Nicolov (eds.), Recent Advances in Natural Language Processing 1995, volume 136 of Current Issues in Linguistic Theory. John Benjamins, Amsterdam. K. Sima'an, 1996. 
\"Computational Complexity of Probabilistic Disambiguation by means of Tree Grammars\", Proceedings COLING-96, Copenhagen, Denmark.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Explanation-Based Learning of Data-Oriented Parsing", "authors": [ { "first": "K", "middle": [], "last": "Sima'an", "suffix": "" } ], "year": 1997, "venue": "CoNLL97: Computational Natural Language Learning, ACL'97", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Sima'an, 1997. \"Explanation-Based Learning of Data- Oriented Parsing\", in T. Ellison (ed.) CoNLL97: Computational Natural Language Learning, ACL'97, Madrid, Spain.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The Informational Component", "authors": [ { "first": "E", "middle": [], "last": "Vallduvi", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Vallduvi, 1990. The Informational Component. Ph.D. thesis, University of Pennsylvania, PA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Sen, antics of update expressions", "authors": [ { "first": "G", "middle": [], "last": "Veldhuijzen Van Zanten", "suffix": "" } ], "year": 1996, "venue": "NWO Priority Programme Language and Speech Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Veldhuijzen van Zanten, 1996. Sen, antics of update expressions. Technical Report 24. NWO Priority Programme Language and Speech Technology, The Hague.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "user.wants. ( ([# today] ; [! tomorrow]) ; destination, place, town. almere)", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "date subcorpus: utterances following questions like When do you want to travel?, When do you want to leave from X?, When do you want to arrive in Y?, etc. (3) time subcorpus: utterances following questions like At what time do you want to travel? At what time do you want to leave from X?, At what time do you want to arrive in Y?, etc. (4) yes/no subcorpus: utterances following y/n_ questions like Did you say that ... ? Thus you want to arrive at ... ?", "uris": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "html": null, "num": null, "text": "To combine subtrees, a node-substitution operation indicated as o is used. Node-substitution identifies the leftmost nonterminal frontier node of one tree with the root node of a second tree (i.e., the second tree is substituted on the leftmost nonterminal frontier node of the first tree). A new input sentence such as Mar3' likes Susan can thus be parsed by combining subtrees", "content": "
from this corpus, as in (2):

(2) [tree diagrams recovered from figure residue] [S [NP] [VP [V likes] [NP]]] o [NP Mary] o [NP Susan] = [S [NP Mary] [VP [V likes] [NP Susan]]]

Other derivations may yield the same parse tree; for instance:

(3) [S [NP] [VP [V] [NP Susan]]] o [NP Mary] o [V likes] = [S [NP Mary] [VP [V likes] [NP Susan]]]

(1) [the two corpus trees] [S [NP John] [VP [V likes] [NP Mary]]] and [S [NP Peter] [VP [V hates] [NP Susan]]]
" }, "TABREF2": { "type_str": "table", "html": null, "num": null, "text": ", in that our annotation convention exploits the Principle of Compositionality of Meaning. 3 It is therefore not clear yet whether onr current treatment ought to be viewed as completely general, or whether a more sophisticated treatment in the vein of van denBerg et al. (1994) should be worked out. 140 naar van Venlo naar Voorburg, the meaning of the false start Van Voorburg naar is thus absent: (7) (origin.place.town.venlo ; destination.place, town. voorburg)", "content": "
(5) [annotated tree recovered from figure residue] The corpus tree for Ik wil niet vandaag maar morgen naar Almere: the S node carries the schema d1.d2; PER ik carries user; V wil carries wants; the MP over niet vandaag maar morgen carries (d1;d2), niet vandaag contributing [#today] and maar morgen contributing [!tomorrow]; the PNP naar almere carries d1.d2, with P naar carrying destination.place and NP almere carrying town.almere.

(6) [annotated tree recovered from figure residue] The corpus tree for Van Voorburg naar van Venlo naar Voorburg: an ERROR subtree spans the false start Van Voorburg naar; the remaining MP carries the schema (d1;d2) over MPs carrying origin.place.town.venlo and destination.place.town.voorburg. The residue also shows fragments of the derivation for Ik wil ... (PER ik: user; V wil: wants; schemata d1.d2 and (d1;d2)).
" } } } }