Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "H91-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:32:50.432386Z"
},
"title": "",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Pieraccini",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Bell Laboratories",
"institution": "",
"location": {
"addrLine": "600 Mountain Avenue Murray Hill",
"postCode": "N J, 07974"
}
},
"email": ""
},
{
"first": "Esther",
"middle": [],
"last": "Levin",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Bell Laboratories",
"institution": "",
"location": {
"addrLine": "600 Mountain Avenue Murray Hill",
"postCode": "N J, 07974"
}
},
"email": ""
},
{
"first": "Chin-Hui",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Bell Laboratories",
"institution": "",
"location": {
"addrLine": "600 Mountain Avenue Murray Hill",
"postCode": "N J, 07974"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a model for a statistical representation of the conceptual structure in a restricted subset of spoken natural language. The model is used for segmenting a sentence into phrases and labeling them with concept relations (or cases). The model is trained using a corpus of annotated transcribed sentences. The performance of the model was assessed on two tasks, including DARPA ATIS class A sentences.",
"pdf_parse": {
"paper_id": "H91-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a model for a statistical representation of the conceptual structure in a restricted subset of spoken natural language. The model is used for segmenting a sentence into phrases and labeling them with concept relations (or cases). The model is trained using a corpus of annotated transcribed sentences. The performance of the model was assessed on two tasks, including DARPA ATIS class A sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The goal of a speech understanding system is generally that of translating a sequence of acoustic measurements of the speech signal into some form that represents the meaning conveyed by the sentence. One of the knowledge representation paradigms, known as semantic networks [2] establishes relations between conceptual entities using a graph structure. These concept relations, or linguistic cases, can be used to label different parts of a sentence in order to obtain its interpretation. The task itself defines the set of relevant cases. For instance, for the task of assigning the origin, the destination and the departure time of a flight, a convenient representation is in terms of the following set of cases: Note that although the first phrase (I would like to fly) conveys important information, it is considered irrelevant to this particular task, and therefore assigned to the DUMMY case. The segmentation of a sentence into cases (conceptual segmentation) can be described by labeling each word in the sentence with the index of the case it expresses. In the example above, the con-ceptuM segmentation is represented by the following sequence of labels:",
"cite_spans": [
{
"start": 275,
"end": 278,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "C = {01,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C = (cI,c2, c3...c12)",
"eq_num": "(1)"
}
],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "el = c2 ..... e5 = 04",
"eq_num": "(2)"
}
],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "e 6 = c? = C1 c 8 ~ c 9 z C 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "In this paper we tackle the problem of decoding the words constituting the spoken sentence and the corresponding sequence of case labels, from the speech signal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "Let us denote by A = a,,a2...aN,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP DECODING OF CASES",
"sec_num": null
},
{
"text": "the sequence of acoustic observations extracted from a spoken sentence, by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP DECODING OF CASES",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W = ~,,w2...WM,",
"eq_num": "(4)"
}
],
"section": "MAP DECODING OF CASES",
"sec_num": null
},
{
"text": "the sequence of words constituting the sentence, and by C = cl,c2...CM,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP DECODING OF CASES",
"sec_num": null
},
{
"text": "the sequence of case labels, where ci takes its values from a predefined set of conceptual relations C = {C1,C2,...CK}. The problem of finding W and C given A can be approached using the maximum a posteriori decoding (MAP). Following this criterion we want to find the sequence of words ~V and the sequence of cases C that maximizes the conditional probability P(~V, CJA) = max P(W,C]A).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAP DECODING OF CASES",
"sec_num": null
},
{
"text": "This conditional probability can be written using the Bayes inversion formula as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6) WxC",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(W,C[A) = P(AIW,C)P(W[C)P(C) P(A)",
"eq_num": "(7)"
}
],
"section": "(6) WxC",
"sec_num": null
},
{
"text": "In this formula P(C) represents the a-priori probability of the sequence of cases, P(W I C) is the probability of a sentence expressing a given sequence of cases, and P(A I W,C) is the acoustic model. We can reasonably assume that the acoustic representation of a word is independent of the conceptual relation it belongs to, hence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6) WxC",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(AlW,C) = P(AIW),",
"eq_num": "(8)"
}
],
"section": "(6) WxC",
"sec_num": null
},
{
"text": "and this is the criterion that is usually maximized in stochastic based speech recognizers, for instance those using hidden Markov modeling [1] for the acoustic/phonetic decoding. In this paper we deal with the remaining terms",
"cite_spans": [
{
"start": 140,
"end": 143,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(6) WxC",
"sec_num": null
},
{
"text": "P(W [ C)P(C)= (9) M H P(w~lwi-,...wl,C)P(Wl l C) i=2 M II P(c~ L c~_1 ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6) WxC",
"sec_num": null
},
{
"text": "We proceed by assuming that: These are Markov processes of order n and m respectively, and if n and m are large we don't lose any generality by making this assumption. For practical purposes n and m should be small enough to allow a reliable estimation of the probabilities from a finite set of data. An additional assumption in equation 10is that a given word in the sentence, used for expressing a certain case, is independent of the case of the preceding words. Assuming that the sequence of words could be directly observed (for instance providing a transcription of the uttered sentence), and the sequence of cases is unknown, equations (10) and (11) describe a a hidden Markov process, where the states of the underlying model correspond to the cases, the observation probabilities of each state are represented by equation 10in the form of state local (n + 1)gram language models, and the transitions between the states are described by equation (11).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "cl)e(c~) i=2",
"sec_num": null
},
{
"text": "P(wi I wi-t ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "cl)e(c~) i=2",
"sec_num": null
},
{
"text": "A first evaluation of the model was performed based on a set of 825 sentences artificially generated by a finite state grammar [3] using a vocabulary of 41 different words. The sentences express different ways of making requests to travel between two cities. A typical example is:",
"cite_spans": [
{
"start": 127,
"end": 130,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "THE FROM-TO TASK",
"sec_num": null
},
{
"text": "The task consisted of identifying the origin and destination cities of the flight. The relevant cases of this task are then flight origin and flight destination. However the model has three states, ORIGIN, DESTINATION and DUMMY. 50 sentences, randomly selected out of the 825, were used to estimate the parameters of the model, i.e. the transition probabilities (equation 11) and the state local language models (equation 10), with n = 1 and m = 1 (i.e. the underlying Markov process was a 1 st order process and the state local language models were bi-grams). The training sentences were hand-labeled with the appropriate cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I want to travel into Boston and I am interested in flights between Boston and Washington",
"sec_num": null
},
{
"text": "The remaining 775 sentences were decoded using Viterbi decoding algorithm. The performance was assessed by counting the number of sentences that were segmented assigning the correct words (i.e. the correct city names) to the DESTINATION and ORIGIN states. We observed that 7% of the sentences (55 out of 775) had a wrong origin/destination assignment. In some of the wrong segmentations one of the relevant states was missing, the other state containing both the real destination and origin cities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I want to travel into Boston and I am interested in flights between Boston and Washington",
"sec_num": null
},
{
"text": "In other examples, similar to the sentences shown above, both the destination and the origin states were assigned to the same city name, that appeared twice in the sentence. To improve the performance we imposed some additional constraints in the decoding procedure. For a given sentence the decoded state sequence was searched among those sequences of states where both the origin and destination states were visited only once (i.e. when one of those states was left, the current partial path was not allowed to enter that state again). In addition, the phrases assigned to the origin and destination states had to include different city names. These constraints, representing a higher level a priori knowledge of the task, were imposed in the Viterbi decoding by keeping track of the past sequence of states for each partial candidate solution, and duplicating the partial solutions when two (or more) candidates merged at the same state and showed conflicting constraints. This approach resulted in a substantial improvement of the performance. Only one error was observed out of the 775 test sentences ( 0.13% error rate). The same level of performance was obtained in experiments using a 1-gram language model inside each state, but increasing the number of states to five:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I want to travel into Boston and I am interested in flights between Boston and Washington",
"sec_num": null
},
{
"text": "ORIGIN, DESTINATION, DUMMY, FROM, TO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I want to travel into Boston and I am interested in flights between Boston and Washington",
"sec_num": null
},
{
"text": "The last two states accounted for the expressions that usually precede the origin and destination city names respectively. For example the FROM state was associated to expressions of the kind: from, depart out of, leaving, etc., and the TO state was associated to expressions like: to, going to, arriving into, etc. This experiment indicates that there is a tradeoff between the number of states and the complexity (order) of the state language models. Expanding the set of states to reflect the linguistic structure of the sentences may result in a reduction of the number of parameters to be estimated during training, giving a more robust model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I want to travel into Boston and I am interested in flights between Boston and Washington",
"sec_num": null
},
{
"text": "The technique of case decoding is being applied to the class A sentences of the DARPA ATIS task. A sentence of this task can be analyzed in terms of 7 general cases, that are QUERY, generally associated to the phrases expressing the kind of request, OBJECT expressing the object of the query, ATTRIBUTE that describes some attributes of the object, RESTRICTION We defined 44 different cases for describing the whole set of 547 class A training sentences. The complete list of cases is shown in Table l . The training sentences (covered by a vocabulary of 501 words) were hand-labeled according to this set of states and the transition probabilities and the state local bigram models were estimated using the maximum likelihood criterion. Table 2 shows examples of the phrases used for estimating the bigram language models for some of the defined states. Considering the large number of parameters to be estimated ( i.e. the transition probabilities between the 44 states of the model and the 44 bigram models extended to the entire vocabulary of 501 words), and considering the small number of training sentences, this estimation poses robustness problems. One way to alleviate these problems consists of grouping the words in the vocabulary into equivalence classes. For example all the' city names can be grouped in the same class, as well as the airport names, the numbers, the airline names, etc.",
"cite_spans": [
{
"start": 349,
"end": 360,
"text": "RESTRICTION",
"ref_id": null
}
],
"ref_spans": [
{
"start": 494,
"end": 501,
"text": "Table l",
"ref_id": null
},
{
"start": 738,
"end": 745,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "THE ATIS TASK",
"sec_num": null
},
{
"text": "The testing of the system was performed on the transcribed Jun-90 and Feb-91 class A test sentences. New words were allocated to a new-word category that was assigned a small probability within each state. Table 3 reports the number of sentences, for each test set, that were correctly labeled by the case decoder, along with the statistics on the correctly assigned cases. Table 4 shows examples of correct segmentations from the FEB-91 test set. It is interesting to notice the allocation of the connective and to different cases in sentences 1),3), and 4)..Although sentences 1) and 3) contain similar expressions (between ... and ...), the system recognizes that in the first case the phrase refers to a period of time, while in the second case it refers to origin and destination cases. Moreover, sentence 3) shows that the concept relations origin and destination are not necessarily referred to the origin and destination of a flight, but can be referred to other events, like ground transportation in this case. This sensitivity to the context (to the value of the 0 BJ ECT in the example above) shown by certain cases must be taken into account by the module that will interpret the conceptual segmentation and generate the SQL query. In sentence 4) the word and is clearly interpreted as connecting two distinct restrictions on the query. The same phenomenon is shown in sentence 5) where the word or connects two alternative possible origins of the flight. Table 5 shows examples of incorrect segmentations from the FEB-91 test set. In sentence 1) the phrase used for Eastern should be assigned to the airline case. The error is due to the fact that the word Eastern was not observed in the training set. In sentence 2) the phrase through Dallas Fort Worth should have been labeled with the connect case, but this case has very few examples in the training set from Baltimore destin:",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 374,
"end": 381,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 1468,
"end": 1475,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "THE ATIS TASK",
"sec_num": null
},
{
"text": "to Pittsburgh with a consequent poor estimation of the parameters related to it. The same problem, i.e. inadequate training, is also the cause of the wrong segmentation of sentence 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE ATIS TASK",
"sec_num": null
},
{
"text": "The goal of the understanding system is to retrieve the information in the ATIS database. In order to do this we are developing a module that translates the conceptual representation of the sentence obtained with the described method into an SQL query. Since the ambiguity of the sentence is resolved by the conceptual segmentation, this module implements a deterministic mapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": null
},
{
"text": "We proposed a very simple semantic grammar for the ATIS task. The grammar was designed to be rich enough to handle most queries, but limited in certain ways so as to facilitate parsing by very simple and well-understood HMM methods. The advantages of this approach are its straightforward integration with an HMM based speech recognizer, and its capability of learning from examples. Even with an extremely small training set, the system was able to assign the correct analysis to more than 80% of the class A sentences in both the JUN-90 and FEB-91 test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors gratefully acknowledge the helpful advice and consultation of Ken Church, Alexandra Gertner, A1 Gorin, Fernando Pereira, and Evelyne Tzoukerman.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Continuous Speech Recognition by Statistical Methods",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1976,
"venue": "Proceedings oflEEE",
"volume": "64",
"issue": "4",
"pages": "532--556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, F., \"Continuous Speech Recognition by Statistical Meth- ods\"Proceedings oflEEE, vol. 64, no. 4, pp. 532-556, 1976",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Conceptual Structures: Information Processing in Mind and Machine",
"authors": [
{
"first": "J",
"middle": [],
"last": "Sowa",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sowa, J., F. Conceptual Structures: Information Processing in Mind and Machine, Addison-Wesley, Reading, MA, 1984.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adaptive Language Acquisition from a Subset of the Airline Reservation Task",
"authors": [
{
"first": "A",
"middle": [
"N"
],
"last": "Gertner",
"suffix": ""
},
{
"first": "A",
"middle": [
"L"
],
"last": "Gorin",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Roe",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gertner, A. N., Gorin, A. L., Roe, D. B., \" Adaptive Language Acquisition from a Subset of the Airline Reservation Task,\" paper in preparation.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "C2, C3, C4}, where Cl = ORIGIN, C2 = DESTINATION, 03 = DEPARTURE_TIME, and C4 = DUMMY. The introduction of a DUMMY case is useful for covering all the parts of the sentence that are not relevant to the task. A sentence like ! would like to fly from Boston to Chicago next Saturday night can be analyzed as: \u2022 DUMMY: I would like to fly \u2022 ORIGIN: from Boston \u2022 DESTINATION: to Chicago \u2022 DEPARTURE_TIME: next Saturday night.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "wl, C) = (10) P(wi I ~',-~... w~_., ci), and P(cilci_t...ct) = (11) P(ci l ei-1...ci-m).",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>\u2022 Q_ATTR: could I get</td></tr><tr><td>\u2022 RESTRICTION: from San Francisco to Dallas on the 25th</td></tr><tr><td>of April</td></tr><tr><td>We can further analyze some of the cases into more detailed con-</td></tr><tr><td>ceptual relations, giving the following representation:</td></tr><tr><td>s ATTRIBUTE</td></tr><tr><td>o a_fare: economy</td></tr><tr><td>\u2022 RESTRICTION</td></tr><tr><td>o origin: from San Francisco</td></tr><tr><td>o destination: to Dallas</td></tr><tr><td>O date : on the 25th of April</td></tr><tr><td>What type of economy fare could I get from San Francisco to</td></tr><tr><td>Dallas on the 25th of April</td></tr><tr><td>is segmented as:</td></tr><tr><td>\u2022 QUERY: What type of</td></tr><tr><td>\u2022 ATTRIBUTE: economy</td></tr><tr><td>\u2022 OBJECT: fare</td></tr></table>",
"text": "describing the restrictions on the values of the answer, Q_ATTR describing possible attributes of the query, AND including connectives like and, or, also, indicating that the sentence may have more that one query. Of course we include a DUMMY state like in the above mentioned examples. For example, a sentence like:",
"num": null,
"type_str": "table"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>QUERY</td><td>RESTRICTION</td><td>date</td></tr><tr><td>OBJECT</td><td/><td>origin</td></tr><tr><td>ATTRIBUTE</td><td>attribute</td><td>destin</td></tr><tr><td/><td>a_date</td><td>time</td></tr><tr><td/><td>a_origin</td><td>airline</td></tr><tr><td/><td>a_destin</td><td>flcode</td></tr><tr><td/><td>a_tirne</td><td>meal</td></tr><tr><td/><td>a_airline</td><td>ground</td></tr><tr><td/><td>a_flcode</td><td>aircraft</td></tr><tr><td/><td>a_aircraft</td><td>class</td></tr><tr><td/><td>a_class</td><td>fare</td></tr><tr><td/><td>a_fare</td><td>stop</td></tr><tr><td/><td>a_stop</td><td>atplace</td></tr><tr><td/><td>a_atplace</td><td>dept_time</td></tr><tr><td/><td>a _way</td><td>arvl_time</td></tr><tr><td/><td>a_restrict</td><td>way</td></tr><tr><td/><td>a_table</td><td>restrict</td></tr><tr><td/><td>a_body</td><td>table</td></tr><tr><td>Q_AT T R</td><td/><td>range</td></tr><tr><td>'AND</td><td/><td>speed</td></tr><tr><td>, DUMMY</td><td/><td>body</td></tr><tr><td/><td/><td>day</td></tr><tr><td/><td/><td>connect</td></tr><tr><td>QUERY</td><td>I would like</td><td/></tr><tr><td/><td>can I have a list of</td><td/></tr><tr><td/><td>it give me a description of</td><td/></tr><tr><td>OBJECT</td><td>the flights</td><td/></tr><tr><td/><td>the fare on</td><td/></tr><tr><td/><td>a price on a ticket</td><td/></tr><tr><td>origin</td><td>arriving from Dallas</td><td/></tr><tr><td/><td>from Atlanta airport</td><td/></tr><tr><td/><td>between airport B WI</td><td/></tr><tr><td/><td>departing Atlanta</td><td/></tr><tr><td>destin</td><td>and Boston</td><td/></tr><tr><td/><td>arriving in San Francisco</td><td/></tr><tr><td/><td>going to San Francisco</td><td/></tr><tr><td/><td>returning to Atlanta</td><td/></tr><tr><td colspan=\"2\">dept_time leaving after 1:00 pm</td><td/></tr><tr><td/><td>that depart in the afternoon</td><td/></tr><tr><td>way</td><td>round-trip</td><td/></tr><tr><td/><td>return</td><td/></tr><tr><td/><td>that are round-trip</td><td/></tr><tr><td>class</td><td>a class Q W ticket</td><td/></tr><tr><td/><td>a 1st class ticket</td><td/></tr><tr><td/><td colspan=\"2\">which have 1st class service available</td></tr></table>",
"text": "The set of cases in the ATIS task",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td colspan=\"5\">TEST Number of Sentences I Number of I</td><td>Cases</td></tr><tr><td/><td>sentences</td><td>correct</td><td>cases</td><td/><td>correct</td></tr><tr><td>JUN-90</td><td>98</td><td>87 (88.7%) i</td><td>419</td><td colspan=\"2\">]398 (95.0%)</td></tr><tr><td>FEB-91 i</td><td>148</td><td>119 (80.4%)</td><td>713</td><td colspan=\"2\">671 (94.1%)</td></tr></table>",
"text": "Examples of phrases assigned to cases in the training sentences",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td colspan=\"2\">1)Please list all flights between Baltimore and Atlanta</td></tr><tr><td colspan=\"2\">on Tuesdays between 4 in the afternoon and 9 in the eveninc</td></tr><tr><td colspan=\"2\">DUMMY: Please</td></tr><tr><td colspan=\"2\">QUERY: list all</td></tr><tr><td colspan=\"2\">OBJECT: the flights</td></tr><tr><td>origin:</td><td>between Baltimore</td></tr><tr><td>destin:</td><td>and Atlanta</td></tr><tr><td>day:</td><td>on Tuesdays</td></tr><tr><td>time:</td><td>between 4 in the afternoon and 9 in the evening</td></tr><tr><td colspan=\"2\">2) What's the cheapest round-trip airfare on American</td></tr><tr><td colspan=\"2\">flight 1074 from Dallas to Philadelphia</td></tr><tr><td colspan=\"2\">QUERY: What's</td></tr><tr><td>a_fare:</td><td>the cheapest</td></tr><tr><td>a_way:</td><td>round-trip</td></tr><tr><td colspan=\"2\">OBJECT: airfare</td></tr><tr><td>airline:</td><td>on American</td></tr><tr><td>flcode:</td><td>flight 1074</td></tr><tr><td>origin:</td><td>from Dallas</td></tr><tr><td>destin:</td><td>to Philadelphia</td></tr><tr><td colspan=\"2\">3) What kind of ground transportation is there between the</td></tr><tr><td colspan=\"2\">airport and dowrltown Atlanta</td></tr><tr><td colspan=\"2\">QUERY: What kind of</td></tr><tr><td colspan=\"2\">OBJECT: ground transportation</td></tr><tr><td colspan=\"2\">Q_ATTR: is there</td></tr><tr><td>origin:</td><td>between the airport</td></tr><tr><td>destin:</td><td>and downtown Atlanta</td></tr><tr><td colspan=\"2\">4) What are the restrictions on the cheapest fare from</td></tr><tr><td colspan=\"2\">Pittsburgh to Denver and from Denver to San Francisco</td></tr><tr><td colspan=\"2\">QUERY: What are</td></tr><tr><td colspan=\"2\">OBJECT: the restrictions</td></tr><tr><td>fare: i</td><td>on the cheapest fare</td></tr><tr><td>!origin:</td><td>from Pittsburgh</td></tr><tr><td>destin:</td><td>to Denver</td></tr><tr><td>AND:</td><td>and</td></tr><tr><td>origin:</td><td>from Denver</td></tr><tr><td>destin:</td><td>to San Francisco</td></tr><tr><td colspan=\"2\">5)Display flights from Oakland or San Francisco to Denver</td></tr><tr><td colspan=\"2\">Q U E RY: Display</td></tr><tr><td colspan=\"2\">OBJECT: flights</td></tr><tr><td>origin:</td><td>from Oakland</td></tr><tr><td>AND:</td><td>or</td></tr><tr><td>origin:</td><td>San Francisco</td></tr><tr><td>destin:</td><td>to Denver</td></tr></table>",
"text": "Results with two different test sets",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>: Examples of correctly decoded test sentences from FEB-91</td></tr><tr><td>test set</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table/>",
"text": "Examples of incorrectly decoded test sentences from FEB-91 test set",
"num": null,
"type_str": "table"
}
}
}
}