{
"paper_id": "H05-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:34:49.523473Z"
},
"title": "A Salience Driven Approach to Robust Input Interpretation in Multimodal Conversational Systems",
"authors": [
{
"first": "Joyce",
"middle": [
"Y"
],
"last": "Chai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Michigan State University East Lansing",
"location": {
"postCode": "48824",
"region": "MI"
}
},
"email": "jchai@cse.msu.edu"
},
{
"first": "Shaolin",
"middle": [],
"last": "Qu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Michigan State University East Lansing",
"location": {
"postCode": "48824",
"region": "MI"
}
},
"email": "qushaoli@cse.msu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To improve the robustness in multimodal input interpretation, this paper presents a new salience driven approach. This approach is based on the observation that, during multimodal conversation, information from deictic gestures (e.g., point or circle) on a graphical display can signal a part of the physical world (i.e., representation of the domain and task) of the application which is salient during the communication. This salient part of the physical world will prime what users tend to communicate in speech and in turn can be used to constrain hypotheses for spoken language understanding, thus improving overall input interpretation. Our experimental results have indicated the potential of this approach in reducing word error rate and improving concept identification in multimodal conversation.",
"pdf_parse": {
"paper_id": "H05-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "To improve the robustness in multimodal input interpretation, this paper presents a new salience driven approach. This approach is based on the observation that, during multimodal conversation, information from deictic gestures (e.g., point or circle) on a graphical display can signal a part of the physical world (i.e., representation of the domain and task) of the application which is salient during the communication. This salient part of the physical world will prime what users tend to communicate in speech and in turn can be used to constrain hypotheses for spoken language understanding, thus improving overall input interpretation. Our experimental results have indicated the potential of this approach in reducing word error rate and improving concept identification in multimodal conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Multimodal conversational systems promote more natural and effective human machine communication by allowing users to interact with systems through multiple modalities such as speech and gesture (Cohen et al., 1996; Johnston et al., 2002; Pieraccini et al., 2004) . Despite recent advances, interpreting what users communicate to the system is still a significant challenge due to insufficient recognition (e.g., speech recognition) and understanding (e.g., language understanding) performance. Significant improvement in the robustness of multimodal interpretation is crucial if multimodal systems are to be effective and practical for real world applications.",
"cite_spans": [
{
"start": 195,
"end": 215,
"text": "(Cohen et al., 1996;",
"ref_id": "BIBREF8"
},
{
"start": 216,
"end": 238,
"text": "Johnston et al., 2002;",
"ref_id": "BIBREF20"
},
{
"start": 239,
"end": 263,
"text": "Pieraccini et al., 2004)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous studies have shown that, in multimodal conversation, multiple modalities tend to complement each other (Cassell et al. 1994) . Fusing two or more modalities can be an effective means of reducing recognition uncertainties, for example, through mutual disambiguation (Oviatt 1999) . For semantically-rich modalities such as speech and penbased gesture, mutual disambiguation usually happens at the fusion stage where partial semantic representations from individual modalities are disambiguated and combined into an overall interpretation (Johnston 1998 , Chai et al., 2004a . One problem is that some critical but low probability information from individual modalities (e.g., recognized alternatives with low probabilities) may never reach the fusion stage. Therefore, this paper addresses how to use information from one modality (e.g., deictic gesture) to directly influence the semantic processing of another modality (e.g., spoken language understanding) even before the fusion stage.",
"cite_spans": [
{
"start": 112,
"end": 133,
"text": "(Cassell et al. 1994)",
"ref_id": "BIBREF3"
},
{
"start": 274,
"end": 287,
"text": "(Oviatt 1999)",
"ref_id": "BIBREF26"
},
{
"start": 546,
"end": 560,
"text": "(Johnston 1998",
"ref_id": "BIBREF19"
},
{
"start": 561,
"end": 581,
"text": ", Chai et al., 2004a",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular we present a new salience driven approach that uses gesture to influence spoken language understanding. This approach is based on the observation that, during multimodal conversation, information from deictic gestures (e.g., point or circle) on a graphical interface can signal a part of the physical world (i.e., representation of the domain and task) of the application which is salient during the communication. This salient part of the physical world will prime what users tend to communicate in speech and thus in turn can be used to constrain hypotheses for spoken language understanding. In particular, this approach incorporates a notion of salience from deictic gestures into language models for spoken language processing. Our experimental results indicate the potential of this approach in reducing word error rate and improving concept identification from spoken utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following sections, we first introduce the current architecture for multimodal interpretation. Then we describe our salience driven approach and present empirical results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Input interpretation is the identification of semantic meanings in user inputs. In multimodal conversation, user inputs can come from multiple channels (e.g., speech and gesture). Thus, most work on input interpretation is based on semantic fusion that includes individual recognizers and a sequential integration processes as shown in Figure 1 . In this approach, a system first creates possible partial meaning representations from recognized hypotheses (e.g., N-best lists) independently of other modalities. For example, suppose a user says \"what is the price of this painting\" and at the same time points to a position on the screen. The partial meaning representations from the speech input and the gesture input are shown in (a-b) in Figure 1 . The system uses the partial meaning representations to disambiguate each other and combines compatible partial representations together into an overall semantic representation as in Figure1(c) .",
"cite_spans": [
{
"start": 934,
"end": 944,
"text": "Figure1(c)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 336,
"end": 344,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 741,
"end": 749,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "3 Input Interpretation",
"sec_num": "2"
},
{
"text": "In this architecture, the partial semantic representations from individual modalities are crucial for mutual disambiguation during multimodal fusion. The quality of partial semantic representations depends on how individual modalities are processed. For example, if the speech input is recognized as \"what is the prize of this pant\", then the partial representation from the speech input will not be created in the first place. Without a candidate partial representation, it is not likely for multimodal fusion to reach an overall meaning of the input given this late fusion architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3 Input Interpretation",
"sec_num": "2"
},
{
"text": "Thus, a problem with the semantics-based fusion approach is that information from multiple modalities is only used during the fusion stage to disambiguate or combine partial semantic representations. This late use of information from other sources in the pipelined process can cause the loss of some low probability information (e.g., recognized alternatives with low probabilities which did not make it to the Nbest list) which could be very crucial in terms of the overall interpretation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3 Input Interpretation",
"sec_num": "2"
},
{
"text": "It is desirable to use information from multiple sources at an earlier stage before partial representations are created from individual modalities. For example, in ((Bangalore and Johnston 2000), a finite-state approach was applied to tightly couple multimodal language processing (e.g., gesture and speech) and speech recognition to improve recognition hypotheses. To further address this issue, in this paper, we present a salience driven approach that particularly applies gesture information (e.g., pen-based deictic gestures) to robust spoken language understanding before multimodal fusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3 Input Interpretation",
"sec_num": "2"
},
{
"text": "We first give a brief overview on the notion of salience and how salience modeling is applied in earlier work on natural language and multimodal language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work on Salience Modeling",
"sec_num": null
},
{
"text": "Linguistic salience describes the accessibility of entities in a speaker/hearer's memory and its implication in language production and interpretation. Many theories on linguistic salience have been developed, including how the salience of entities affects the form of referring expressions as in the Givenness Hierarchy (Gundel et al., 1993) and the local coherence of discourse as in the Centering Theory (Grosz et al., 1995) . Salience modeling is used for both language generation and language interpretation; the latter is more relevant to our work. Most salience-based interpretation has focused on reference resolution for both linguistic referring expressions (e.g., pronouns) (Lappin and Leass 1995) and multimodal expressions (Hul et al. 1995; Eisenstein and Christoudias 2004) . Visual salience considers an object salient when it attracts a user's visual attention more than others. The cause of such attention depends on many factors including user intention, familiarity, and physical characteristics of objects. For example, an object may be salient when it has some properties the others do not have, such as it is the only one that is highlighted, or the only one of a certain size, category, or color (Landragin et al., 2001) . Visual salience can also be useful in input interpretation, for example, for multimodal reference resolution (Kehler 2000) and cross-modal coreference interpretation (Byron et al., 2005) .",
"cite_spans": [
{
"start": 321,
"end": 342,
"text": "(Gundel et al., 1993)",
"ref_id": "BIBREF14"
},
{
"start": 407,
"end": 427,
"text": "(Grosz et al., 1995)",
"ref_id": "BIBREF12"
},
{
"start": 685,
"end": 708,
"text": "(Lappin and Leass 1995)",
"ref_id": null
},
{
"start": 736,
"end": 753,
"text": "(Hul et al. 1995;",
"ref_id": null
},
{
"start": 754,
"end": 787,
"text": "Eisenstein and Christoudias 2004)",
"ref_id": "BIBREF9"
},
{
"start": 1219,
"end": 1243,
"text": "(Landragin et al., 2001)",
"ref_id": "BIBREF24"
},
{
"start": 1355,
"end": 1368,
"text": "(Kehler 2000)",
"ref_id": "BIBREF22"
},
{
"start": 1412,
"end": 1432,
"text": "(Byron et al., 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work on Salience Modeling",
"sec_num": null
},
{
"text": "We believe that salience modeling should go beyond reference resolution. Our view is that the salience not only affects the use of referring expressions (and thus is useful for interpreting referring expressions), but also influences the linguistic context of the referring expressions. The spoken utterances that contain these expressions tend to describe information relating to the salient objects (e.g., properties or actions). Therefore, our goal in this paper is to take salience modeling a step further from reference resolution, towards overall language understanding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work on Salience Modeling",
"sec_num": null
},
{
"text": "The new salience driven approach is based on the cognitive theory of Conversation Implicature (Grice 1975) and earlier empirical findings of user speech and gesture behavior in multimodal conversation (Oviatt 1999) . The theory of Conversation Implicature (Grice 1975) states that speakers tend to make their contribution as informative as is required (for the current purpose of communication) and not make their contribution more informative than is required. In the context of multimodal conversation that involves speech and pen-based gesture, this theory indicates that users most likely will not make any unnecessary deictic gestures unless those gestures help in communicating users' intention. This is especially true since gestures usually take an extra effort from a user. When a pen-based gesture is intentionally delivered by a user, the information conveyed is often a crucial component in interpretation (Chai et al., 2005) . Figure 2 : The salience driven approach: the salience distribution calculated from gesture is used to tailor language models for spoken language understanding Speech and gesture also tend to complement each other. For example, when a speech utterance is accompanied by a deictic gesture (e.g., point or circle) on a graphical display, the speech input tends to issue commands or inquiries about properties of objects, and the deictic gestures tend to indicate the objects of interest. In addition, as shown in (Oviatt 1999) , the deictic gestures often occur before spoken utterances. Our previous work (Chai et al., 2004b) also showed that 85% of time gestures occurred before corresponding speech units. Therefore, gestures can be used as an earlier indicator to anticipate the content of communication in the subsequent spoken utterances.",
"cite_spans": [
{
"start": 94,
"end": 106,
"text": "(Grice 1975)",
"ref_id": "BIBREF13"
},
{
"start": 201,
"end": 214,
"text": "(Oviatt 1999)",
"ref_id": "BIBREF26"
},
{
"start": 256,
"end": 268,
"text": "(Grice 1975)",
"ref_id": "BIBREF13"
},
{
"start": 918,
"end": 937,
"text": "(Chai et al., 2005)",
"ref_id": "BIBREF4"
},
{
"start": 1450,
"end": 1463,
"text": "(Oviatt 1999)",
"ref_id": "BIBREF26"
},
{
"start": 1543,
"end": 1563,
"text": "(Chai et al., 2004b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 940,
"end": 948,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "4.1 A Salience Driven Approach",
"sec_num": "4"
},
{
"text": "The general idea of the salience based approach is shown in Figure 2 . For each application domain, there is a physical world representation that captures domain knowledge (details are described later). A deictic gesture can activate several objects on the graphical display. This activation will signal a distribution of objects that are salient. The salient objects are mapped to the physical world representation to indicate a salient part of representation that includes relevant properties or tasks related to the salient objects. This salient part of the physical world is likely to be the potential content of the spoken communication, and thus can be used to tailor language models for spoken language understanding. This process is shown in the middle shaded box of Figure 2 . It bridges gesture understanding and language understanding at a stage before multimodal fusion. Note that the use of gesture information can be applied at different stages: during speech recognition to generate hypotheses or post processing of recognized hypotheses before language understanding. In this paper, we focus on the latter.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 68,
"text": "Figure 2",
"ref_id": null
},
{
"start": 775,
"end": 783,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": null
},
{
"text": "The physical world representation includes the following components:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": null
},
{
"text": "\u2022 Domain Model. This component captures the relevant knowledge about the domain including domain objects, properties of the objects, relations between objects, and task models related to objects. Previous studies have shown that domain knowledge can be used to improve spoken language understanding (Wai et al, 2001 ). Currently, we apply a frame-based representation where a frame represents an object (or a type of object) in the domain and frame elements represent attributes and tasks related to the objects. Each frame element is associated with a semantic tag which indicates the semantic content of that element. In the future, the domain model might also include knowledge about the interface, for example, visual properties and spatial relations between objects on the interface.",
"cite_spans": [
{
"start": 299,
"end": 315,
"text": "(Wai et al, 2001",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": null
},
{
"text": "w 1 w n \u2026\u2026 \u2026\u2026 Time t 2 t 3 t n ) (e P n t ) | ( 3 t g e P ) ( 3 t n t \u03b1 ) ( 2 t n t \u03b1 ) ( 1 t n t \u03b1 w i w i+1 t 1 ) | ( 2 t g e P ) | ( 1 t g e P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": null
},
{
"text": "Figure 3: Salience modeling: the salience distribution at time t n is calculated by a joint effect of gestures that happen before t n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": null
},
{
"text": "\u2022 Domain Grammar. This component specifies grammar and vocabularies used to process language inputs. There are two types of representation. The first type is a semantics-based context free grammar where each non-terminal symbol represents a semantic tag (indicating semantic information such as the semantic type of an object, etc). Each word (i.e., the terminal symbol) in the lexicon relates to one or more semantic tags. Some of these semantic tags are directly linked to the frame elements in the domain model since they represent certain properties or tasks. This grammar was manually developed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": null
},
{
"text": "The second type of representation is based on annotated user spoken utterances. The data are annotated in terms of relevant semantic information (i.e., using semantic tags) in the utterance and the intended objects of interest (which are directly linked to the domain model). Based on the annotated data, N-grams can be learned to represent the dependency of language in our domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.2",
"sec_num": null
},
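{
"text": "A minimal sketch of the kind of frame-based domain model and tag lexicon described above; the frame, its elements, and most of the tag names are illustrative (only Demonstrative and AttrPrice appear later in the paper).\n\n# One frame per (type of) domain object; frame elements carry semantic tags that link\n# attributes and tasks of the object to words in the lexicon\nHOUSE_FRAME = {\n    'object_type': 'House',\n    'elements': {\n        'price': {'semantic_tag': 'AttrPrice'},\n        'size': {'semantic_tag': 'AttrSize'},\n        'show_details': {'semantic_tag': 'TaskShowInfo'},\n    },\n}\n\n# Lexicon of the semantics-based grammar: each word relates to one or more semantic tags\nLEXICON = {\n    'price': ['AttrPrice'],\n    'cost': ['AttrPrice'],\n    'this': ['Demonstrative'],\n    'show': ['TaskShowInfo'],\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.2",
"sec_num": null
},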
{
"text": "Based on the physical world representation, our approach supports the following operations: Salience modeling. This operation calculates a salience distribution of entities in the physical world. In our current investigation, we limit the scope of entities to a closed set of objects from our physical world representation since the system has knowledge about those objects. These entities could have different salience values depending on whether they are visible on the graphical display, gestured by a user, or mentioned in the prior conversation. In this paper, we focus on the salience modeling using gesture information only. Salience driven language understanding. This operation maps the salience distribution to the physical world representation and uses the salient world to influence spoken language understanding. Note that, in this paper, we are not concerned with acoustic models for speech recognition, but rather we are interested in the use of the salience distribution to prime language models and facilitate language understanding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.2",
"sec_num": null
},
{
"text": "We use a vector e r to represent entities in the physical world representation. For each entity e e k r \u2208 , we use to represent its salience value at time t n . For all the entities, we use P )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
{
"text": "( k t e P n ) (e n t v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
{
"text": "to represent a salience distribution at time t n . Figure 3 shows a sequence of words with corresponding gestures that occur at t 1 , t 2 , and t 3 . As shown in Figure 3 , the salience distribution at any given time t n is influenced by a joint effect from this sequence of gestures that happen before t n etc. Depending on its time of occurrence, each gesture may have a different impact on the salience distribution at time t n . Note that although each gesture may have a short duration, here we only consider the beginning time of a gesture. Therefore, for an entity e k , its salience value at time t n is computed as follows: ",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 59,
"text": "Figure 3",
"ref_id": null
},
{
"start": 162,
"end": 170,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
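{
"text": "A plausible reading of Equation (1), assuming the salience value is the temporally weighted combination of the selection probabilities contributed by the m gestures preceding t_n (possibly normalized over entities): P_{t_n}(e_k) = \u2211_{i=1}^{m} \u03b1_{t_n}(g_{t_i}) P(e_k | g_{t_i})    (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},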
{
"text": "In Equation (1), m (m \u2265 1) is the number of gestures that have occurred before t n . The different impact of a gesture g at time t i that contributes to the salience distribution at time t n is represented as the weight",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
{
"text": "i t ) ( i n t t g \u03b1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
{
"text": "in Equation (1). Currently, we calculate the weight depending on the temporal distance as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
{
"text": ") ( ] 2000 ) ( exp[ ) ( i n i n t t t t t t g i n \u2265 \u2212 \u2212 = \u03b1 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
{
"text": "Equation (2) indicates that at a given time t n (measured in milliseconds), the closer a gesture (at t i ) is to the time t n , the higher impact this gesture has on the salience distribution (Chai et al., 2004b) .",
"cite_spans": [
{
"start": 192,
"end": 212,
"text": "(Chai et al., 2004b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
{
"text": "It is worth mentioning that a deictic gesture on the graphic display (e.g., pointing and circling) could have ambiguous interpretation by itself. For example,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
{
"text": "given an interface, a point or a circle on the screen could result in selection of different entities with different probabilities. Therefore, in Equation 1, is the selection probability which indicates the likelihood of selecting an entity e given a gesture at time t i . This selection probability is calculated by a function of the distance between the location of the entity and the focus point of the recognized gesture on the display (Chai et al., 2004a) . A normalization factor is incorporated to ensure that the summation of selection probabilities over all possible entities adds up to one.",
"cite_spans": [
{
"start": 440,
"end": 460,
"text": "(Chai et al., 2004a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
{
"text": "When no gesture is involved in a given input, the salience distribution at any given time is a uniform distribution. If one or more gestures are involved, then Equation (1) is used to calculate the salience distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},
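{
"text": "A minimal Python sketch of the salience computation described above. It assumes the weighted-sum reading of Equation (1) with a final normalization, the exponential decay of Equation (2), a simple inverse-distance model for the selection probability, and illustrative field names ('id', 'pos', 'time', 'focus') that are not from the paper.\n\nimport math\n\ndef dist(p, q):\n    # Euclidean distance between an entity position and a gesture focus point (2-D screen coordinates)\n    return math.hypot(p[0] - q[0], p[1] - q[1])\n\ndef gesture_weight(t_n, t_i):\n    # Equation (2): the impact of a gesture at t_i on the salience at t_n decays with temporal distance (milliseconds)\n    return math.exp(-(t_n - t_i) / 2000.0)\n\ndef selection_prob(entities, gesture):\n    # Selection probability P(e | g): entities closer to the gesture focus point are more likely intended; normalized to sum to one\n    raw = {e['id']: 1.0 / (1.0 + dist(e['pos'], gesture['focus'])) for e in entities}\n    z = sum(raw.values())\n    return {k: v / z for k, v in raw.items()}\n\ndef salience_distribution(entities, gestures, t_n):\n    # Assumed form of Equation (1): combine per-gesture selection probabilities, weighted by Equation (2);\n    # with no preceding gesture, fall back to the uniform distribution mentioned in the text\n    prior = [g for g in gestures if g['time'] <= t_n]\n    if not prior:\n        return {e['id']: 1.0 / len(entities) for e in entities}\n    scores = {e['id']: 0.0 for e in entities}\n    for g in prior:\n        w = gesture_weight(t_n, g['time'])\n        sel = selection_prob(entities, g)\n        for k in scores:\n            scores[k] += w * sel[k]\n    z = sum(scores.values())\n    return {k: v / z for k, v in scores.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Modeling",
"sec_num": null
},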
{
"text": "The salience distribution of entities identified based on the gesture information (as described above) is used to constrain hypotheses for language understanding. More specifically, for each onset of a spoken word at time t (i.e., the beginning time stamp of a spoken word), the salience distribution at t can be calculated based on a sequence of gestures that happen before t by Equation (1). This salience distribution can then be used to prime language models for spoken language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salience Driven Spoken Language Understanding",
"sec_num": null
},
{
"text": "We first give a brief background of language modeling. Given an observed speech utterance O, the goal of speech recognition is to find a sequence of words W* so that W P , where P(O|W) is the acoustic model and P(W) is the language model. In traditional speech recognition systems, the acoustic model provides the probability of observing the acoustic features given hypothesized word sequences and the language model provides the probability of a sequence of words. The language model is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": null
},
{
"text": "* arg max ( | ) ( ) OW = ) | ( )... | ( ) | ( ) ( ) ( 1 1 2 1 3 1 2 1 1 \u2212 = n n n w w P w w w P w w P w P w P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": null
},
{
"text": "Using the Markov assumption, the language model can be approximated by a bigram model as in:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": null
},
{
"text": "\u220f = \u2212 = n i i i n w w P w P 1 1 1 ) | ( ) (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": null
},
{
"text": "To improve the speech understanding results for spoken language interfaces, many systems have applied a loosely-integrated approach which decouples the language model from the acoustic model (Zue et al., 1991 , Harper et al., 2000 . This allows the development of powerful language models independent of the acoustic model, for example, utilizing topics of the utterances (Gildea and Hofmann 1999) , syntactic or semantic labels (Heeman 1999) , and linguistic structures (Chelba and Jelinek 2000, Wang and Harper 2002) . Recently, we have seen work on language understanding based on environment (Schuler 2003) and language modeling using visual context (Roy and Mukherjee 2005) . Our salience driven approach is inspired by this earlier work. Here, we do not address the acoustic model of speech recognition, but rather incorporate the salience distribution for language modeling. In particular, our focus is on investigating the effect of incorporating additional information from other modalities (e.g., gesture) with traditional language models.",
"cite_spans": [
{
"start": 191,
"end": 208,
"text": "(Zue et al., 1991",
"ref_id": "BIBREF33"
},
{
"start": 209,
"end": 230,
"text": ", Harper et al., 2000",
"ref_id": "BIBREF15"
},
{
"start": 372,
"end": 397,
"text": "(Gildea and Hofmann 1999)",
"ref_id": "BIBREF10"
},
{
"start": 429,
"end": 442,
"text": "(Heeman 1999)",
"ref_id": "BIBREF16"
},
{
"start": 471,
"end": 482,
"text": "(Chelba and",
"ref_id": "BIBREF7"
},
{
"start": 483,
"end": 505,
"text": "Jelinek 2000, Wang and",
"ref_id": null
},
{
"start": 506,
"end": 518,
"text": "Harper 2002)",
"ref_id": "BIBREF32"
},
{
"start": 596,
"end": 610,
"text": "(Schuler 2003)",
"ref_id": "BIBREF30"
},
{
"start": 654,
"end": 678,
"text": "(Roy and Mukherjee 2005)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": null
},
{
"text": "The calculated salience distribution is used to prime the language model. More specifically, we use a class-based bigram model from (Brown et al, 1992) :",
"cite_spans": [
{
"start": 132,
"end": 151,
"text": "(Brown et al, 1992)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Primed Language Model",
"sec_num": null
},
{
"text": ") | ( ) | ( ) | ( 1 1 \u2212 \u2212 = i i i i i i c c P c w P w w P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Primed Language Model",
"sec_num": null
},
{
"text": "(3) In Equation 3, c i is the class of the word w i , which could be a syntactic class or a semantic class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Primed Language Model",
"sec_num": null
},
{
"text": "is the class transition probability, which reflects the grammatical formation of utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Primed Language Model",
"sec_num": null
},
{
"text": "is the word class probability which measures the probability of seeing a word w i given a class c i . The class-based N-gram model can make better use of limited training data by clustering words into classes. A number of researchers have shown that the class-based N-gram model can successfully improve the performance of speech recognition (Jelinek 1990 , Heeman 1999 , Kneser and Ney 1993 , Samuelsson and Reichl, 1999 .",
"cite_spans": [
{
"start": 342,
"end": 355,
"text": "(Jelinek 1990",
"ref_id": "BIBREF18"
},
{
"start": 356,
"end": 369,
"text": ", Heeman 1999",
"ref_id": "BIBREF16"
},
{
"start": 370,
"end": 391,
"text": ", Kneser and Ney 1993",
"ref_id": "BIBREF23"
},
{
"start": 392,
"end": 421,
"text": ", Samuelsson and Reichl, 1999",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Primed Language Model",
"sec_num": null
},
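{
"text": "A minimal Python sketch of scoring a tagged hypothesis with the class-based bigram of Equation (3); the dictionary-based probability tables, the sentence-start symbol, and the small floor for unseen events are illustrative assumptions, and the tables would be estimated from the annotated data.\n\nimport math\n\ndef sentence_log_prob(tagged_words, p_word_given_class, p_class_trans):\n    # tagged_words: [(word, semantic_class), ...] as produced by the domain grammar\n    # Equation (3): P(w_i | w_{i-1}) = P(w_i | c_i) * P(c_i | c_{i-1})\n    logp = 0.0\n    prev_class = '<s>'  # assumed sentence-start symbol\n    for word, cls in tagged_words:\n        p = p_word_given_class.get((word, cls), 1e-9) * p_class_trans.get((cls, prev_class), 1e-9)\n        logp += math.log(p)\n        prev_class = cls\n    return logp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Primed Language Model",
"sec_num": null
},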
{
"text": "In our approach, the \"class\" used in the classbased bigram model comes from combined semantic and functional classes designed for our domain. For example, \"this\" is tagged as Demonstrative, and \"price\" is tagged as AttrPrice. As shown in Equation (3), there are two types of parameter estimation. In terms of the class transition probability, as in earlier work, we directly use the annotated data. In terms of the word class distribution, we incorporate the notion of salience. We use the salience distribution to dynamically adjust the world class probability as follows: Table 1 : Related information about the evaluation data: user type, the number of turns per user, and the baseline word recognition rate.",
"cite_spans": [],
"ref_spans": [
{
"start": 574,
"end": 581,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Primed Language Model",
"sec_num": null
},
{
"text": ") | ( i i c w P ) ( ) | ( ) | , ( ) | ( k t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Primed Language Model",
"sec_num": null
},
{
"text": "In Equation 4, P is the salience value for an entity at time t i (the onset of the spoken word w i ), which can be calculated by Equation (1). Equation 4indicates that only information associated with the salient entities is used to estimate the word class distribution. In other words, the word class probability favors the salient physical world as indicated by the salience distribution ) ( k t e i k e ) (e P i t v . More specifically, at time t i , given a semantic class c i , the choice of word \"w i \" is dependent on the salient physical world at the moment, which is represented as the salience distribution ) (e P i t v at time t i . For all w i , the summation of this word class probability is equal to one. Furthermore, given an entity , and are not dependent on time t i , but rather on the domain and the use of language expressions. Therefore they can be estimated based on the training data that are annotated in terms of semantic information and the intended objects of interest (as discussed in Section 4.1). Since the annotated data is very limited, the sparse data can become a problem for the maximum likelihood estimation. Therefore, a smoothing technique based on the Katz backoff model (Katz, 1987) is applied. For example, to calculate , if a word w i has one or more occurrences in the training data associated with the class c i and the entity , then its count is discounted by a fraction in the maximum likelihood estimation. If w i does not occur, then this approach backs off to the domain grammar and redistributes the remaining probability mass uniformly among words in the lexicon that are linked with class c i and entity e . ",
"cite_spans": [
{
"start": 1211,
"end": 1223,
"text": "(Katz, 1987)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Primed Language Model",
"sec_num": null
},
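{
"text": "A minimal Python sketch of the salience-primed word class probability, following the reconstructed form of Equation (4) above; the dictionary tables and default values are illustrative, and the Katz backoff described in the text is omitted for brevity.\n\ndef primed_word_class_prob(word, cls, salience, p_word_given_class_entity, p_entity_given_class):\n    # Reconstructed Equation (4): P(w_i | c_i) is re-estimated at the onset time of the word\n    # by weighting the entity-conditioned estimates with the current salience distribution\n    total = 0.0\n    for entity, sal in salience.items():\n        p_w = p_word_given_class_entity.get((word, cls, entity), 0.0)\n        p_e = p_entity_given_class.get((entity, cls), 0.0)\n        total += p_w * p_e * sal\n    return total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Primed Language Model",
"sec_num": null
},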
{
"text": "We evaluated the salience model during post processing recognized hypotheses. Given possible hypotheses from a speech recognizer, we use the salience-based language model to identify the most likely sequence of words. The salience distribution based on gesture was used to favor words that are consistent with the attention indicated by gestures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
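{
"text": "A minimal sketch of this post-processing step: each candidate word sequence from the recognizer is tagged with the domain grammar and rescored with the salience-primed model, and the best-scoring hypothesis is kept; the helper functions reuse the earlier sketches and are illustrative.\n\ndef rescore_nbest(nbest_hypotheses, tag_with_grammar, primed_log_prob):\n    # nbest_hypotheses: list of candidate word sequences (lists of words)\n    # tag_with_grammar(words) -> [(word, semantic_class), ...]\n    # primed_log_prob(tagged_words) -> log probability under the salience-primed bigram model\n    return max(nbest_hypotheses, key=lambda words: primed_log_prob(tag_with_grammar(words)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},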
{
"text": "The data collected from our previous user studies were used in our evaluation (Chai et al., 2004b) . In these studies, users interacted with our multimodal interface using both speech and deictic gestures to find information about real estate properties. In particular, each user was asked to accomplish five tasks. Each of these tasks required the user to retrieve different types of information from our interface. For example, one task was to find the least expensive house in the most populated town. The data were recorded from eleven subjects including five nonnative speakers and six native speakers. Each user's voice was individually trained before the study. Table 1 shows the relevant information about the data such as the total number of inputs (or turns) from each subject, the number of speech alone inputs without any gesture, and the baseline recognition results without using salience-based post processing in terms of the word error rate (WER). In total, we have collected 226 user inputs with an average of eight words per spoken utterance 1 . As shown in Table 1 , the majority of inputs consisted of both speech and gesture. Since currently we only use gesture information in salience modeling, our approach will not affect speech only inputs. To train the salience-based model, we applied the leave-one-out approach. The data from each user was held out as the testing data and the remaining users were used as the training data to acquire relevant probability estimations in Equation (3) and (4). Figure 5 shows the comparison results between the baseline and the salience-based model in terms of word error rate (WER). The word error rate as a result of salience-based post processing is significantly better than that from the baseline recognizer (t = 4.75, p < 0.001). The average WER reduction is about 12%.",
"cite_spans": [
{
"start": 78,
"end": 98,
"text": "(Chai et al., 2004b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 669,
"end": 677,
"text": "Table 1",
"ref_id": null
},
{
"start": 1077,
"end": 1084,
"text": "Table 1",
"ref_id": null
},
{
"start": 1522,
"end": 1530,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "We further evaluated how the salience based model affects the final understanding results. This is because an improvement in WER may not directly lead to an improvement in understanding. We applied our semantic grammar on a sequence of words resulting from both the baseline and the saliencebased post-processing to identify key concepts. In total, there were 686 concepts from the transcribed speech utterances. Table 2 shows the evaluation results. Precision measures the percentage of correctly identified concepts out of the total number of concepts identified based on a sequence of words.",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Recall measures the percentage of correctly identified concepts out of the total number of intended concepts from user's utterance. F-measurement combines precision and recall together as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "1 , Recall Precision Recall Precision ) 1 ( 2 2 = + \u00d7 \u00d7 + = \u03b2 \u03b2 \u03b2 where F",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": ". Table 2 shows that, on average, the concept identification based on the word sequence resulting from the salience-based approach performs better than the baseline in terms of both precision and recall. Figure 6 provides two examples to show the difference between the baseline recognition and the salience-based post processing. The evaluation reported here is only an initial step based on a limited domain. The small scale in the number of objects and the vocabulary size can only demonstrate the potential of the salience-based approach to a limited degree. To further understand the advantages and issues of this approach, we are currently working on a more complex domain with richer concepts and relations, as well as larger vocabularies.",
"cite_spans": [],
"ref_spans": [
{
"start": 2,
"end": 9,
"text": "Table 2",
"ref_id": null
},
{
"start": 204,
"end": 212,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "It is worth mentioning that the goal of this work is to explore whether salience modeling based on other modalities (e.g., gesture) can be used to prime traditional language models to facilitate spoken language processing. The salience driven approach based on additional modalities can be combined with more sophisticated language modeling (e.g., better parameter estimation) in the future. Transcription: How much is this gray house Baseline recognition: How much is this great house Salience-based processing: How much is this gray house Figure 6 : Examples of utterances with baseline recognition and improved recognition from the salience-based processing. Table2. Overall concept identification comparison between the baseline and the salience driven model.",
"cite_spans": [],
"ref_spans": [
{
"start": 541,
"end": 549,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "This paper presents a new salience driven approach to robust input interpretation in multimodal conversational systems. This approach takes advantage of rich information from multiple modalities. Information from deictic gestures is used to identify a part of the physical world that is salient at a given point of communication. This salient part of the physical world is then used to prime language models for spoken language understanding. Our experimental results have shown the potential of this approach in reducing word error rate and improving concept identification from spoken utterances in our application. Although currently we have only investigated the use of gesture information in salience modeling, the salience driven approach can be extended to include other modalities (e.g., eye gaze) and information (e.g., conversation context). Our future work will specifically investigate how to combine information from multiple sources in salience modeling and how to apply the salience models in different early stages of processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "The difference between the number of user inputs reported here and that in(Chai et al., 2004b) was caused by the situation where one intended user input (which was the unit for counting in our previous work) was split into a couple turns (which constitute the new counts here).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by a CAREER grant IIS-0347548 from the National Science Foundation. The authors would like to thank anonymous reviewers for their helpful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Integrating Multimodal Language Processing with Speech Recognition",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bangalore, S. and Johnston, M. 2000. Integrating Multimodal Language Processing with Speech Recognition. In Proceedings of ICSLP.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "P",
"middle": [
"V"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P., Della Pietra, V. J., deSouza, P. V., Lai, J. C, and Mercer, R. L. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Utilizing Visual Attention for Cross-Modal Coreference Interpretation. Spring Lecture Notes in Computer Science: Proceedings of Context-05",
"authors": [
{
"first": "D",
"middle": [],
"last": "Byron",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mampilly",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "83--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byron, D., Mampilly, T., Sharma, V., and Xu, T. 2005. Utilizing Visual Attention for Cross-Modal Coreference Interpretation. Spring Lecture Notes in Computer Science: Proceedings of Context-05, page 83-96.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Modeling the interaction between speech and gesture",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cassell",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Stone",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Douville",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Prevost",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Achorn",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Badler",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Pelachaud",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cassell, J., Stone, M., Douville, B., Prevost, S., Achorn, B., Steedman, M., Badler, N., and Pelachaud, C. 1994. Modeling the interaction between speech and gesture. Cognitive Science Society.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Linguistic Theories in Efficient Multimodal Reference Resolution: an Empirical Investigation",
"authors": [
{
"first": "J",
"middle": [
"Y"
],
"last": "Chai",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Prasov",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Blaim",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "The 10th International Conference on Intelligent User Interfaces (IUI-05)",
"volume": "",
"issue": "",
"pages": "43--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chai, J. Y., Prasov, Z., Blaim, J., and Jin, R. 2005. Linguistic Theories in Efficient Multimodal Reference Resolution: an Empirical Investigation. The 10th International Conference on Intelligent User Interfaces (IUI-05), pp. 43-50, San Diego, CA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Optimization in Multimodal Interpretation",
"authors": [
{
"first": "J",
"middle": [
"Y"
],
"last": "Chai",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "M",
"middle": [
"X"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "Prasov",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chai, J. Y., Hong, P., Zhou, M. X, and Prasov, Z. 2004b. Optimization in Multimodal Interpretation. In Proceedings of ACL, pp. 1-8, Barcelona, Spain.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Probabilistic Approach to Reference Resolution in Multimodal User Interfaces",
"authors": [
{
"first": "J",
"middle": [
"Y"
],
"last": "Chai",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of 9th International Conference on Intelligent User Interfaces (IUI-04)",
"volume": "",
"issue": "",
"pages": "70--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chai, J. Y., Hong, P., and Zhou, M. 2004a. A Probabilistic Approach to Reference Resolution in Multimodal User Interfaces. Proceedings of 9th International Conference on Intelligent User Interfaces (IUI-04), pp. 70-77, Madeira, Portugal.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Structured language modeling",
"authors": [
{
"first": "C",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 2000,
"venue": "Computer Speech and Language",
"volume": "14",
"issue": "4",
"pages": "283--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelba, C. and Jelinek, F. 2000. Structured language modeling. Computer Speech and Language, 14(4):283-332.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Quickset: Multimodal Interaction for Distributed Applications",
"authors": [
{
"first": "P",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mcgee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oviatt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pittman",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Clow",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of ACM Multimedia",
"volume": "",
"issue": "",
"pages": "31--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, P., Johnston, M., McGee, D., Oviatt, S., Pittman, J.; Smith, I., Chen, L., and Clow, J. 1996. Quickset: Multimodal Interaction for Distributed Applications. Proceedings of ACM Multimedia, 31-40.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A salience-based approach to gesture-speech alignment",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "",
"middle": [
"C"
],
"last": "Christoudias",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT/NAACL'04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eisenstein J. and Christoudias. C. 2004. A salience-based approach to gesture-speech alignment. In Proceedings of HLT/NAACL'04.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Topic-based language models using EM",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of Eurospeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gildea, D. and Hofmann, T. 1999. Topic-based language models using EM. In Proceedings of Eurospeech.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Gaze durations during speech reflect word selection and phonological encoding",
"authors": [
{
"first": "Z",
"middle": [
"M"
],
"last": "Griffin",
"suffix": ""
}
],
"year": 2001,
"venue": "Cognition",
"volume": "82",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Griffin, Z. M. 2001. Gaze durations during speech reflect word selection and phonological encoding. Cognition 82, B1-B14.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Centering: A framework for modeling the local coherence of discourse",
"authors": [
{
"first": "B",
"middle": [
"J"
],
"last": "Grosz",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Weinstein",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grosz, B. J., Joshi, A. K., and Weinstein, S. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Logic and Conversation",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "Grice",
"suffix": ""
}
],
"year": 1975,
"venue": "Speech Acts",
"volume": "",
"issue": "",
"pages": "41--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grice, H. P. Logic and Conversation. 1975. In Cole, P., and Morgan, J., eds. Speech Acts. New York, New York: Academic Press. 41-58.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Cognitive Status and the Form of Referring Expressions in Discourse",
"authors": [
{
"first": "J",
"middle": [
"K"
],
"last": "Gundel",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Hedberg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zacharski",
"suffix": ""
}
],
"year": 1993,
"venue": "Language",
"volume": "69",
"issue": "2",
"pages": "274--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gundel, J. K., Hedberg, N., and Zacharski, R. 1993. Cognitive Status and the Form of Referring Expressions in Discourse. Language 69(2):274-307.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Effectiveness of Corpus-Induced Dependency Grammars for Post-processing Speech",
"authors": [
{
"first": "M",
"middle": [
"."
],
"last": "Harper",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Helzerman",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the North American Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "102--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harper, M.., White, C., Wang, W., Johnson, M., and Helzerman, R. 2000. The Effectiveness of Corpus-Induced Dependency Grammars for Post-processing Speech. Proceedings of the North American Association for Computational Linguistics, 102-109.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "POS tags and decision trees for language modeling",
"authors": [
{
"first": "",
"middle": [
"P"
],
"last": "Heeman",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Process (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heeman. P. 1999. POS tags and decision trees for language modeling. In Proceedings of the Conference on Empirical Methods in Natural Language Process (EMNLP).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic Referent Resolution of Deictic and Anaphoric Expressions",
"authors": [
{
"first": "C",
"middle": [],
"last": "Huls",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Classen",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "1",
"pages": "59--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huls, C., Bos, E., and Classen, W. 1995. Automatic Referent Resolution of Deictic and Anaphoric Expressions. Computational Linguistics, 21(1):59-79.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Self-organized language modeling for speech recognition",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1990,
"venue": "Readings in Speech Recognition",
"volume": "",
"issue": "",
"pages": "450--506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, F. 1990. Self-organized language modeling for speech recognition. In Waibel, A. and Lee, K. F. (Eds). Readings in Speech Recognition, pp. 450-506.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unification-based Multimodal parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johnston, M. 1998. Unification-based Multimodal parsing, Proceedings of COLING-ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "MATCH: An Architecture for Multimodal Dialog Systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Visireddy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ehlen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Whittaker",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Maloor",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40 th ACL",
"volume": "",
"issue": "",
"pages": "376--383",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johnston, M., Bangalore, S., Visireddy G., Stent, A., Ehlen, P., Walker, M., Whittaker, S., and Maloor, P. 2002. MATCH: An Architecture for Multimodal Dialog Systems, in Proceedings of the 40 th ACL, Philadelphia, pp. 376-383.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Estimation of probabilities from sparse data for the language model component of a speech recognizer",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Katz",
"suffix": ""
}
],
"year": 1987,
"venue": "IEEE Transactions on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katz, S. M. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(3).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Cognitive Status and Form of Reference in Multimodal Human-Computer Interaction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kehler",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of AAAI'01",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kehler, A. 2000. Cognitive Status and Form of Reference in Multimodal Human-Computer Interaction, Proceedings of AAAI'01.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improved clustering techniques for class-based statistical language modeling",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kneser",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1993,
"venue": "Eurospeech'93",
"volume": "",
"issue": "",
"pages": "973--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kneser, R. and Ney, H. 1993. Improved clustering techniques for class-based statistical language modeling. In Eurospeech'93, pp. 973-976.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Visual Salience and Perceptual Grouping in Multimodal Interactivity",
"authors": [
{
"first": "F",
"middle": [],
"last": "Landragin",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bellalem",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": 2001,
"venue": "First International Workshop on Information Presentation and Natural Multimodal Dialogue",
"volume": "",
"issue": "",
"pages": "151--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Landragin, F., Bellalem, N., and Romary, L. 2001. Visual Salience and Perceptual Grouping in Multimodal Interactivity. In: First International Workshop on Information Presentation and Natural Multimodal Dialogue, Verona, Italy, pp. 151-155.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "An algorithm for pronominal anaphora resolution",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Leass",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lappin, S., and Leass, H. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Mutual Disambiguation of Recognition Errors in a Multimodal Architecture",
"authors": [
{
"first": "S",
"middle": [],
"last": "Oviatt",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of CHI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oviatt, S. 1999. Mutual Disambiguation of Recognition Errors in a Multimodal Architecture. In Proceedings of CHI.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multimodal Conversational Systems for Automobiles",
"authors": [
{
"first": "R",
"middle": [],
"last": "Pieraccini",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Dayandhi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bloom",
"suffix": ""
},
{
"first": "J.-G",
"middle": [],
"last": "Dahan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "B",
"middle": [
"R"
],
"last": "Goodman",
"suffix": ""
},
{
"first": "K",
"middle": [
"V"
],
"last": "Prasad",
"suffix": ""
}
],
"year": 2004,
"venue": "Communications of the ACM",
"volume": "47",
"issue": "1",
"pages": "47--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pieraccini, R., Dayandhi, K., Bloom, J., Dahan, J.-G., Phillips, M., Goodman, B. R., Prasad, K. V., 2004. Multimodal Conversational Systems for Automobiles, Communications of the ACM, Vol. 47, No. 1, pp. 47-49",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Towards Situated Speech Understanding: Visual Context Priming of Language Models",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2005,
"venue": "Computer Speech and Language",
"volume": "19",
"issue": "2",
"pages": "227--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy, D. and Mukherjee, N. 2005. Towards Situated Speech Understanding: Visual Context Priming of Language Models. Computer Speech and Language, 19(2): 227-248.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A class-based Language Model for Large Vocabulary Speech Recognition Extracted from Part-of-Speech Statistics",
"authors": [
{
"first": "C",
"middle": [],
"last": "Samuelsson",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Reichl",
"suffix": ""
}
],
"year": 1999,
"venue": "IEEE ICASSP'99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuelsson, C. and Reichl, W. 1999. A class-based Language Model for Large Vocabulary Speech Recognition Extracted from Part-of-Speech Statistics. In IEEE ICASSP'99.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Using model-theoretic semantic interpretation to guide statistical parsing and word recognition in a spoken language interface",
"authors": [
{
"first": "W",
"middle": [],
"last": "Schuler",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schuler, W. 2003. Using model-theoretic semantic interpretation to guide statistical parsing and word recognition in a spoken language interface. In Proceedings of ACL, Sapporo, Japan.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A Dynamic Semantic Model for Rescoring Recognition Hypothesis",
"authors": [
{
"first": "C",
"middle": [],
"last": "Wai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Pierraccinni",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Meng",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wai, C., Pierraccinni, R., and Meng, H. 2001. A Dynamic Semantic Model for Rescoring Recognition Hypothesis. Proceedings of the ICASSP.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The superARV language model: In Investigating the effectiveness of tightly integrating multiple knowledge sources",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [
"M"
],
"last": "Harper",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "238--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, W. and Harper. M. 2002. The superARV language model: In Investigating the effectiveness of tightly integrating multiple knowledge sources. In Proceedings of EMNLP, 238- 247.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Integration of Speech Recognition and Natural Language Processing in the MIT Voyager System",
"authors": [
{
"first": "V",
"middle": [],
"last": "Zue",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Goodine",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Polifroni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zue, V., Glass, J., Goodine, D., Leung, H., Phillips, M., Polifroni, J., and Seneff, S. 1991. Integration of Speech Recognition and Natural Language Processing in the MIT Voyager System. Proceedings of the ICASSP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Semantics-based multimodal interpretation",
"type_str": "figure",
"num": null
},
"FIGREF4": {
"uris": null,
"text": "Comparison of the baseline and the result from post-processing in terms of WER",
"type_str": "figure",
"num": null
},
"FIGREF5": {
"uris": null,
"text": "What is the population of this town Baseline recognition: What is the publisher of this time Salience-based processing: what is the population of this town Example 2:",
"type_str": "figure",
"num": null
}
}
}
}