{
"paper_id": "2005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:50:50.904078Z"
},
"title": "DialogDesigner -A Tool for Rapid System Design and Evaluation",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Dybkjaer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Prolog Development Center A/S H. J. Holst Vej",
"location": {
"postCode": "3C-5C 2605",
"settlement": "Br\u00f8ndby",
"country": "Denmark"
}
},
"email": "dybkjaer@pdc.dk"
},
{
"first": "Laila",
"middle": [],
"last": "Dybkjaer",
"suffix": "",
"affiliation": {
"laboratory": "Natural Interactive Systems Laboratory University of Southern Denmark",
"institution": "",
"location": {
"addrLine": "Campusvej 55",
"postCode": "5230",
"settlement": "Odense M"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "As spoken dialogue systems mature, the need for rapid development tools increases. We describe such a tool that is currently being used for commercial design, specification and evaluation, and that is in the process of being developed into a complete case tool.",
"pdf_parse": {
"paper_id": "2005",
"_pdf_hash": "",
"abstract": [
{
"text": "As spoken dialogue systems mature, the need for rapid development tools increases. We describe such a tool that is currently being used for commercial design, specification and evaluation, and that is in the process of being developed into a complete case tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Improved recognition and understanding of spoken interaction facilitate the development of higher level tools that may enhance the clarity of spoken dialogue systems (SDSs) and reduce their development time and cost. This paper describes a tool -named DialogDesigner 1which supports SDS developers in rapidly designing and evaluating a dialogue model. In the following Section 2 provides an overall description of Dialog-Designer. Sections 3, 4, 5 and 6 present different aspects of the tool functionality in terms of how to model the dialogue, get various graphical views, run a Wizard-of-Oz (WOZ) simulation session, and extract different presentations in HTML. Sections 7 and 8 describe related work on design and evaluation tools and development tools, respectively. Section 9 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The basis in DialogDesigner is the design window where one can enter and browse a dialogue model, including prompts, conditions, and state transitions. Having entered a dialogue model there are various presentation possibilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DialogDesigner",
"sec_num": "2"
},
{
"text": "One option is to view a graphical presentation of the dialogue model. This presentation can be made more or less detailed depending on what the designer wants to 1 See also www.spokendialogue.dk/DialogDesigner. see. A second option is to run a WOZ simulation. This can be done with users or as part of presentations to and discussions with customers. The simulation is logged and can be saved for later analysis and commenting. The simulation log can also be used normatively to generate test scripts for use in a systematic functionality test. A third option is to extract HTML versions of the entire dialogue as well as of prompt and phrase lists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DialogDesigner",
"sec_num": "2"
},
{
"text": "In the following we explain the design window and the three mentioned main options, and illustrate the tool via the early design of a pizza application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DialogDesigner",
"sec_num": "2"
},
{
"text": "The design window ( Figure 1 ) has at its top three fields for administrative purposes (name of application, version and note) (1). The rest of the window concerns application design. The designer starts by entering a new group (2). A group consists of one or more dialogue states which conceptually belong together and are described by the group. A group or a state can be moved up or down in the emerging dialogue structure (3) using the arrow buttons (2). New states are entered at (4). Here one can also indicate if there is any priority condition (conditions are numbers, not Booleans) for entering the state, grammars needed for this state, and parameters that can be tested in conditions on states or transitions. No grammars are needed if the state does not take input from the user but continues directly to another state.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 28,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dialogue Structure and Prompts",
"sec_num": "3"
},
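The group/state organisation just described is essentially a small data model. As a rough illustration only (the paper does not specify DialogDesigner's internal representation, so all class and field names below are assumptions), such a model could be sketched like this:

```python
# Hypothetical sketch of the dialogue model described above: groups of states,
# numeric priority conditions, per-state grammars and parameters. These names
# are illustrative and are not DialogDesigner's own format.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Transition:
    target: str                      # a state or a group of states
    condition: Optional[str] = None  # must hold for the transition to be taken
    prompt: str = ""                 # optional feedback/bridging prompt

@dataclass
class State:
    name: str
    prompt: str = ""
    priority: Optional[int] = None   # conditions are numbers, not Booleans
    grammars: List[str] = field(default_factory=list)    # empty if no user input
    parameters: List[str] = field(default_factory=list)  # testable in conditions
    transitions: List[Transition] = field(default_factory=list)

@dataclass
class Group:
    name: str
    description: str = ""
    states: List[State] = field(default_factory=list)
```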
{
"text": "A state usually has one or more prompts attached. These are entered by clicking \"edit\" at (5). This leads to a window (not shown) listing all phrases already entered. New phrases can be added and one can compose a prompt for the state by selecting one or more phrases or named sets of indexed phrases and storing them. The resulting text is then shown at (5) when one returns to the design window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Structure and Prompts",
"sec_num": "3"
},
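Prompt composition from a shared phrase inventory, as just described, can be illustrated with a small sketch; the phrase names and pizza-domain texts below are invented for the example and are not taken from the tool:

```python
# Illustrative only: prompts are composed by selecting stored phrases.
phrases = {
    "greet": "Welcome to the pizza service.",
    "ask_size": "Which size would you like?",
    "confirm_size": "You have chosen a {size} pizza.",
}

def compose_prompt(keys, **slots):
    """Concatenate the selected phrases, filling any named slots."""
    return " ".join(phrases[k].format(**slots) for k in keys)

print(compose_prompt(["greet", "ask_size"]))           # state prompt
print(compose_prompt(["confirm_size"], size="large"))  # feedback prompt
```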
{
"text": "To get from one state to another, transitions (6) are needed. Some transitions are globally enabled when input from the user is expected. These may include e.g. request for repetition and no input registered. When there are several such global transitions it may pay off to group them together as done in Figure 1 under the group StandardReactions. Here (Commands) contain user-initiated meta-communication commands, such as help and repeat, while (Events) contain system triggers for meta-communication, such as no input and nothing understood. (Standard) contains default domain value reactions such as price information which the user may request at any time during the dialogue. A state may have several possible transitions leading to different new states (targets) where the choice of transition depends on the user's immediate input or on which infor-mation has been achieved so far. Transitions may target states or groups of states. In the latter case state conditions will determine which state to enter. Conditions on transitions express what must be fulfilled in order to select them. Transitions may also be accompanied by a prompt e.g. to provide feedback on the user's input or bridging to the output for the next state.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 313,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dialogue Structure and Prompts",
"sec_num": "3"
},
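The selection among domain transitions and globally enabled standard reactions depends on conditions over parameters. A minimal, self-contained sketch follows; treating condition strings as Python expressions is purely an assumption for illustration:

```python
# Return the transitions whose conditions hold for the current parameter
# values; global standard reactions (help, repeat, no input, ...) are always
# candidates when user input is expected. Field names are assumptions.
def enabled_transitions(state_transitions, global_transitions, params):
    enabled = []
    for t in state_transitions + global_transitions:
        cond = t.get("condition")
        if not cond or eval(cond, {"__builtins__": {}}, dict(params)):
            enabled.append(t)
    return enabled

domain = [{"target": "AskToppings", "condition": "size_known"}]
standard = [{"target": "Help"}, {"target": "Repeat"}]
print(enabled_transitions(domain, standard, {"size_known": True}))
```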
{
"text": "Transition information is entered at 7where clicking on clone will enable the designer to enter a new transition. Transition prompt texts are entered in the same way as state prompt texts, as explained above. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Structure and Prompts",
"sec_num": "3"
},
{
"text": "Clicking Model in the top menu bar in the design window ( Figure 1 ) opens a new window which allows the designer to see various graphical views of the dialogue (Figure 2 ). The graph part (7) is empty when the designer opens the window. To the left (1) are the groups and states specified in the design window. To the right (4) the designer can choose what he wants the graph to show. This should be done before he starts drawing the graph. Ticking Domain will enable all domain, i.e. taskrelated, transitions to be drawn. Ticking Command and System, respectively, will enable meta-transitions to be shown where System covers meta-transitions triggered by system events and Command covers user-initiated meta-transitions. Incoming and Outgoing allow the designer to see incoming and outgoing transitions, respectively, for a group or a state. Local shows transitions going out of and coming into the same state. Via shows transitions to a state that by default continues to some other state. Whenever the designer ticks one of the options Via, Incoming, Outgoing and Local, and selects a group or a state, the Outgoing (5) and Incoming (6) lists will show the transitions that will be drawn, if any.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 66,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 161,
"end": 170,
"text": "(Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Graphical View",
"sec_num": "4"
},
{
"text": "To draw a group or a state in the graph part of the window (7) one must double-click the group or state at (1). Groups are shown in a double ellipsis to indicate that they can be further expanded, while states are drawn in a single ellipsis. The ellipsis of a selected group or state is shown in red. To expand a selected group or a state and see its transitions as specified at (4) one must click the expand button at (2). To collapse a group again one must double-click the group at (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graphical View",
"sec_num": "4"
},
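The view options (Domain/Command/System together with Incoming/Outgoing/Local) amount to a filter over the transitions that touch the selected group or state. A sketch of such a filter, with assumed field names:

```python
# Illustrative edge filter for the graph view: keep only transitions of the
# ticked kinds that are incoming, outgoing or local (self-loop) with respect
# to the selected node. The "kind"/"source"/"target" fields are assumptions.
def edges_to_draw(transitions, selected, kinds, incoming, outgoing, local):
    drawn = []
    for t in transitions:
        if t["kind"] not in kinds:
            continue
        is_out = t["source"] == selected
        is_in = t["target"] == selected
        if (local and is_in and is_out) or (outgoing and is_out) \
                or (incoming and is_in):
            drawn.append(t)
    return drawn
```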
{
"text": "Domain transition labels are green while system transitions are red and command transition are yellow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graphical View",
"sec_num": "4"
},
{
"text": "The graphical view is well-suited to get an overview of the dialogue structure and see connections at a more or less fine-grained level. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graphical View",
"sec_num": "4"
},
{
"text": "In the design window (Figure 1 ) one may select \"Wizard of Oz\" -> \"Woz\" from the menu bar. Doing this opens a new window as shown in Figure 3 . This window enables the designer to simulate a user-system interaction using the designed dialogue model.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 30,
"text": "(Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 133,
"end": 141,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Wizard of Oz",
"sec_num": "5"
},
{
"text": "The designer starts a dialogue by clicking Start (1, where the button now is labelled Stop because a dialogue is ongoing). This will cause the system utterance for the initial state to be displayed in the Prompt field (2). At the same time all possible transitions from this state are shown in the Next field (4). Which one to choose depends on the user's input which is entered at (3). Entering the user's input does not automatically cause a selection of a transition. This must be done manually. But writing down the user's input means that the log eventually will contain a full dialogue with both system and user utterances. Such dialogues may later be used for testing the application and for further analysis. At (3) it is also possible to write notes to the current dialogue state, user input or transition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wizard of Oz",
"sec_num": "5"
},
{
"text": "The designer selects a transition by double-clicking on it. In doing this the previous system and user turn will be displayed in the log field at (5). At the same time the next system prompt is shown in the Prompt field and the new transition possibilities are shown in the Next field. The designer may copy and save a log for later inspection in the analysis window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wizard of Oz",
"sec_num": "5"
},
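The simulation workflow just described (display the prompt, list the possible transitions, note the user's utterance, select a transition by hand, accumulate a log) can be summarised in a rough console sketch; the data layout and interaction details below are assumptions, not the tool's implementation:

```python
# Console Wizard-of-Oz sketch: the wizard plays the recogniser and makes the
# transition choice manually, so the log ends up with full system/user turns.
def run_woz(states, start):
    log, current = [], start
    while True:
        state = states[current]
        print("SYSTEM:", state["prompt"])
        if not state["transitions"]:
            break
        for i, t in enumerate(state["transitions"]):
            print(f"  [{i}] -> {t['target']}")
        user = input("USER: ")                       # wizard writes it down
        choice = int(input("transition number: "))   # manual selection
        log.append({"system": state["prompt"], "user": user})
        current = state["transitions"][choice]["target"]
    return log
```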
{
"text": "The analysis window is opened from the design windows menu bar \"Wizard of Oz\" -> \"Edit logs\". This window looks quite similar to the Woz window but supports the designer in inspecting, editing and commenting a previously saved log from a simulated interaction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wizard of Oz",
"sec_num": "5"
},
{
"text": "The HTML menu in the design window ( Figure 1) gives access to a number of options for HTML presentations.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 46,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "HTML Presentations",
"sec_num": "6"
},
{
"text": "Phrase and prompt lists and a presentation of the dialogue model may be extracted in HTML. These are helpful for communicating with customers and phrase speakers. The HTML dialogue model can be used for navigating the dialogue via links, cf. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HTML Presentations",
"sec_num": "6"
},
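An HTML rendering of the dialogue model that can be navigated via links, in the spirit of the export described above, might be produced roughly as follows; the markup and the dict-based model layout are assumptions for illustration:

```python
import html

# Emit one heading per state with its prompt and links to the target states,
# so the dialogue can be browsed without access to the design tool itself.
def model_to_html(states):
    parts = ["<html><body>"]
    for name, state in states.items():
        parts.append(f'<h2 id="{name}">{html.escape(name)}</h2>')
        parts.append(f"<p>{html.escape(state['prompt'])}</p>")
        links = ", ".join(f'<a href="#{t["target"]}">{html.escape(t["target"])}</a>'
                          for t in state["transitions"])
        parts.append(f"<p>Next: {links}</p>")
    parts.append("</body></html>")
    return "\n".join(parts)
```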
{
"text": "Other tools than DialogDesigner exist which are meant to support the design and evaluation of SDSs and which support WOZ. Two such tools are Suede [Klemmer et al. 2000] , developed at the University of Washington, and the WOZ tool developed by Richard Breuer [WOZ tool] as a by-product of his work at Scansoft.",
"cite_spans": [
{
"start": 147,
"end": 168,
"text": "[Klemmer et al. 2000]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Design and Evaluation Tools",
"sec_num": "7"
},
{
"text": "Suede offers an interface for each of the three main activities of design, test, and analysis. The design inter-face allows the designer to create example dialogue scripts and a design graph representing the general design solution. For each prompt the audio output may be played if it has been recorded. The test mode enables WOZ simulation. The designer selects a prompt from a list of available prompts given the present state. The selected prompt is played to the user. Based on the user's answer the designer selects again one among the now available prompts, etc. Simulation of recognition errors is supported. The analysis interface is similar to the design interface except for the top of the window which contains user audio input from the last session. Moreover the design graph is annotated with test data which can be played.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Design and Evaluation Tools",
"sec_num": "7"
},
{
"text": "The WOZ tool developed by Richard Breuer offers interfaces for the three main activities of design, WOZ simulation and export. In the design mode the designer can specify the dialogue design in terms of prompts, questions and concepts. Like in DialogDesigner but contrary to Suede this interface is textual and not graphical. However, one has -like in DialogDesignerthe option to view a graphical version of the designed dialogue model. In WOZ mode the designer chooses the output to the user from a list of possible next prompts or questions depending on the user's input. The export activity is facilitated from a menu point in the design window. There are several export possibilities, including export to XML, HTML or HDDL (a proprietary programming language used by the SpeechMania platform [Aust et al. 1995] ). Figure 5 gives a rough comparison of which features are included in DialogueDesigner, Suede and Woz tool.",
"cite_spans": [
{
"start": 796,
"end": 814,
"text": "[Aust et al. 1995]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 818,
"end": 826,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Related Design and Evaluation Tools",
"sec_num": "7"
},
{
"text": "IVR tools extended with recognition facilities, such as HotVoice from Dolphin and Edify, may also be seen as related work. Both these examples offer a graphical interface for dialogue flow design. In addition HotVoice also offers the possibility to edit the program text generated via the graphical interface or write the design directly in the HotVoice language. The language used by HotVoice as well as the one used by Edify are proprietary languages just like HDDL. A major difference between DialogDesigner and the IVR tools is that the possibilities for designing a dialogue using an IVR tool are fairly low-level. IVR tools are fine for specifying dialogues as a flow diagram. However, it would be difficult to use them for the design of complex dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Design and Evaluation Tools",
"sec_num": "7"
},
{
"text": "Spoken dialogue platforms such as SpeechMania, Envox 6 VoiceXML Studio (both also support IVR), OpenSpeech, and the CSLU Toolkit are more aimed at implementation. To different extents they offer tools like \"standard dialogues\" for \"best practices\" in user interface design, such as entering a pin code.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Design and Evaluation Tools",
"sec_num": "7"
},
{
"text": "However, common to these tools is that they focus on the implementation rather than on the modelling and evaluation -they are not case tools. And they do not focus on presentation to customers and users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Design and Evaluation Tools",
"sec_num": "7"
},
{
"text": "We have described DialogDesigner which is a tool in support of SDS dialogue design and evaluation. It focuses on communication and modelling flexibility as argued in [Dybkjaer and Dybkjaer 2004] . The HTML extracts, graph views and simulation mode provide strong support for communication with customers and domain experts which is important in real-life projects. The ability to place conditions on states, transitions and prompts provides a useful flexibility in dialogue modelling.",
"cite_spans": [
{
"start": 166,
"end": 194,
"text": "[Dybkjaer and Dybkjaer 2004]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "Three next tool development and extension steps are planned. They include features for enhanced design process support (cf. Figure 5 ) as well as implementation support (code generation), transcription, and synthesis. Code generation will allow the automatic generation of VoiceXML code based on the design description presented above. Automatic code generation has the potential to save considerable effort. However, it will be a challenge to flexibly support e.g. agent or problem solving approaches. For transcription we envision a tool comparable to the TranscriptionStation included in the SpeechMania platform. It requires that spoken input is recorded and that the recognised utterances are used as the basis for the transcription process. The synthesis extension must allow the user of DialogDesigner to either record output phrases for use in system simulations or use speech synthesis for the same purpose. ",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 132,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
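Since VoiceXML generation is only planned, the mapping from the design description to code is not defined in the paper; purely as a hypothetical illustration, a state with a prompt and a single unconditional continuation could map onto a VoiceXML form along these lines:

```python
# Hypothetical state-to-VoiceXML mapping; the element layout is an assumption.
def state_to_vxml(name, prompt, next_state):
    return (
        f'<form id="{name}">\n'
        "  <block>\n"
        f"    <prompt>{prompt}</prompt>\n"
        f'    <goto next="#{next_state}"/>\n'
        "  </block>\n"
        "</form>"
    )

print(state_to_vxml("Welcome", "Welcome to the pizza service.", "AskSize"))
```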
{
"text": "Must be coded.3 Has recognition as part of the running system but recognition cannot be tested during simulation.4 By using SpeechMania tools on generated code. S Sound must be transcribed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Philips automatic train timetable information system",
"authors": [
{
"first": "Harald",
"middle": [],
"last": "Aust",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Oerder",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Stenbiss",
"suffix": ""
}
],
"year": 1995,
"venue": "Speech Communication",
"volume": "17",
"issue": "",
"pages": "249--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harald Aust, Martin Oerder, F. Seide and V. Stenbiss: The Philips automatic train timetable information system. Speech Communication 17, 1995, 249-262.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hans Dybkjaer and Laila Dybkjaer: Modeling Complex Spoken Dialog",
"authors": [
{
"first": "",
"middle": [],
"last": "Cslu Toolkit",
"suffix": ""
}
],
"year": 2004,
"venue": "IEEE Computer",
"volume": "",
"issue": "",
"pages": "32--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CSLU Toolkit: http://cslu.cse.ogi.edu/toolkit/ Hans Dybkjaer and Laila Dybkjaer: Modeling Complex Spoken Dialog. IEEE Computer, August 2004, 32-40.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The 13th Annual ACM Symposium on User Interface Software and Technology: UIST 2000",
"authors": [
{
"first": "R",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Anoop",
"middle": [
"K"
],
"last": "Klemmer",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "James",
"middle": [
"A"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Nadeem",
"middle": [],
"last": "Landay",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Aboobaker",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott R. Klemmer, Anoop K. Sinha, Jack Chen, James A. Landay, Nadeem Aboobaker, and Annie Wang: SUEDE: A Wizard of Oz Prototyping Tool for Speech User Interfaces. CHI Letters, The 13th Annual ACM Symposium on User Interface Software and Technology: UIST 2000. 2(2): 1-10.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The design window. Red numbers are referenced in the text."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The graphical view. Red numbers are referenced in the text."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 4, without having access to the DialogDesigner."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The simulation window. The log is stored in XML and may later be analysed in a similar window, or the RTF-format in the right-most pane may be copied to another document. Red numbers are referenced in the text."
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Excerpt of HTML presentation."
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Tool comparison. +: Has feature. -: Does not have feature. ?: Unknown, *: In pipeline"
}
}
}
}