{ "paper_id": "U06-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:09:33.333506Z" }, "title": "Using Dialogue Acts to Suggest Responses in Support Services via Instant Messaging", "authors": [ { "first": "Edward", "middle": [], "last": "Ivanovic", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Melbourne", "location": {} }, "email": "edwardi@csse.unimelb.edu.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Instant messaging dialogue is real-time, text-based computer-mediated communication conducted over the Internet. Messages sent over instant messaging can be encoded We propose a method of using dialogue acts to predict utterances in taskoriented dialogue. Dialogue acts provide a semantic representation of utterances in a dialogue. An evaluation using a dialogue simulation program shows that our proposed method of predicting responses provides useful suggestions for almost all response types.", "pdf_parse": { "paper_id": "U06-1023", "_pdf_hash": "", "abstract": [ { "text": "Instant messaging dialogue is real-time, text-based computer-mediated communication conducted over the Internet. Messages sent over instant messaging can be encoded We propose a method of using dialogue acts to predict utterances in taskoriented dialogue. Dialogue acts provide a semantic representation of utterances in a dialogue. An evaluation using a dialogue simulation program shows that our proposed method of predicting responses provides useful suggestions for almost all response types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Support services in many domains have traditionally been provided over the telephone: when customers have queries, they dial a support number and speak to a support representative. Recent years have seen an increasing trend in support services provided over the Internet. Many companies have web sites with Frequently Asked Questions (FAQs), and also offer e-mail support. More recently, real-time support via online chat sessions is being offered where customers and support representatives type short messages to each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Chat sessions are conducted over a network, such as the Internet, where textual messages can be sent and received between interlocutors in real-time. These chat sessions are commonly referred to as instant messaging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Support services that are conducted via instant messaging vary from being person-person dialogue, similar to traditional call centres, through to being entirely automated where customers engage in dialogue with a computer program. Commercial software is available to partially automated online support by suggesting responses to a human agent, which may then be accepted or overwritten. The research presented in this paper aims to provide a degree of natural language understanding to assist in automating task-oriented dialogue, such as support services, by suggesting utterances during the dialogue. 
We apply various probabilistic methods to improve discourse modelling in the support services domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In previous work, we collected a small corpus of task-oriented dialogues between customers and support representatives from the MSN Shopping online support service (Ivanovic, 2005b) . The service is designed to assist potential customers with finding various items for sale on the MSN Shopping web site. A sample from one of the dialogues in this corpus is shown in Table 1 .", "cite_spans": [ { "start": 164, "end": 181, "text": "(Ivanovic, 2005b)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 366, "end": 373, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The research presented here advances previous work which examined various models and techniques to predict dialogue acts in task-oriented instant messaging. In Ivanovic (2005b) , the MSN Shopping corpus was collected and a gold standard produced by labelling the utterances with dialogue acts. Probabilistic models were then trained to predict dialogue acts given a sequence of utterances. Ivanovic (2005a) examined probabilistic and linguistic methods to automatically segment messages from the same corpus into utterances. The present paper concludes this work by applying the models to a dialogue simulation program which suggests utterance responses during a dialogue. The performance of the suggested utterances is then evaluated.", "cite_spans": [ { "start": 161, "end": 177, "text": "Ivanovic (2005b)", "ref_id": "BIBREF1" }, { "start": 391, "end": 407, "text": "Ivanovic (2005a)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our dialogue act tag set contains 12 dialogue acts, which are intended to represent the illocutionary force of an utterance. The tags were derived in Ivanovic (2005b) by manually labelling the MSN Shopping corpus using the tags that seemed appropriate from a list of 42 tags in Stolcke et al. (2000) .", "cite_spans": [ { "start": 150, "end": 166, "text": "Ivanovic (2005b)", "ref_id": "BIBREF1" }, { "start": 278, "end": 299, "text": "Stolcke et al. (2000)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The MSN Shopping corpus we use comprises approximately 550 utterances and 6,500 words. Ivanovic (2005b) describes the manual process of segmenting the messages into utterances and labelling the utterances with dialogue act tags to produce a gold standard version of the data. Kappa analysis on both the labelling and segmentation tasks was conducted with results showing high interannotator agreement (Ivanovic, 2005a) .", "cite_spans": [ { "start": 87, "end": 103, "text": "Ivanovic (2005b)", "ref_id": "BIBREF1" }, { "start": 401, "end": 418, "text": "(Ivanovic, 2005a)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "As part of a high-level, end-to-end evaluation of dialogue act prediction and its usefulness in semi-automated dialogue systems, we developed a program that simulates a live conversation while suggesting responses. 
The suggested utterances are ranked by their respective probabilities given the dialogue history.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "3" }, { "text": "We use leave-one-out cross-validation, training the system on all but one dialogue in our corpus. Following training, a customer support scenario is played out using the one dialogue that was not used for training, known as the target dialogue. The aim is to replicate as closely as possible the utterances in the target dialogue. The process is repeated for each dialogue in our corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "3" }, { "text": "Our interface displays a ranked list of suggested dialogue acts and utterances. The dialogue acts are ranked from highest to lowest probability as determined by the naive Bayes model. The utterances within the dialogue acts are ranked by their frequency count during training. However, many utterances are seen only once, in which case the ordering among them is effectively random since their frequencies are equal. Our evaluation focuses only on the dialogue-act rankings, not the utterance rankings. When a dialogue act is selected in the \"Suggestions\" list, the list of utterances is updated to show the relevant utterances for that dialogue act.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "3" }, { "text": "Our support dialogue simulation program showed that it is possible to accurately predict many utterances using dialogue acts; 61% of utterances were correctly predicted within the top three ranked dialogue acts: 22% were in the first rank and 27% in the second.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "3" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automatic utterance segmentation in instant messaging dialogue", "authors": [ { "first": "Edward", "middle": [], "last": "Ivanovic", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Australasian Language Technology Workshop", "volume": "", "issue": "", "pages": "241--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Ivanovic. 2005a. Automatic utterance segmentation in instant messaging dialogue. In Proceedings of the Australasian Language Technology Workshop, pages 241-249, Sydney, NSW, Australia, December. Australasian Language Technology Association.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dialogue act tagging for instant messaging chat sessions", "authors": [ { "first": "Edward", "middle": [], "last": "Ivanovic", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Student Research Workshop", "volume": "", "issue": "", "pages": "79--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Ivanovic. 2005b. Dialogue act tagging for instant messaging chat sessions. In Proceedings of the ACL Student Research Workshop, pages 79-84, Ann Arbor, Michigan, USA, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Dialogue act modeling for automatic tagging and recognition of conversational speech", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Bates", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Carol", "middle": [], "last": "Van Ess-Dykema", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Ries", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Meteer", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "3", "pages": "339--373", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Elizabeth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339-373.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "html": null, "text": "", "type_str": "table", "content": "
Table 1: An example of the beginning of a dialogue in our corpus showing utterance boundaries and dialogue-act tags in superscript.
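
The Evaluation and Results text embedded above describes the suggestion mechanism only in prose: a naive Bayes model ranks dialogue acts given the dialogue history, candidate utterances within each act are ranked by their training frequency, and the whole pipeline is evaluated with leave-one-out cross-validation. The Python sketch below illustrates one way such a pipeline could be structured; it is not the paper's implementation. The class and function names (ActSuggester, rank_acts, suggest, leave_one_out_ranks), the data shape, and the simplification of the dialogue history to the single preceding dialogue act are assumptions made for this example.

```python
from collections import Counter, defaultdict

# Assumed (hypothetical) data shape -- not taken from the MSN Shopping corpus:
# dialogues = [[("Conventional-opening", "Hello, how may I help you?"),
#               ("Request", "I'm looking for digital cameras.")], ...]

class ActSuggester:
    """Suggest dialogue acts and candidate utterances for the next turn."""

    def __init__(self):
        self.act_counts = Counter()               # how often each act occurs
        self.trans_counts = defaultdict(Counter)  # previous act -> next act counts
        self.utterances = defaultdict(Counter)    # act -> utterance frequencies

    def train(self, dialogues):
        for dialogue in dialogues:
            prev_act = "<start>"
            for act, utterance in dialogue:
                self.act_counts[act] += 1
                self.trans_counts[prev_act][act] += 1
                self.utterances[act][utterance] += 1
                prev_act = act

    def rank_acts(self, prev_act):
        """Rank acts by P(act) * P(prev_act | act), with add-one smoothing."""
        total = sum(self.act_counts.values())
        num_acts = len(self.act_counts) + 1       # +1 for the <start> pseudo-act
        scores = {}
        for act, count in self.act_counts.items():
            prior = count / total
            likelihood = (self.trans_counts[prev_act][act] + 1) / (count + num_acts)
            scores[act] = prior * likelihood
        return sorted(scores, key=scores.get, reverse=True)

    def suggest(self, prev_act, per_act=3):
        """Acts ranked by probability; utterances ranked by training frequency."""
        return [(act, [u for u, _ in self.utterances[act].most_common(per_act)])
                for act in self.rank_acts(prev_act)]

def leave_one_out_ranks(dialogues):
    """Replay each held-out dialogue and record the rank of its gold acts."""
    ranks = Counter()
    for i, target in enumerate(dialogues):
        model = ActSuggester()
        model.train(dialogues[:i] + dialogues[i + 1:])
        prev_act = "<start>"
        for gold_act, _ in target:
            ranking = model.rank_acts(prev_act)
            position = ranking.index(gold_act) + 1 if gold_act in ranking else None
            ranks[position] += 1
            prev_act = gold_act
    return ranks
```

Tallying the rank of the gold dialogue act for every utterance in each held-out dialogue, as leave_one_out_ranks does, is the kind of bookkeeping behind figures such as the reported 61% of utterances whose act falls within the top three ranked suggestions.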