{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:21:07.457749Z" }, "title": "Task-Oriented Dialog Systems for Dravidian Languages", "authors": [ { "first": "Tushar", "middle": [], "last": "Kanakagiri", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "" }, { "first": "Karthik", "middle": [], "last": "Radhakrishnan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Task-oriented dialog systems help a user achieve a particular goal by parsing user requests to execute a particular action. These systems typically require copious amounts of training data to effectively understand the user intent and its corresponding slots. Acquiring large training corpora requires significant manual effort in annotation, rendering its construction infeasible for low-resource languages. In this paper, we present a two step approach for automatically constructing task-oriented dialogue data in such languages by making use of annotated data from high resource languages. First, we use a machine translation (MT) system to translate the utterance and slot information to the target language. Second, we use token prefix matching and mBERT based semantic matching to align the slot tokens to the corresponding tokens in the utterance. We hand-curate a new test dataset in two low-resource Dravidian languages and show the significance and impact of our training dataset construction using a stateof-the-art mBERT model-achieving a Slot F 1 of 81.51 (Kannada) and 78.82 (Tamil) on our test sets.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Task-oriented dialog systems help a user achieve a particular goal by parsing user requests to execute a particular action. These systems typically require copious amounts of training data to effectively understand the user intent and its corresponding slots. Acquiring large training corpora requires significant manual effort in annotation, rendering its construction infeasible for low-resource languages. In this paper, we present a two step approach for automatically constructing task-oriented dialogue data in such languages by making use of annotated data from high resource languages. First, we use a machine translation (MT) system to translate the utterance and slot information to the target language. Second, we use token prefix matching and mBERT based semantic matching to align the slot tokens to the corresponding tokens in the utterance. We hand-curate a new test dataset in two low-resource Dravidian languages and show the significance and impact of our training dataset construction using a stateof-the-art mBERT model-achieving a Slot F 1 of 81.51 (Kannada) and 78.82 (Tamil) on our test sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the surge in popularity of digital assistants (Google, Siri, Alexa) task-oriented dialog (TOD) systems have become commonplace in NLP research today. TOD systems are designed to complete a particular user goal by understanding requests from users and providing relevant information (Liu and Lane, 2018) . 
They typically consist of four major components -Natural Language Understanding (Chen et al., 2016) , Dialog State Tracking (Rastogi et al., 2020; Campagna et al., 2020) , Dialog Policy Learning (Takanobu et al., 2019; Li et al., 2020) , and Response Generation (Kummerfeld et al., 2019; Galley et al., 2019; Kale and Rastogi, 2020) . In this work, we focus on NLU and its two subtasks -Intent Detection and Slot Filling. Intent Detection is typically cast as a sequence classification problem where the task is to classify the purpose or goal that underlies a user utterance into one of several predefined classes called intents (e.g. check-sunrise, set-alarm). The brevity and succinctness of the utterances, coupled with the requirement to scale to different domains, pose challenges for Intent Detection.", "cite_spans": [ { "start": 287, "end": 307, "text": "(Liu and Lane, 2018)", "ref_id": "BIBREF23" }, { "start": 390, "end": 409, "text": "(Chen et al., 2016)", "ref_id": "BIBREF5" }, { "start": 410, "end": 432, "text": "(Rastogi et al., 2020;", "ref_id": "BIBREF29" }, { "start": 433, "end": 455, "text": "Campagna et al., 2020)", "ref_id": "BIBREF3" }, { "start": 458, "end": 481, "text": "(Takanobu et al., 2019;", "ref_id": "BIBREF31" }, { "start": 482, "end": 498, "text": "Li et al., 2020)", "ref_id": "BIBREF20" }, { "start": 508, "end": 533, "text": "(Kummerfeld et al., 2019;", "ref_id": "BIBREF18" }, { "start": 534, "end": 554, "text": "Galley et al., 2019;", "ref_id": "BIBREF8" }, { "start": 555, "end": 578, "text": "Kale and Rastogi, 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Slot Filling involves identifying the intent arguments and is typically cast as a sequence labelling problem using the BIO notation (see Table 1 for an example). Prior research on the two tasks has mainly focused on English, reporting excellent performance owing to the availability of large amounts of high-quality annotated data (Schuster et al., 2018; Wu et al., 2020) . Such performance has not been achieved in low-resource languages due to the lack of such data, which is expensive to construct.", "cite_spans": [ { "start": 337, "end": 360, "text": "(Schuster et al., 2018;", "ref_id": "BIBREF30" }, { "start": 361, "end": 377, "text": "Wu et al., 2020)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we present a simple but effective method to automatically create annotated training data for slot filling and intent detection in low-resource languages by making use of available English data. First, we independently translate utterances and annotated slots in English to the target language. Second, to align the slots with the utterance in the generated translation, we make use of the morphology and semantics of the target language (\u00a73). We conduct experiments on two Dravidian languages, Tamil (tam) and Kannada (kan), and evaluate our methods on hand-annotated test sets of 600 utterances across the two languages. Tamil and Kannada belong to the South Dravidian (Tamil-Kannada) group of languages (Thavareesan and Mahesan, 2019, 2020a,b) . 
By training a state-of-the-art multilingual BERT (mBERT; Devlin et al., 2018) model for slot filling and intent detection, we show in \u00a76 that our simple alignment heuristics outperform prior approaches relying on existing word alignment methods 1 .", "cite_spans": [ { "start": 705, "end": 728, "text": "(Thavareesan and Mahesan, 2019, 2020a,b)", "ref_id": null }, { "start": 792, "end": 813, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "TOD Systems -In recent years, there has been substantial work on building TOD systems, broadly around two themes -building out each component of the system independently (Chen et al., 2016; Campagna et al., 2020; Takanobu et al., 2019; Kummerfeld et al., 2019) and building end-to-end TOD systems (Hosseini-Asl et al., 2020; Ham et al., 2020; Liang et al., 2020) . Though end-to-end systems show better generalization, they usually require larger amounts of training data and may not be feasible for low-resource settings. In this work, we primarily tackle the NLU component, which performs intent detection and slot identification.", "cite_spans": [ { "start": 170, "end": 189, "text": "(Chen et al., 2016;", "ref_id": "BIBREF5" }, { "start": 190, "end": 212, "text": "Campagna et al., 2020;", "ref_id": "BIBREF3" }, { "start": 213, "end": 235, "text": "Takanobu et al., 2019;", "ref_id": "BIBREF31" }, { "start": 236, "end": 260, "text": "Kummerfeld et al., 2019)", "ref_id": "BIBREF18" }, { "start": 288, "end": 315, "text": "(Hosseini-Asl et al., 2020;", "ref_id": "BIBREF13" }, { "start": 316, "end": 333, "text": "Ham et al., 2020;", "ref_id": "BIBREF11" }, { "start": 334, "end": 353, "text": "Liang et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Since these tasks are framed as sequence classification and tagging tasks, sentence encoder models have been used to tackle them. Prior works have utilized Recurrent Neural Networks (Liu and Lane, 2016; Goo et al., 2018; E et al., 2019) and, more recently, BERT (Chen et al., 2019) .", "cite_spans": [ { "start": 182, "end": 202, "text": "(Liu and Lane, 2016;", "ref_id": "BIBREF22" }, { "start": 203, "end": 220, "text": "Goo et al., 2018;", "ref_id": "BIBREF9" }, { "start": 221, "end": 236, "text": "E et al., 2019)", "ref_id": "BIBREF7" }, { "start": 262, "end": 281, "text": "(Chen et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Multilingual TOD -Though there currently exist multiple large-scale datasets for TOD (Byrne et al., 2019; Budzianowski et al., 2018; Hemphill et al., 1990) , they are monolingual English corpora. Schuster et al. (2018) released the Facebook TOD dataset, which contains utterances in three languages -English, Spanish and Thai -geared towards cross-lingual transfer of dialog. On this dataset, there has been work on zero- and few-shot transfer using Latent Variable Transfer (Liu et al., 2019) and Mixed Language Training (Liu et al., 2020) , but these do not display consistent gains over augmentation with MT+word alignment data.", "cite_spans": [ { "start": 85, "end": 105, "text": "(Byrne et al., 2019;", "ref_id": "BIBREF2" }, { "start": 106, "end": 132, "text": "Budzianowski et al., 2018;", "ref_id": "BIBREF1" }, { "start": 133, "end": 155, "text": "Hemphill et al., 1990)", "ref_id": "BIBREF12" }, { "start": 196, "end": 218, "text": "Schuster et al. 
(2018)", "ref_id": "BIBREF30" }, { "start": 472, "end": 490, "text": "(Liu et al., 2019)", "ref_id": "BIBREF24" }, { "start": 517, "end": 535, "text": "(Liu et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "On Indian languages, there have been works towards Indic and Code-Switched TOD systems (Jayarao and Srivastava, 2018; Banerjee et al., 2018; Mandl et al., 2020) but these datasets are manually constructed and are smaller in size. Furthermore, they're primarily for Hindi and there currently exists no dataset for languages like Kannada, Telugu, etc.", "cite_spans": [ { "start": 87, "end": 117, "text": "(Jayarao and Srivastava, 2018;", "ref_id": "BIBREF14" }, { "start": 118, "end": 140, "text": "Banerjee et al., 2018;", "ref_id": "BIBREF0" }, { "start": 141, "end": 160, "text": "Mandl et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Auto-construction of datasets -Given the cost and difficulty in acquiring annotators, prior works have employed methods to create synthetic training data. use Google Translate to create synthetic utterances in Indic languages towards the task of spoken intent detection but do not tackle the slot filling task. More recently L\u00f3pez de Lacalle et al. (2020) employed a Seq2Seq translation and word alignment to project the Spanish slot tags to Basque. However, we show that given the flexible word ordering and rich morphology, word alignment systems do not work particularly well for Dravidian languages. Furthermore, since Dravidian languages do not have large open-source corpora to learn MT and alignment systems reliably, we utilize Google Translate API, morphology and semantics based heuristic slot aligner which does not require any parallel data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The training dataset for the low-resource languages is constructed from the Facebook Multilingual Task-Oriented Dialog dataset (Schuster et al., 2018) which contains utterances in English, Spanish and Thai tagged with the corresponding intent and slot labels by hand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Construction", "sec_num": "3" }, { "text": "Our dataset construction consists of two steps -Translation and slot assignment. The translation phase converts the English utterances to their target language equivalents and the slot assignment phase transfers slot labels from the English phrases to their target language equivalents. As seen in Figure 1 , words often get translated to multiple words/don't have an equivalent in the translated language ('the' in the sentence is folded into 'groomer' in the target language), making the slot assignment problem non-trivial. For the translation phase, we experiment with two methods -Google Translate and a transformer based Seq2Seq model.", "cite_spans": [], "ref_spans": [ { "start": 298, "end": 306, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Auto-construction from English data", "sec_num": "3.1" }, { "text": "Previous works have tackled slot transfer using Word Alignment (L\u00f3pez de Lacalle et al., 2020; Schuster et al., 2018) by first predicting the word alignment between the translations and then copying over the labels from the source language words to their corresponding target language equivalents. 
Word alignment maximises sentence-level alignment probabilities and is usually learned from parallel sentence corpora. Furthermore, owing to the rich morphology and flexible word order of Dravidian languages, these models typically have high alignment error rates. To transfer the slot spans from English to our target language, we only require the alignments of the slot spans and not the entire utterance. We tackle this by translating the annotated English spans to our target language and using various heuristics to align the slots to the utterance. Figure 1 highlights the matching token prefixes between slots and utterances, and shows how span expansion helps label the \u0c85\u0ca8\u0cc1\u0ccd\u0ca8 token correctly, yielding a contiguous span and accounting for noise in our alignment technique.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auto-construction from English data", "sec_num": "3.1" }, { "text": "We use a simple word match as our baseline, where we identify words from utterances that match words from the annotated slots. This baseline achieves a poor Slot F1 score owing to the variation in the translation of certain words when translated independently versus as part of a sentence. This can be attributed to the morphologically rich nature of Indian languages, where the context of the inflection can lead to different suffixes during translation for each word. To tackle this, as seen in Figure 1 , we use a technique that matches a word from the annotated slot to the word from the utterance that has the maximum prefix overlap. This accounts for a wide variety of such variations and greatly helps in the automatic construction of the low-resource dataset.", "cite_spans": [], "ref_spans": [ { "start": 491, "end": 499, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Auto-construction from English data", "sec_num": "3.1" }, { "text": "Finally, we apply a span expansion technique where we assign the labels of the aligned slot tokens to those intra-span tokens that did not align with any slot. This helps us obtain a contiguous span, which is a requirement of sequence tagging models. It also helps account for errors in our alignment approach when certain intra-span tokens do not get aligned with any utterance token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auto-construction from English data", "sec_num": "3.1" }, { "text": "Upon analyzing the unmatched words from our aligner, we noticed that words can get translated differently when used in a complete sentence and when used individually. These variations could be due to transliteration (a word gets transliterated when used in the slot phrase but translated when used in the utterance), synonymy (the phrase and utterance translations are synonyms), etc. Since the words are completely different, prefix matching does not find any matching candidates. We use mBERT to obtain in-phrase and in-utterance word embeddings and use cosine similarity to find the closest matching word to align to.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Including Semantic Matching", "sec_num": "3.2" }, 
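{ "text": "To make the aligner concrete, the following is a minimal sketch of the prefix-matching and span-expansion heuristics described above, assuming whitespace-tokenized text; the minimum-prefix threshold and the romanized tokens in the usage example are illustrative assumptions, and the mBERT cosine-similarity fallback is omitted for brevity.
```python
def common_prefix_len(a, b):
    # Length of the shared character prefix of two tokens.
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def align_slot(slot_tokens, utt_tokens, label, min_prefix=3):
    # For each translated slot token, find the utterance token with the
    # maximum prefix overlap; then expand to a contiguous BIO span.
    labels = ['O'] * len(utt_tokens)
    matched = []
    for s in slot_tokens:
        best_i, best_len = None, 0
        for i, u in enumerate(utt_tokens):
            p = common_prefix_len(s, u)
            if p > best_len:
                best_i, best_len = i, p
        if best_i is not None and best_len >= min_prefix:
            matched.append(best_i)
    if matched:
        # Span expansion: also label intra-span tokens that matched no slot
        # token, yielding the contiguous span sequence taggers require.
        start, end = min(matched), max(matched)
        labels[start] = 'B-' + label
        for i in range(start + 1, end + 1):
            labels[i] = 'I-' + label
    return labels

# Illustrative usage: an inflected utterance token still matches by prefix.
print(align_slot(['udyaanavana'], ['nanage', 'udyaanavanakke', 'daari'], 'Loc'))
# -> ['O', 'B-Loc', 'O']
```
The assumed threshold min_prefix=3 simply guards against spurious single-character overlaps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Including Semantic Matching", "sec_num": "3.2" }, { "text": "To quantitatively evaluate the quality of our automatically constructed training data, we require a gold test set in the target low-resource language. 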
As this is unavailable for many Dravidian languages, we manually tag a test set for two of them (Kannada and Tamil) to obtain the gold alignments between annotated slots and utterances in the low-resource languages. We first translate utterances to the low-resource language using Google Translate. Next, we remove examples that contain incorrect utterance translations. We finally obtain a sample of 300 examples for each language. For each language and each example, the utterance tokens corresponding to the slots are tagged by two graduate students who are native speakers of the language (we obtain an inter-annotator Cohen's Kappa score of 0.9303 for kan and 0.9606 for tam).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual tagging for test set", "sec_num": "3.3" }, { "text": "To aid with annotation, we make use of Doccano (Nakayama et al., 2018) to provide the annotators with an easy-to-use interface ( Figure 3 ).", "cite_spans": [ { "start": 47, "end": 70, "text": "(Nakayama et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 129, "end": 138, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Manual tagging for test set", "sec_num": "3.3" }, { "text": "We learn the intent and slot filling tasks jointly with BERT (Devlin et al., 2018) , more specifically its multilingual variant mBERT, which is pre-trained on 104 languages. We pass the [CLS] vector representation to a multi-layer perceptron to classify the intent into one of the predefined intent classes. We pass the token representations produced by the mBERT model to another multi-layer perceptron to identify the slots. In the case of a word consisting of multiple sub-word tokens, we only use the first sub-word token and ignore the others during training and testing. Figure 2 shows the architecture of our system. Since mBERT is pre-trained across many languages, we follow the same architecture for our zero-shot and few-shot experiments.", "cite_spans": [ { "start": 61, "end": 82, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 571, "end": 579, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "We use the base variants of BERT and mBERT in all our experiments, with a batch size of 64 and a learning rate of 1e-5 with the Adam (Kingma and Ba, 2014) optimizer. In the zero- and few-shot transfer experiments, the model is trained for about 10 epochs before evaluation or fine-tuning on few-shot data. The fourth qualitative example showcases the efficacy of using mBERT during alignment. On our training set, we noticed that the word 'weather' gets translated into 'meteorol\u00f3gico' when used in a sentence but gets translated to 'clima' when used individually. Since these words have no common prefix, such words are not usually tagged by our aligner. But when mBERT is used, since these terms are very close semantically, the slots are assigned correctly, subsequently resulting in better predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "5" }, { "text": "Before training any models, we first evaluated the efficacy of our dataset tagger by auto-tagging and manually tagging the 300 test examples and comparing the Slot F1 between them. On our hand-annotated set of 300 examples, we show that the Slot F1 obtained by our tagging method (\u00a73) outperforms the simple word match and existing word alignment baselines in Table 3 . 
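For reference, span-level Slot F1 of this kind can be computed with the seqeval library (a minimal sketch, not necessarily the evaluation code we used; the BIO sequences below are hypothetical):
```python
from seqeval.metrics import f1_score

# Hypothetical hand-tagged vs. auto-tagged BIO sequences for two utterances.
gold_tags = [['O', 'B-Loc', 'I-Loc'], ['B-weather', 'O']]
auto_tags = [['O', 'B-Loc', 'O'], ['B-weather', 'O']]
# seqeval credits only exact span matches: the truncated Loc span is a miss.
print(f1_score(gold_tags, auto_tags))  # 0.5
```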
", "cite_spans": [], "ref_spans": [ { "start": 350, "end": 358, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Effect of Dataset construction heuristic", "sec_num": "6.2" }, { "text": "We conduct experiments with varying amounts of data to show the effect of training data size in Table 4 . We can see that even with 300 examples from our auto-tagged dataset, the model is able to achieve significant boost as compared to the zero shot setting of training on eng and testing on kan. Initial training on eng followed by a few shot adaptation on kan provides an even higher boost in slot performance. Upon varying the number of training samples to 10K and 20K", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Effect of number of training samples", "sec_num": "6.3" }, { "text": "for kan, we see some minor improvement for 10K for the slot but see a 1.5 F1 boost when using 20K auto-annotated samples. Given that kan and tam are more closely related, we can see that the zero shot transfer from kan to tam achieves 62.22 & 46.16, outperforming the transfer from eng to tam. We also experimented with training a single model jointly on kan and tam but observed similar but slightly reduced performance as compared to using 5K examples from just one language (Rows 8, 13 and 6, 14) which could be due to interference when optimizing for multiple languages. We also observe that using 300 examples of kan and 5K of tam achieves better performance on the tam further corroborating the hypothesis that larger number of related language datapoints cause interference, leading to lower scores.", "cite_spans": [ { "start": 477, "end": 485, "text": "(Rows 8,", "ref_id": null }, { "start": 486, "end": 495, "text": "13 and 6,", "ref_id": null }, { "start": 496, "end": 499, "text": "14)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Effect of number of training samples", "sec_num": "6.3" }, { "text": "Test ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Train", "sec_num": null }, { "text": "Given the key role of the MT system in the dataset construction, we compare the performance of two approaches -Google Translate and our own Seq2Seq Transformer MT system. On the EnTam corpus (Ramasamy et al., 2012 ), our MT system achieved a BLEU score of 10.89 (the best reported score on this dataset is 9.39 (Kumar and Singh, 2019) ). The low performance of the mBERT model could be attributed to the low quality of the translations for the training dataset resulting from domain gap in the MT system. Since the EnTam corpus contains articles on Cinema, News, and Bible, it doesn't translate TOD utterances accurately (We examined frequent words in our training corpus -Remind, Alarm, etc and observed that these words didn't exist or were very infrequent in the EnTam corpus) Furthermore, owing to the differences in style (EnTam contains statements while TOD contains questions) we observed that some translations had changes in their meanings. 
We hence observe that 300 high-quality training examples lead to performance similar to 25K noisy training examples (Table 5) .", "cite_spans": [ { "start": 191, "end": 213, "text": "(Ramasamy et al., 2012", "ref_id": "BIBREF28" }, { "start": 311, "end": 334, "text": "(Kumar and Singh, 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 1078, "end": 1088, "text": "(Table 5)", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Effect of the MT system", "sec_num": "6.4" }, { "text": "We now present results when mBERT-based semantic matching is included in the aligner, in Table 6 . We noticed a small increase in F1 score for kan but a small drop for tam. This could be attributed to two reasons: 1) our alignment heuristic already tags spans accurately, so the inclusion of mBERT only provides minor improvements, and 2) languages like kan and tam are underrepresented in the mBERT training set, leading to lower performance on these languages. On a non-Dravidian language, Spanish, we notice a larger boost in performance (6 F1), indicating the strength of the mBERT representations for es. We present a more detailed analysis of the performance on es in the next section.", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 95, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Effect of Semantic Matching", "sec_num": "6.5" }, { "text": "Though our aligner was designed for Dravidian languages, there are languages in other families which also exhibit suffix-based morphology. We evaluate the performance of our system on Spanish by auto-creating the training data from eng using our method and evaluating on the hand-annotated es data provided in the Facebook dataset. We can see from Table 7 that our auto-created dataset of 5K examples performs only 12 F1 worse than training on the entire hand-annotated es dataset (consisting of 3.6K examples), further showcasing the quality of training examples produced by our method.", "cite_spans": [], "ref_spans": [ { "start": 413, "end": 420, "text": "Table 7", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Performance on non-Dravidian Languages", "sec_num": "6.6" }, { "text": "In this work, we demonstrated techniques to project training data from high-resource languages to low-resource settings, thus efficiently obtaining large-scale synthetic data for training TOD systems. We also showcased the efficacy of the dataset creation on a manually curated test set for kan and tam.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "7" }, { "text": "In the future, we hope to add more Dravidian languages -Telugu and Malayalam. 
We also hope to utilize social media data from these languages to generate code-switched and more natural utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "7" }, { "text": "Code and data available at https://github.com/karthikradhakrishnan96/TOD-Dravidian", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A dataset for building code-mixed goal oriented conversation systems", "authors": [ { "first": "Suman", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Moghe", "suffix": "" }, { "first": "Siddhartha", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Mitesh M", "middle": [], "last": "Khapra", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.05997" ] }, "num": null, "urls": [], "raw_text": "Suman Banerjee, Nikita Moghe, Siddhartha Arora, and Mitesh M Khapra. 2018. A dataset for building code-mixed goal oriented conversation systems. arXiv preprint arXiv:1806.05997.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multiwoz -a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling", "authors": [ { "first": "Pawe\u0142", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Bo-Hsiang", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Casanueva", "suffix": "" }, { "first": "Ultes", "middle": [], "last": "Stefan", "suffix": "" }, { "first": "Ramadan", "middle": [], "last": "Osman", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pawe\u0142 Budzianowski, Tsung-Hsien Wen, Bo- Hsiang Tseng, I\u00f1igo Casanueva, Ultes Stefan, Ramadan Osman, and Milica Ga\u0161i\u0107. 2018. Mul- tiwoz -a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empir- ical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Taskmaster-1: Toward a realistic and diverse dialog dataset", "authors": [ { "first": "Bill", "middle": [], "last": "Byrne", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Krishnamoorthi", "suffix": "" }, { "first": "Chinnadhurai", "middle": [], "last": "Sankar", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Duckworth", "suffix": "" }, { "first": "Semih", "middle": [], "last": "Yavuz", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Goodrich", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Dubey", "suffix": "" }, { "first": "Kyu-Young", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Cedilnik", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bill Byrne, Karthik Krishnamoorthi, Chinnadhu- rai Sankar, Arvind Neelakantan, Daniel Duck- worth, Semih Yavuz, Ben Goodrich, Amit Dubey, Kyu-Young Kim, and Andy Cedilnik. 2019. 
Taskmaster-1: Toward a realistic and di- verse dialog dataset.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Zeroshot transfer learning with synthesized data for multi-domain dialogue state tracking", "authors": [ { "first": "Giovanni", "middle": [], "last": "Campagna", "suffix": "" }, { "first": "Agata", "middle": [], "last": "Foryciarz", "suffix": "" }, { "first": "Mehrad", "middle": [], "last": "Moradshahi", "suffix": "" }, { "first": "Monica", "middle": [], "last": "Lam", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "122--132", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.12" ] }, "num": null, "urls": [], "raw_text": "Giovanni Campagna, Agata Foryciarz, Mehrad Moradshahi, and Monica Lam. 2020. Zero- shot transfer learning with synthesized data for multi-domain dialogue state tracking. In Pro- ceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, pages 122-132, Online. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bert for joint intent classification and slot filling", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhu", "middle": [], "last": "Zhuo", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.10909" ] }, "num": null, "urls": [], "raw_text": "Qian Chen, Zhu Zhuo, and Wen Wang. 2019. Bert for joint intent classification and slot filling. arXiv preprint arXiv:1902.10909.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "End-to-end memory networks with knowledge carryover for multiturn spoken language understanding", "authors": [ { "first": "Yun-Nung", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yun-Nung Chen, Dilek Hakkani-T\u00fcr, Jianfeng Gao, and Li Deng. 2016. End-to-end memory networks with knowledge carryover for multi- turn spoken language understanding.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language un- derstanding. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A novel bi-directional interrelated model for joint intent detection and slot filling", "authors": [ { "first": "E", "middle": [], "last": "Haihong", "suffix": "" }, { "first": "Peiqing", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Zhongfu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Meina", "middle": [], "last": "Song", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5467--5471", "other_ids": { "DOI": [ "10.18653/v1/P19-1544" ] }, "num": null, "urls": [], "raw_text": "Haihong E, Peiqing Niu, Zhongfu Chen, and Meina Song. 2019. A novel bi-directional interrelated model for joint intent detection and slot filling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5467-5471, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Grounded response generation task at dstc7", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "B", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Chris Brockett, Xiang Gao, Jian- feng Gao, and B. Dolan. 2019. Grounded re- sponse generation task at dstc7.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Slot-gated modeling for joint slot filling and intent prediction", "authors": [ { "first": "Guang", "middle": [], "last": "Chih-Wen Goo", "suffix": "" }, { "first": "Yun-Kai", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Chih-Li", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Tsung-Chieh", "middle": [], "last": "Huo", "suffix": "" }, { "first": "Keng-Wei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yun-Nung", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "753--757", "other_ids": { "DOI": [ "10.18653/v1/N18-2118" ] }, "num": null, "urls": [], "raw_text": "Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih- Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Pro- ceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 753- 757, New Orleans, Louisiana. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Acoustics based intent recognition using discovered phonetic units for low resource languages", "authors": [ { "first": "Akshat", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Xinjian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Krishna", "middle": [], "last": "Sai", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Rallabandi", "suffix": "" }, { "first": "", "middle": [], "last": "Black", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2011.03646" ] }, "num": null, "urls": [], "raw_text": "Akshat Gupta, Xinjian Li, Sai Krishna Ralla- bandi, and Alan W Black. 2020. Acoustics based intent recognition using discovered pho- netic units for low resource languages. arXiv preprint arXiv:2011.03646.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "End-to-end neural pipeline for goal-oriented dialogue systems using GPT-2", "authors": [ { "first": "Donghoon", "middle": [], "last": "Ham", "suffix": "" }, { "first": "Jeong-Gwan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Youngsoo", "middle": [], "last": "Jang", "suffix": "" }, { "first": "Kee-Eung", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "583--592", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.54" ] }, "num": null, "urls": [], "raw_text": "Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-end neural pipeline for goal-oriented dialogue systems us- ing GPT-2. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 583-592, Online. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The ATIS spoken language systems pilot corpus", "authors": [ { "first": "Charles", "middle": [ "T" ], "last": "Hemphill", "suffix": "" }, { "first": "John", "middle": [ "J" ], "last": "Godfrey", "suffix": "" }, { "first": "George", "middle": [ "R" ], "last": "Doddington", "suffix": "" } ], "year": 1990, "venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spo- ken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Work- shop Held at Hidden Valley, Pennsylvania, June 24-27,1990.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue", "authors": [ { "first": "Ehsan", "middle": [], "last": "Hosseini-Asl", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Mccann", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.00796" ] }, "num": null, "urls": [], "raw_text": "Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dia- logue. 
arXiv preprint arXiv:2005.00796.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Intent detection for code-mix utterances in task oriented dialogue systems", "authors": [ { "first": "Pratik", "middle": [], "last": "Jayarao", "suffix": "" }, { "first": "Aman", "middle": [], "last": "Srivastava", "suffix": "" } ], "year": 2018, "venue": "2018 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT)", "volume": "", "issue": "", "pages": "583--587", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pratik Jayarao and Aman Srivastava. 2018. Intent detection for code-mix utterances in task ori- ented dialogue systems. In 2018 International Conference on Electrical, Electronics, Commu- nication, Computer, and Optimization Tech- niques (ICEECCOT), pages 583-587. IEEE.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Template guided text generation for task-oriented dialogue", "authors": [ { "first": "Mihir", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6505--6520", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.527" ] }, "num": null, "urls": [], "raw_text": "Mihir Kale and Abhinav Rastogi. 2020. Tem- plate guided text generation for task-oriented dialogue. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 6505-6520, Online. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "NL-PRL at WAT2019: Transformer-based Tamil -English indic task neural machine translation system", "authors": [ { "first": "Amit", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Anil Kumar", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 6th Workshop on Asian Translation", "volume": "", "issue": "", "pages": "171--174", "other_ids": { "DOI": [ "10.18653/v1/D19-5222" ] }, "num": null, "urls": [], "raw_text": "Amit Kumar and Anil Kumar Singh. 2019. NL- PRL at WAT2019: Transformer-based Tamil - English indic task neural machine translation system. In Proceedings of the 6th Workshop on Asian Translation, pages 171-174, Hong Kong, China. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A large-scale corpus for conversation disentanglement", "authors": [ { "first": "Jonathan", "middle": [ "K" ], "last": "Kummerfeld", "suffix": "" }, { "first": "R", "middle": [], "last": "Sai", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Gouravajhala", "suffix": "" }, { "first": "Vignesh", "middle": [], "last": "Peper", "suffix": "" }, { "first": "Chulaka", "middle": [], "last": "Athreya", "suffix": "" }, { "first": "Jatin", "middle": [], "last": "Gunasekara", "suffix": "" }, { "first": "", "middle": [], "last": "Ganhotra", "suffix": "" }, { "first": "Sankalp", "middle": [], "last": "Siva", "suffix": "" }, { "first": "Lazaros", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Walter", "middle": [ "S" ], "last": "Polymenakos", "suffix": "" }, { "first": "", "middle": [], "last": "Lasecki", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan K. Kummerfeld, Sai R. Gouravajhala, Joseph Peper, Vignesh Athreya, Chulaka Gu- nasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros Polymenakos, and Walter S. Lasecki. 2019. A large-scale corpus for conversation dis- entanglement. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Building a taskoriented dialog system for languages with no training data: the case for Basque", "authors": [ { "first": "Maddalen", "middle": [], "last": "L\u00f3pez De Lacalle", "suffix": "" }, { "first": "Xabier", "middle": [], "last": "Saralegi", "suffix": "" }, { "first": "I\u00f1aki San", "middle": [], "last": "Vicente", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2796--2802", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maddalen L\u00f3pez de Lacalle, Xabier Saralegi, and I\u00f1aki San Vicente. 2020. Building a task- oriented dialog system for languages with no training data: the case for Basque. In Proceed- ings of the 12th Language Resources and Eval- uation Conference, pages 2796-2802, Marseille, France. European Language Resources Associa- tion.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Guided dialogue policy learning without adversarial learning in the loop", "authors": [ { "first": "Ziming", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Baolin", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Jinchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Kiseleva", "suffix": "" }, { "first": "Shahin", "middle": [], "last": "Maarten De Rijke", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Shayandeh", "suffix": "" }, { "first": "", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2308--2317", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.209" ] }, "num": null, "urls": [], "raw_text": "Ziming Li, Sungjin Lee, Baolin Peng, Jinchao Li, Julia Kiseleva, Maarten de Rijke, Shahin Shayandeh, and Jianfeng Gao. 2020. 
Guided di- alogue policy learning without adversarial learn- ing in the loop. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2308-2317, Online. Association for Com- putational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Moss: End-to-end dialog system framework with modular supervision", "authors": [ { "first": "Weixin", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Youzhi", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Chengcai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "8327--8335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weixin Liang, Youzhi Tian, Chengcai Chen, and Zhou Yu. 2020. Moss: End-to-end dialog system framework with modular supervision. In AAAI, pages 8327-8335.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Attention-based recurrent neural network models for joint intent detection and slot filling", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Lane", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.01454" ] }, "num": null, "urls": [], "raw_text": "Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint in- tent detection and slot filling. arXiv preprint arXiv:1609.01454.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "End-to-end learning of task-oriented dialogs", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Lane", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop", "volume": "", "issue": "", "pages": "67--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Liu and Ian Lane. 2018. End-to-end learning of task-oriented dialogs. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguis- tics: Student Research Workshop, pages 67-73.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Zero-shot cross-lingual dialogue systems with transferable latent variables", "authors": [ { "first": "Zihan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jamin", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Genta", "middle": [], "last": "Indra Winata", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.04081" ] }, "num": null, "urls": [], "raw_text": "Zihan Liu, Jamin Shin, Yan Xu, Genta Indra Winata, Peng Xu, Andrea Madotto, and Pascale Fung. 2019. Zero-shot cross-lingual dialogue sys- tems with transferable latent variables. 
arXiv preprint arXiv:1911.04081.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Attentioninformed mixed-language training for zero-shot cross-lingual task-oriented dialogue systems", "authors": [ { "first": "Zihan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zhaojiang", "middle": [], "last": "Genta Indra Winata", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Xu", "suffix": "" }, { "first": "", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "8433--8440", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6362" ] }, "num": null, "urls": [], "raw_text": "Zihan Liu, Genta Indra Winata, Zhaojiang Lin, Peng Xu, and Pascale Fung. 2020. Attention- informed mixed-language training for zero-shot cross-lingual task-oriented dialogue systems. Proceedings of the AAAI Conference on Arti- ficial Intelligence, 34(05):8433-8440.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil", "authors": [ { "first": "Thomas", "middle": [], "last": "Mandl", "suffix": "" }, { "first": "Sandip", "middle": [], "last": "Modha", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "M", "middle": [], "last": "", "suffix": "" }, { "first": "Bharathi Raja Chakravarthi ;", "middle": [], "last": "Malayalam", "suffix": "" }, { "first": "", "middle": [], "last": "Hindi", "suffix": "" }, { "first": "German", "middle": [], "last": "English", "suffix": "" } ], "year": 2020, "venue": "Forum for Information Retrieval Evaluation", "volume": "2020", "issue": "", "pages": "29--32", "other_ids": { "DOI": [ "10.1145/3441501.3441517" ] }, "num": null, "urls": [], "raw_text": "Thomas Mandl, Sandip Modha, Anand Ku- mar M, and Bharathi Raja Chakravarthi. 2020. Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identifica- tion in Tamil, Malayalam, Hindi, English and German. In Forum for Information Retrieval Evaluation, FIRE 2020, page 29-32, New York, NY, USA. Association for Computing Machin- ery.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "doccano: Text annotation tool for human", "authors": [ { "first": "Hiroki", "middle": [], "last": "Nakayama", "suffix": "" }, { "first": "Takahiro", "middle": [], "last": "Kubo", "suffix": "" }, { "first": "Junya", "middle": [], "last": "Kamura", "suffix": "" }, { "first": "Yasufumi", "middle": [], "last": "Taniguchi", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroki Nakayama, Takahiro Kubo, Junya Ka- mura, Yasufumi Taniguchi, and Xu Liang. 2018. doccano: Text annotation tool for human. 
Software available from https://github.com/doccano/doccano.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Morphological processing for english-tamil statistical machine translation", "authors": [ { "first": "Loganathan", "middle": [], "last": "Ramasamy", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Zden\u011bk", "middle": [], "last": "\u017dabokrtsk\u1ef3", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Workshop on Machine Translation and Parsing in Indian Languages", "volume": "", "issue": "", "pages": "113--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Loganathan Ramasamy, Ond\u0159ej Bojar, and Zden\u011bk \u017dabokrtsk\u1ef3. 2012. Morphological processing for english-tamil statistical machine translation. In Proceedings of the Workshop on Machine Trans- lation and Parsing in Indian Languages, pages 113-122.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Schema-guided dialogue state tracking task at dstc8. arXiv preprint", "authors": [ { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Xiaoxue", "middle": [], "last": "Zang", "suffix": "" }, { "first": "Srinivas", "middle": [], "last": "Sunkara", "suffix": "" }, { "first": "Raghav", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Khaitan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.01359" ] }, "num": null, "urls": [], "raw_text": "Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Schema-guided dialogue state tracking task at dstc8. arXiv preprint arXiv:2002.01359.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Cross-lingual transfer learning for multilingual task oriented dialog", "authors": [ { "first": "Sebastian", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Rushin", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.13327" ] }, "num": null, "urls": [], "raw_text": "Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2018. Cross-lingual transfer learning for multilingual task oriented dialog. arXiv preprint arXiv:1810.13327.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Guided dialog policy learning: Reward estimation for multi-domain task-oriented dialog", "authors": [ { "first": "Ryuichi", "middle": [], "last": "Takanobu", "suffix": "" }, { "first": "Hanlin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "100--110", "other_ids": { "DOI": [ "10.18653/v1/D19-1010" ] }, "num": null, "urls": [], "raw_text": "Ryuichi Takanobu, Hanlin Zhu, and Minlie Huang. 2019. Guided dialog policy learning: Reward estimation for multi-domain task-oriented dia- log. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 100-110, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Sentiment Analysis in Tamil Texts: A Study on Machine Learning Techniques and Feature Representation", "authors": [ { "first": "Sajeetha", "middle": [], "last": "Thavareesan", "suffix": "" }, { "first": "Sinnathamby", "middle": [], "last": "Mahesan", "suffix": "" } ], "year": 2019, "venue": "2019 14th Conference on Industrial and Information Systems (ICIIS)", "volume": "", "issue": "", "pages": "320--325", "other_ids": { "DOI": [ "10.1109/ICIIS47346.2019.9063341" ] }, "num": null, "urls": [], "raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2019. Sentiment Analysis in Tamil Texts: A Study on Machine Learning Techniques and Fea- ture Representation. In 2019 14th Conference on Industrial and Information Systems (ICIIS), pages 320-325.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Sentiment Lexicon Expansion using Word2vec and fastText for Sentiment Prediction in Tamil texts", "authors": [ { "first": "Sajeetha", "middle": [], "last": "Thavareesan", "suffix": "" }, { "first": "Sinnathamby", "middle": [], "last": "Mahesan", "suffix": "" } ], "year": 2020, "venue": "2020 Moratuwa Engineering Research Conference (MERCon)", "volume": "", "issue": "", "pages": "272--276", "other_ids": { "DOI": [ "10.1109/MERCon50084.2020.9185369" ] }, "num": null, "urls": [], "raw_text": "Sajeetha Thavareesan and Sinnathamby Mahe- san. 2020a. Sentiment Lexicon Expansion us- ing Word2vec and fastText for Sentiment Pre- diction in Tamil texts. In 2020 Moratuwa Engi- neering Research Conference (MERCon), pages 272-276.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Word embedding-based Part of Speech tagging in Tamil texts", "authors": [ { "first": "Sajeetha", "middle": [], "last": "Thavareesan", "suffix": "" }, { "first": "Sinnathamby", "middle": [], "last": "Mahesan", "suffix": "" } ], "year": 2020, "venue": "2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS)", "volume": "", "issue": "", "pages": "478--482", "other_ids": { "DOI": [ "10.1109/ICIIS51140.2020.9342640" ] }, "num": null, "urls": [], "raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2020b. Word embedding-based Part of Speech tagging in Tamil texts. In 2020 IEEE 15th In- ternational Conference on Industrial and Infor- mation Systems (ICIIS), pages 478-482.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Tod-bert: Pretrained natural language understanding for task-oriented dialogues", "authors": [ { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Hoi", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.06871" ] }, "num": null, "urls": [], "raw_text": "Chien-Sheng Wu, Steven Hoi, Richard Socher, and Caiming Xiong. 2020. Tod-bert: Pre- trained natural language understanding for task-oriented dialogues. 
arXiv preprint arXiv:2004.06871.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Our prefix matching and span expansion technique.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "Model architecture showcasing a kan utterance passed to the mBERT model (word-by-word eng translations are included for readability). Sub-word tokens omitted for brevity. Slot labels and intent are predicted from the word and [CLS] representations respectively.", "type_str": "figure" }, "TABREF0": { "html": null, "type_str": "table", "content": "
What's  temperature  in  New    York
O       B-weather    O   B-Loc  I-Loc
Intent: weather/find
", "text": ", Dialog State Tracking * Equal contribution", "num": null }, "TABREF1": { "html": null, "type_str": "table", "content": "", "text": "Example of an utterance along with its slot in BIO (Beginning, Inside, Outside) notation and intent label.", "num": null }, "TABREF3": { "html": null, "type_str": "table", "content": "
: Some qualitative examples from our dataset (with word-by-word eng translations) and predictions by our mBERT model. (DT -DateTime, WN -Weather Noun, LOC -Location, WA -Weather Attribute)
", "text": "", "num": null }, "TABREF4": { "html": null, "type_str": "table", "content": "
shows some qualitative examples from our dataset and the predictions made by our model. The first two examples demonstrate successful predictions on tam and es when trained on 5K auto-tagged examples. The third example demonstrates an error due to local normalization during optimization. Since we don't explicitly model that B-LOC (Beginning Location) does not occur immediately after B-", "text": "LOC and that I-LOC is more probable, the model predicts B-LOC for both slots. Utilising global normalization over BERT (CRF) could help resolve this issue.", "num": null }, "TABREF6": { "html": null, "type_str": "table", "content": "", "text": "Performance of different slot alignment methods", "num": null }, "TABREF8": { "html": null, "type_str": "table", "content": "
", "text": "Zero and Few shot transfer performance", "num": null }, "TABREF10": { "html": null, "type_str": "table", "content": "
", "text": "Performance of different MT systems", "num": null }, "TABREF13": { "html": null, "type_str": "table", "content": "
", "text": "Performance on a Romance language es", "num": null } } } }