{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:55:07.436515Z"
},
"title": "Towards Code-Mixed Hinglish Dialogue Generation",
"authors": [
{
"first": "Vibhav",
"middle": [],
"last": "Agarwal",
"suffix": "",
"affiliation": {},
"email": "vibhav.agarwal@iiitb.ac.in"
},
{
"first": "Pooja",
"middle": [],
"last": "Rao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Lausanne",
"location": {
"country": "Switzerland"
}
},
"email": "pooja.rao@unil.ch"
},
{
"first": "Dinesh",
"middle": [
"Babu"
],
"last": "Jayagopi",
"suffix": "",
"affiliation": {},
"email": "jdinesh@iiitb.ac.in"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Code-mixed language plays a crucial role in communication in multilingual societies. Though the recent growth of web users has greatly boosted the use of such mixed languages, the current generation of dialog systems is primarily monolingual. This increase in usage of code-mixed language has prompted dialog systems in a similar language. We present our work in Code-Mixed Dialog Generation, an unexplored task in code-mixed languages, generating utterances in code-mixed language rather than a single language that is more often just English. We present a new synthetic corpus in code-mix for dialogs, CM-DailyDialog, by converting an existing English-only dialog corpus to a mixed Hindi-English corpus. We then propose a baseline approach where we show the effectiveness of using mBART like multilingual sequence-to-sequence transformers for codemixed dialog generation. Our best performing dialog models can conduct coherent conversations in Hindi-English mixed language as evaluated by human and automatic metrics setting new benchmarks for the Code-Mixed Dialog Generation task.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Code-mixed language plays a crucial role in communication in multilingual societies. Though the recent growth of web users has greatly boosted the use of such mixed languages, the current generation of dialog systems is primarily monolingual. This increase in usage of code-mixed language has prompted dialog systems in a similar language. We present our work in Code-Mixed Dialog Generation, an unexplored task in code-mixed languages, generating utterances in code-mixed language rather than a single language that is more often just English. We present a new synthetic corpus in code-mix for dialogs, CM-DailyDialog, by converting an existing English-only dialog corpus to a mixed Hindi-English corpus. We then propose a baseline approach where we show the effectiveness of using mBART like multilingual sequence-to-sequence transformers for codemixed dialog generation. Our best performing dialog models can conduct coherent conversations in Hindi-English mixed language as evaluated by human and automatic metrics setting new benchmarks for the Code-Mixed Dialog Generation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Due to the popularity of different social media and messaging platforms over the last decade, there has been a significant increase in internet users, mainly from multilingual societies. Multilingual speakers regularly combine languages in what is commonly called code-mixing or code-switching while communicating with other multilingual speakers. This has resulted in a substantial influx of mixed language data in the form of comments, conversations, and other forms of communication. Traditional natural language processing tasks like tokenization and tagging, semantic processing, machine translation, and text generation face new and interesting challenges due to this language mixing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dialog Systems have been of great interest amongst the natural language processing community for widespread applications. These systems are broadly categorized into three categories: taskoriented dialog system (Wen et al., 2017; Williams and Zweig, 2016) , open-ended conversational systems (Shang et al., 2015; Xing et al., 2017) , and interactive question answering system. Traditional Dialog Systems have mostly relied on a rule or template-based approach (Williams et al., 2013) .",
"cite_spans": [
{
"start": 210,
"end": 228,
"text": "(Wen et al., 2017;",
"ref_id": "BIBREF45"
},
{
"start": 229,
"end": 254,
"text": "Williams and Zweig, 2016)",
"ref_id": "BIBREF48"
},
{
"start": 291,
"end": 311,
"text": "(Shang et al., 2015;",
"ref_id": "BIBREF39"
},
{
"start": 312,
"end": 330,
"text": "Xing et al., 2017)",
"ref_id": "BIBREF51"
},
{
"start": 459,
"end": 482,
"text": "(Williams et al., 2013)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The success of deep neural networks with a considerable amount of training data has led towards end-to-end trained sequence-to-sequence (seq2seq) (Sutskever et al., 2014) models that enhance the generality and diversity of the text generated. Recent advances in attention-based mechanisms (Bahdanau et al., 2015) and Transformers (Vaswani et al., 2017) have shown significant performance improvement and shifted the communities' approach and interest in training larger models. However, all of these prior works use monolingual data and specifically English.",
"cite_spans": [
{
"start": 146,
"end": 170,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF43"
},
{
"start": 289,
"end": 312,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 330,
"end": 352,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The increasing use of code-mixed languages and ubiquitous nature of multilingual speakers call for catering to the needs of such users in a multilingual fashion with a dialog system, the need for a Code-Mixed Dialog Generational System. A recent study by Bawa et al. (2020) shows that in a real-life setting, people prefer chatbots that engage in codemixed language.",
"cite_spans": [
{
"start": 255,
"end": 273,
"text": "Bawa et al. (2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Code-mixing in informal contexts like newsgroups, tweets, comments and blogs has made it difficult to define a uniform structure to the language. However, linguists have formulated various hypotheses (Belazi et al., 1994; Pfaff, 1979; Poplack, 1981) and constraints (Sankoff and Poplack, 1981; Sciullo et al., 1986; Joshi, 1982 ) that can define a general rule for code-mixing.",
"cite_spans": [
{
"start": 200,
"end": 221,
"text": "(Belazi et al., 1994;",
"ref_id": "BIBREF7"
},
{
"start": 222,
"end": 234,
"text": "Pfaff, 1979;",
"ref_id": "BIBREF31"
},
{
"start": 235,
"end": 249,
"text": "Poplack, 1981)",
"ref_id": "BIBREF32"
},
{
"start": 266,
"end": 293,
"text": "(Sankoff and Poplack, 1981;",
"ref_id": "BIBREF36"
},
{
"start": 294,
"end": 315,
"text": "Sciullo et al., 1986;",
"ref_id": "BIBREF37"
},
{
"start": 316,
"end": 327,
"text": "Joshi, 1982",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With the rise of large pretrained language models like (Devlin et al., 2019; Radford et al., 2019; Conneau et al., 2020 ), there's been a lot of improve-ment in machine translation and multilingual models. Prior works (Khanuja et al., 2020b; Gupta et al., 2020) show the effectiveness of large multilingual pretrained language models like mBERT (Devlin et al., 2019) and XLM (Conneau and Lample, 2019) on code-mixed data.",
"cite_spans": [
{
"start": 55,
"end": 76,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 77,
"end": 98,
"text": "Radford et al., 2019;",
"ref_id": "BIBREF33"
},
{
"start": 99,
"end": 119,
"text": "Conneau et al., 2020",
"ref_id": "BIBREF11"
},
{
"start": 218,
"end": 241,
"text": "(Khanuja et al., 2020b;",
"ref_id": "BIBREF26"
},
{
"start": 242,
"end": 261,
"text": "Gupta et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 345,
"end": 366,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 375,
"end": 401,
"text": "(Conneau and Lample, 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work attempts to utilize these large seq2seq pre-trained multilingual transformer-based models for the code-mixed dialog generation task. Specifically, our contributions are as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a synthetic code-mixed dialog dataset, CM-DailyDialog. This is the first benchmark dataset in Hindi-English mixed language for dialog generation generated from translating DailyDialog (Li et al., 2017) to code-mix.",
"cite_spans": [
{
"start": 197,
"end": 214,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We set new benchmarks for the code-mixed dialog generation task on the CM-DailyDialog.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We finetune the mBART model for the dialog generation in two ways to generate coherent dialog, as seen from both automatic and human evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 To create CM-DailyDialog, we train a machine translation model. We use monolingual English data as our input instead of a parallel English and Hindi corpus. This differs from earlier work on code-mixed Machine Translation (Gupta et al., 2020; Garg et al., 2018) where they process a parallel corpus for training, making our approach less resourceintensive.",
"cite_spans": [
{
"start": 224,
"end": 244,
"text": "(Gupta et al., 2020;",
"ref_id": "BIBREF21"
},
{
"start": 245,
"end": 263,
"text": "Garg et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Code-mixing refers to the interleaving of words belonging to different languages. Code-mixing is most common in multilingual cultures, and it is becoming more common as the number of people using social media and messaging services on the internet grows (Ansaldo et al., 2008) . This has lead to a rising research interest in recent years, and several tasks have been conducted as part of codeswitching workshops . There has been a lot of advancements in solving different code-mixed tasks like language identification Molina et al., 2016) , named-entity recognition (Rao and Devi, 2016; Aguilar et al., 2018) , question answering (Chandu et al., 2018) , part-of-speech tagging (Jamatia et al., 2018) , and information retrieval (banerjee et al., 2016) . Very recently, a code-mixed version of the GLUE benchmark was proposed by Khanuja et al. (2020b) which introduced a common benchmark for all the tasks.",
"cite_spans": [
{
"start": 254,
"end": 276,
"text": "(Ansaldo et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 519,
"end": 539,
"text": "Molina et al., 2016)",
"ref_id": "BIBREF29"
},
{
"start": 588,
"end": 609,
"text": "Aguilar et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 631,
"end": 652,
"text": "(Chandu et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 678,
"end": 700,
"text": "(Jamatia et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 729,
"end": 752,
"text": "(banerjee et al., 2016)",
"ref_id": null
},
{
"start": 829,
"end": 851,
"text": "Khanuja et al. (2020b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Machine ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixed Machine Translation",
"sec_num": "2.1"
},
{
"text": "This work focuses on open-ended conversational dialog systems, which are interchangeably also called dialog systems here. Conversational systems engage in more open-ended conversations with no specific objective or task to solve in contrast to a task-oriented dialog system. In the open domain, conversational dialog systems fall again into two categories: retrieval-based and generative dialog systems. We take the generative model approach for a code-mixed dialog model rather than retrieve them from a fixed set as it is more dynamic and interactive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversational Dialog Systems",
"sec_num": "2.2"
},
{
"text": "Traditional dialog systems have mostly relied on hand-crafted rules or templates (Williams et al., 2013) . Recently, a more data-driven approach is used to enhance the generality and diversity of the text generated in other domains. Researches used RNN (Rumelhart et al., 1986) and LSTM (Hochreiter and Schmidhuber, 1997) based encoder-decoder architectures for the dialog systems. Since the popularity of attention-based mechanisms (Bahdanau et al., 2015) , these also have been widely adapted to boost performance. Recent works like DialoGPT (Zhang et al., 2020) , Blender-Bot (Smith et al., 2020) and Meena (Adiwardana et al., 2020) are just a few examples of the open domain conversational agents.",
"cite_spans": [
{
"start": 81,
"end": 104,
"text": "(Williams et al., 2013)",
"ref_id": "BIBREF46"
},
{
"start": 253,
"end": 277,
"text": "(Rumelhart et al., 1986)",
"ref_id": "BIBREF35"
},
{
"start": 287,
"end": 321,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF22"
},
{
"start": 433,
"end": 456,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 544,
"end": 564,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF52"
},
{
"start": 579,
"end": 599,
"text": "(Smith et al., 2020)",
"ref_id": "BIBREF40"
},
{
"start": 610,
"end": 635,
"text": "(Adiwardana et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conversational Dialog Systems",
"sec_num": "2.2"
},
{
"text": "Although much work has been done in dialog systems, mostly all of it is for English conversations. This is because, in most of the work, the dataset used is monolingual. The only recent work involving a multilingual conversational system is by Chen et al. (2019) which performs dialog generation on English and Chinese data. It uses a shared memory mechanism with a seq2seq encoderdecoder like architecture and is trained using multitask learning (Caruana, 1997) .",
"cite_spans": [
{
"start": 244,
"end": 262,
"text": "Chen et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 447,
"end": 462,
"text": "(Caruana, 1997)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conversational Dialog Systems",
"sec_num": "2.2"
},
{
"text": "This section describes the benchmark dataset for code-mixed dialog generation: CM-DailyDialog, the English to Hinglish translation model used to generate this dataset, and our mBART based dialog generation model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
{
"text": "There is no standardized dataset available for multilingual dialog generation; therefore, we choose to generate a synthetic dataset to train our model for code-mixed dialog. We use a standardized and popular English dialog dataset called DailyDialog (Li et al., 2017) and translate the utterances and conversations from English to Code-Mixed using our mBART model (mBART-en_cm) defined in Section 3.1.1. This results in the CM-DailyDialog dataset consisting of 11,118 conversations in the training set and 1,000 conversations in both test and validation sets.",
"cite_spans": [
{
"start": 250,
"end": 267,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CM-DailyDialog dataset",
"sec_num": "3.1"
},
{
"text": "We also use the Code-Mixed NLI conversation dataset from the GLUECoS benchmark (Khanuja et al., 2020a) . This dataset contains roughly 1,800 training and roughly 500 test conversations extracted from movies.",
"cite_spans": [
{
"start": 79,
"end": 102,
"text": "(Khanuja et al., 2020a)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CM-DailyDialog dataset",
"sec_num": "3.1"
},
{
"text": "We first process the multi-turn conversations from the CM-DailyDialog dataset into triplets of utterances using a sliding window approach. The first two utterances in that triplet are served as contextual inputs to the model, while the third utterance is served as the ground truth on which the loss is calculated. This processing of conversations into triplets of utterances increased the size of our dialog dataset from 13,118 to 76,745 data points. Similarly, we process the Code-Mixed NLI dataset into similar triplets, expanding our working dataset from 1,800 to 2,128 unique dialog triplets in the train set and by roughly 500 triplets in the test set. We choose to process these multi-turn conversations into splits of 3 and not say 5 or any other number because of increased computational costs for the mBART model to process such long conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CM-DailyDialog dataset",
"sec_num": "3.1"
},
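{
"text": "The following is a minimal sketch of this sliding-window triplet processing, assuming each conversation is already a list of utterance strings; the function name and data layout are illustrative, not the authors' exact code:\n\ndef conversations_to_triplets(conversations):\n    # Slide a window of size 3 over each conversation: the first two\n    # utterances form the context, the third is the generation target.\n    triplets = []\n    for utterances in conversations:\n        for i in range(len(utterances) - 2):\n            context = (utterances[i], utterances[i + 1])\n            target = utterances[i + 2]\n            triplets.append((context, target))\n    return triplets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CM-DailyDialog dataset",
"sec_num": "3.1"
},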
{
"text": "We describe our English to Hinglish translation model mBART-en_cm and the dataset used for its training in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CM-DailyDialog dataset",
"sec_num": "3.1"
},
{
"text": "We use an mBART model finetuned on English to Code-Mixed data described in Section 3.1.2 as our machine translation model to convert the English DailyDialog dataset to Code-Mixed form. We denote this model as mBART-en_cm. mBART is a multilingual seq2seq denoising bidirectional auto-encoder pre-trained using the same objective as BART (Lewis et al., 2020) but on large-scale monolingual corpora of 25 languages. It is based on the transformer (Vaswani et al., 2017) architecture and consists of 12 encoder and decoder layers, each with 16 attention heads and model dimensions being 1024 resulting in roughly 680 million parameters.",
"cite_spans": [
{
"start": 336,
"end": 356,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 444,
"end": 466,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation model",
"sec_num": "3.1.1"
},
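{
"text": "As an illustration, a minimal sketch of loading an mBART checkpoint and generating a translation with HuggingFace Transformers is given below; the public mbart-large-cc25 checkpoint and the generation settings are assumptions, and the finetuned mBART-en_cm weights would be loaded the same way:\n\nfrom transformers import MBartForConditionalGeneration, MBartTokenizer\n\n# Assumed checkpoint: the pre-trained mbart-large-cc25 weights; mBART-en_cm\n# is this model after finetuning on English-Hinglish sentence pairs.\ntokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25', src_lang='en_XX')\nmodel = MBartForConditionalGeneration.from_pretrained('facebook/mbart-large-cc25')\n\ninputs = tokenizer('How are you doing today?', return_tensors='pt')\n# Beam-search settings here are illustrative defaults, not the paper's.\noutput_ids = model.generate(**inputs, num_beams=5, max_length=64)\nprint(tokenizer.decode(output_ids[0], skip_special_tokens=True))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation model",
"sec_num": "3.1.1"
},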
{
"text": "We use the following datasets to finetune and test our mBART-en_cm model for English to Hinglish translation task:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset for Translation",
"sec_num": "3.1.2"
},
{
"text": "\u2022 CMU Hinglish is an extended Code-Mixed form of the Document Grounded Conversation dataset by Zhou et al. (2018) . It consists of roughly 10,000 English and Hinglish Code-Mixed sentences.",
"cite_spans": [
{
"start": 95,
"end": 113,
"text": "Zhou et al. (2018)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset for Translation",
"sec_num": "3.1.2"
},
{
"text": "\u2022 Reverse PHINC is the reverse version of the PHINC (Srivastava and Singh, 2020) dataset but we switch the source and target pairs for our task. It contains roughly 13,000 Hinglish and parallel English translations. We use these datasets individually and in conjunction to see any improvement with increased data. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset for Translation",
"sec_num": "3.1.2"
},
{
"text": "We use the end-to-end multilingual training of the mBART. In literature, there is minimal work utilizing the BART architecture for dialog generation. De Bruyn et al. 2020is one such work. It utilizes BART for knowledge grounding and knowledge retrieval in dialogs. Our approach attempts to leverage multilingualism by using the pre-trained BART for multilingual dialog generation and presents a few baselines for future work. We compare two strategies for finetuning mBART model for dialog generation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixed Dialog Model",
"sec_num": "3.2"
},
{
"text": "\u2022 mBART-dialog:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixed Dialog Model",
"sec_num": "3.2"
},
{
"text": "We finetune the mBART model on a Code-Mixed dialog dataset. In our case, we use triplet utterances to train our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixed Dialog Model",
"sec_num": "3.2"
},
{
"text": "\u2022 mBART-dialog + : We finetune the mBART model in a dual curriculum learning method where we first finetune the mBART on an English to Code-Mixed translation task and then on a Code-Mixed dialog dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixed Dialog Model",
"sec_num": "3.2"
},
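{
"text": "Schematically, the two strategies differ only in whether a translation finetuning stage precedes the dialog finetuning stage. The following is a minimal sketch assuming a generic finetune(model, dataset) training routine; all helper names are illustrative, not the authors' code:\n\ndef build_dialog_models(mbart, translation_pairs, dialog_triplets, finetune):\n    # mBART-dialog: finetune the pre-trained mBART directly on the\n    # code-mixed dialog triplets (two-utterance context -> response).\n    mbart_dialog = finetune(mbart, dialog_triplets)\n\n    # mBART-dialog+ (dual curriculum): first finetune on the English to\n    # code-mixed translation task, then on the dialog triplets.\n    mbart_en_cm = finetune(mbart, translation_pairs)\n    mbart_dialog_plus = finetune(mbart_en_cm, dialog_triplets)\n    return mbart_dialog, mbart_dialog_plus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixed Dialog Model",
"sec_num": "3.2"
},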
{
"text": "In this section, we describe our experimental setup for both our mBART-en_cm model and the dialog model described in Section 3.1.1 & 3.2 respectively. Our proposed approach is written in Pytorch (Paszke et al., 2019) , and the mBART model weights and architecture used are from the Hug-gingFace's Transformer (Wolf et al., 2020) package. We use only the mbart-cc-25 weights in all our modeling. All the mBART based models were trained using the AdamW optimizer with weight decay. We used all the default hyperparameters except the number of training epochs. We finetuned all our mBART models for five epochs only. As discussed in Section 2, there is extremely limited prior work and literature on multilingual dialogs. Therefore, there is no baseline for us to compare our model to and we report our numbers as it is. As discussed in Section 3.1, we process our datasets from English to Code-Mixed and then into triplets. We split our processed CM-DailyDialog dataset into 8:1:1 splits for training, validation, and test set and use the additional Code-Mixed NLI dataset in conjunction with the CM-DailyDialog dataset to see any performance improvement with the increased data. We evaluate both our mBARTdialog and mBART-dialog + models on BLEU and perplexity metric and report our scores in the Table 1. To gauge the language mixing performance of our models, we also use the Code-Mixing Index (CMI) (Gamb\u00e4ck and Das, 2016) . We report sacrebleu as our BLEU metric using the HuggingFace's Dataset package.",
"cite_spans": [
{
"start": 195,
"end": 216,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 309,
"end": 328,
"text": "(Wolf et al., 2020)",
"ref_id": null
},
{
"start": 1401,
"end": 1424,
"text": "(Gamb\u00e4ck and Das, 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
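{
"text": "For concreteness, a minimal sketch of both evaluation metrics is given below; it assumes the per-utterance CMI formulation of Gamb\u00e4ck and Das (2016), the load_metric API from the datasets package of that period, and token-level language tags from some language identifier (all names besides sacrebleu are illustrative):\n\nfrom datasets import load_metric\n\ndef cmi(lang_tags):\n    # Per-utterance Code-Mixing Index: 100 * (1 - max_lang / (n - u)),\n    # where n is the total token count, u the count of language-independent\n    # tokens, and max_lang the token count of the dominant language.\n    n = len(lang_tags)\n    u = sum(1 for tag in lang_tags if tag == 'other')\n    if n == u:\n        return 0.0\n    counts = {}\n    for tag in lang_tags:\n        if tag != 'other':\n            counts[tag] = counts.get(tag, 0) + 1\n    return 100.0 * (1.0 - max(counts.values()) / (n - u))\n\nbleu = load_metric('sacrebleu')\nscore = bleu.compute(predictions=['mujhe yeh pasand hai'],\n                     references=[['mujhe yeh bahut pasand hai']])\nprint(score['score'], cmi(['hi', 'hi', 'en', 'hi']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},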
{
"text": "We also show the performance of our mBART-en_cm model on different datasets for the monolingual English to Hinglish translation task using metrics like sacrebleu(reported as BLEU) and Code-Mixing Index in Table 2 . Code-Mixed Dialogs S1: actually, fruits aur veggies tumhe ache lagte hain S2: haan, muje patha hein, lekin chicken ke baare mein kya? S1(generated): Mujhe lagta hai I'm going to make a slice of it. S1: Mike! Tumhare se sunke accha laga. Aap kaise hain?",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": null
},
{
"text": "S2: everything is fine , aur tum kaise ho? S1(generated): Main thik hoon. Tumhare sath baat krke accha laga. Table 4 : Examples of the response generated by mBART-dialog + on the CM-DailyDialog. S1 and S2 refer to Speaker 1 and 2 respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "metric. This boost in performance might be due to the model understanding code-mixed language after the first finetune and, as a result, adapting better over the code-mixed dialogs in the second finetune. We show some of the examples of our dialog model in Table 4 . We also observe that simply increasing the data does not necessarily increase the model performance and leads to a significant drop in this case. This drop might be due to the inconsistent Hindi vocabulary in the romanized form in different datasets. The same Devanagari token can be represented in various Roman scripts in different datasets. This can cause the model not to have a fixed code-mixed vocabulary, causing this confusion and, hence, a drop in model performance. Table 2 shows our mBART-en_cm model performance on different datasets. As observed previously, increasing the data leads to a drop in performance, which may be due to different datasets' vocabulary discrepancies. We use the best performing model, i.e. trained on the CMU Hinglish dataset and use that to generate our CM-DailyDialog dataset as described in Section 3.1. Table 3 shows some of the translation examples from English Dai-lyDialog to CM-DailyDialog. Table 5 shows some of the statistics of the CM-DailyDialog dataset. The CMI scores for all the splits for our generated dataset are close to that of the real world code-mixed datasets like Dhar et al. (2018) . This strengthens our intent to utilize this synthetic code-mixed dialog dataset for our dialog generation model.",
"cite_spans": [
{
"start": 1393,
"end": 1411,
"text": "Dhar et al. (2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 257,
"end": 264,
"text": "Table 4",
"ref_id": null
},
{
"start": 743,
"end": 750,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1112,
"end": 1119,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 1204,
"end": 1211,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Considering BLEU and CMI ratings do not give insight into translation errors, we use error analysis to further assess the quality of our CM-DailyDialog dataset. We assess the quality of our mBART-en_cm model's translations on the test set by grouping the different errors generated by the model into three error categories and a no error category. We randomly sample 50 sentences from our test set and bucket them into categories. We follow the error analysis categories from Gautam et al. (2021) . We employ three human raters that classify the sampled translations into the error buckets. Graduate students (non-native English speakers) familiar with the usage of code-mixed language, specifically Hinglish, in everyday life are the human annotators involved in this research. We report our numbers as a mode of three rater evalu- Table 5 : Statistics of the generated CM-DailyDialog dataset ations to account for the subjectivity among the raters. Mistranslated/Partially translation category indicates if the translation has low or no semantic resemblance with the source sentence. Morphological/Syntactical errors indicate if the translation has the same semantic meaning as the source sentence but has minor grammatical or syntax errors. NER mistranslations refer to the situation where the model translates the named entities in the generated output. Table 6 shows the results of the error analysis over the described errors categories for the 50 test translations. We observe that the model makes 12 syntactical errors and 13 partial/mistranslations out of the 50 samples. After a more nuanced analysis of these numbers, we find that most of the syntax errors were 1-2 token errors or misalignment of those tokens. We also found that out of the 13 partial/mistranslations, only 15% of the translations were complete mistranslations. Most of the sentences in this error category were partial translations where the model failed to translate and code-mix simultaneously.",
"cite_spans": [
{
"start": 476,
"end": 496,
"text": "Gautam et al. (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 833,
"end": 840,
"text": "Table 5",
"ref_id": null
},
{
"start": 1358,
"end": 1365,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Error Analysis of mBART-en_cm Translations",
"sec_num": "5.1"
},
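{
"text": "A small sketch of the mode-of-three aggregation described above, assuming each sampled translation carries one categorical label per rater (the names are illustrative):\n\nfrom statistics import mode\n\ndef aggregate_ratings(ratings_per_item):\n    # ratings_per_item: a list of 3-element lists, one error-category\n    # label per rater; the mode is the label at least two raters agree on.\n    return [mode(labels) for labels in ratings_per_item]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis of mBART-en_cm Translations",
"sec_num": "5.1"
},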
{
"text": "Table 6: Error analysis of the mBART-en_cm translations on the 50 sampled test sentences. Error Category | Freq. || Mistranslated/Partially Translated | 13 || Morphology/Syntax Issues | 12 || NER mistranslation | 1 || No error | 24",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis of mBART-en_cm Translations",
"sec_num": null
},
{
"text": "To further strengthen the assessment of the generated code-mixed dialog, we perform a human evaluation of our best performing dialog model (mBARTdialog + trained on CM-DailyDialog). We employ three human raters who rate the generated followup dialog given prior contextual dialogs. These contextual dialogs refer to the first two utterances in the triplets that we processed in Section 3.1. As previously stated, the raters here are Graduate students familiar with the usage of Hinglish. The raters were instructed to rate the quality of the dialog on a scale of 1-5, with 1 being the lowest. The quality was assessed in terms of both the coherence in the dialog and the code-mixing. We do this analysis on 50 randomly sampled dialog generations from the test set. The results of the human ratings can be seen in Figure 1 as the mean of all three raters.",
"cite_spans": [],
"ref_spans": [
{
"start": 813,
"end": 821,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Human Evaluation of Code-Mixed Dialog",
"sec_num": "5.2"
},
{
"text": "As it can be seen from Figure 1 , 60% of the dialog utterances achieve a score greater than 3. 88% of the dialog utterance are scored above 2. These numbers indicate that our machine-generated code-mixed dialog followups are of good quality both in terms of coherence as well as code-mixing.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Human Evaluation of Code-Mixed Dialog",
"sec_num": "5.2"
},
{
"text": "We introduce a new benchmark dataset for codemixed dialog generation, CM-DailyDialog, a codemixed version of the DailyDialog. Our work proposes using multilingual Transformers (mBART) and demonstrates how they help in code-mixed dialog generation. We also introduce a new monolingual English to Code-Mixed machine translation model using mBART. With our comprehensive experiments, we show the effectiveness of our approach in terms of machine translation and dialog generation and set new benchmarks in the Code-Mixed dialog generation task. The manual error analysis illustrates the quality of the new dataset, although it is synthetically generated. In terms of both automatic and human evaluation metrics, we show that the dialog generated from our model is of high quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "As part of the future work, we would like to improve our machine translation model to improve our CM-DailyDialog data that further boosts our dialog generation. Another huge scope of improvement is in the vocabulary discrepancy in different datasets, and we wish to resolve this to further boost our modeling and performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards a Human-like Open-Domain Chatbot",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Adiwardana",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "So",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Fiedel",
"suffix": ""
},
{
"first": "Romal",
"middle": [],
"last": "Thoppilan",
"suffix": ""
},
{
"first": "Zi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Apoorv",
"middle": [],
"last": "Kulshreshtha",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Nemade",
"suffix": ""
},
{
"first": "Yifeng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.09977"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. 2020. Towards a Human-like Open-Domain Chatbot. arXiv:2001.09977 [cs, stat].",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching. Association for Computational Linguistics",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Thamar Solorio, Mona Diab, and Julia Hirschberg, editors. 2018. Proceedings of the Third Workshop on Computational Approaches to Linguistic Code- Switching. Association for Computational Linguis- tics, Melbourne, Australia.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "LinCE: A centralized benchmark for linguistic code-switching evaluation",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Sudipta",
"middle": [],
"last": "Kar",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1803--1813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gustavo Aguilar, Sudipta Kar, and Thamar Solorio. 2020. LinCE: A centralized benchmark for linguis- tic code-switching evaluation. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 1803-1813, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Language therapy and bilingual aphasia: Clinical implications of psycholinguistic and neuroimaging research",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "In\u00e9s Ansaldo",
"suffix": ""
},
{
"first": "Karine",
"middle": [],
"last": "Marcotte",
"suffix": ""
},
{
"first": "Lilian",
"middle": [],
"last": "Scherer",
"suffix": ""
},
{
"first": "Gaelle",
"middle": [],
"last": "Raboyeau",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Neurolinguistics",
"volume": "21",
"issue": "6",
"pages": "539--557",
"other_ids": {
"DOI": [
"10.1016/j.jneuroling.2008.02.001"
]
},
"num": null,
"urls": [],
"raw_text": "Ana In\u00e9s Ansaldo, Karine Marcotte, Lilian Scherer, and Gaelle Raboyeau. 2008. Language therapy and bilingual aphasia: Clinical implications of psy- cholinguistic and neuroimaging research. Journal of Neurolinguistics, 21(6):539-557.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Overview of the Mixed Script Information Retrieval (MSIR)",
"authors": [
{
"first": "Somnath",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Kunal",
"middle": [],
"last": "Chakma",
"suffix": ""
},
{
"first": "Sudip",
"middle": [
"Kumar"
],
"last": "Naskar",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of FIRE 2016. FIRE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Somnath banerjee, Kunal Chakma, Sudip Kumar Naskar, Amitava Das, Paolo Rosso, Sivaji Bandy- opadhyay, and Monojit Choudhury. 2016. Overview of the Mixed Script Information Retrieval (MSIR). In Proceedings of FIRE 2016. FIRE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Do multilingual users prefer chat-bots that code-mix? let's nudge and find out!",
"authors": [
{
"first": "Anshul",
"middle": [],
"last": "Bawa",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Khadpe",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. ACM Hum.-Comput",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3392846"
]
},
"num": null,
"urls": [],
"raw_text": "Anshul Bawa, Pranav Khadpe, Pratik Joshi, Kalika Bali, and Monojit Choudhury. 2020. Do multilin- gual users prefer chat-bots that code-mix? let's nudge and find out! Proc. ACM Hum.-Comput. In- teract., 4(CSCW1).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Code switching and x-bar theory: The functional head constraint",
"authors": [
{
"first": "Hedi",
"middle": [
"M"
],
"last": "Belazi",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"J"
],
"last": "Rubin",
"suffix": ""
},
{
"first": "Almeida Jacqueline",
"middle": [],
"last": "Toribio",
"suffix": ""
}
],
"year": 1994,
"venue": "Linguistic Inquiry",
"volume": "25",
"issue": "2",
"pages": "221--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hedi M. Belazi, Edward J. Rubin, and Almeida Jacque- line Toribio. 1994. Code switching and x-bar theory: The functional head constraint. Linguistic Inquiry, 25(2):221-237.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multitask Learning. Machine Learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "28",
"issue": "",
"pages": "41--75",
"other_ids": {
"DOI": [
"10.1023/A:1007379606734"
]
},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1997. Multitask Learning. Machine Learning, 28(1):41-75.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Code-mixed question answering challenge: Crowdsourcing data and techniques",
"authors": [
{
"first": "Khyathi",
"middle": [],
"last": "Chandu",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Loginova",
"suffix": ""
},
{
"first": "Vishal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "G\u00fcnter",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Manoj",
"middle": [],
"last": "Chinnakotla",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "29--38",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3204"
]
},
"num": null,
"urls": [],
"raw_text": "Khyathi Chandu, Ekaterina Loginova, Vishal Gupta, Josef van Genabith, G\u00fcnter Neumann, Manoj Chin- nakotla, Eric Nyberg, and Alan W. Black. 2018. Code-mixed question answering challenge: Crowd- sourcing data and techniques. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 29-38, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multilingual Dialogue Generation with Shared-Private Memory",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lisong",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Zhenxin",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Junfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.02365[cs].ArXiv:1910.02365"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Chen, Lisong Qiu, Zhenxin Fu, Dongyan Zhao, Junfei Liu, and Rui Yan. 2019. Multilingual Dialogue Generation with Shared-Private Memory. arXiv:1910.02365 [cs]. ArXiv: 1910.02365.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "7057--7067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems 32: An- nual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057-7067.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bart for knowledge grounded conversations",
"authors": [
{
"first": "Maxime",
"middle": [],
"last": "De Bruyn",
"suffix": ""
},
{
"first": "Ehsan",
"middle": [],
"last": "Lotfi",
"suffix": ""
},
{
"first": "Jeska",
"middle": [],
"last": "Buhmann",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2020,
"venue": "Converse@ KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann, and Walter Daelemans. 2020. Bart for knowledge grounded conversations. In Converse@ KDD.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Enabling code-mixed translation: Parallel corpus creation and MT augmentation approach",
"authors": [
{
"first": "Mrinal",
"middle": [],
"last": "Dhar",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "131--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mrinal Dhar, Vaibhav Kumar, and Manish Shrivastava. 2018. Enabling code-mixed translation: Parallel cor- pus creation and MT augmentation approach. In Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing, pages 131-140, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Julia Hirschberg, and Thamar Solorio",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Ghoneim",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W16-58"
]
},
"num": null,
"urls": [],
"raw_text": "Mona Diab, Pascale Fung, Mahmoud Ghoneim, Ju- lia Hirschberg, and Thamar Solorio, editors. 2016. Proceedings of the Second Workshop on Computa- tional Approaches to Code Switching. Association for Computational Linguistics, Austin, Texas.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Proceedings of the First Workshop on Computational Approaches to Code Switching. Association for Computational Linguistics",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/W14-39"
]
},
"num": null,
"urls": [],
"raw_text": "Mona Diab, Julia Hirschberg, Pascale Fung, and Thamar Solorio, editors. 2014. Proceedings of the First Workshop on Computational Approaches to Code Switching. Association for Computational Lin- guistics, Doha, Qatar.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Comparing the level of code-switching in corpora",
"authors": [
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1850--1855",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bj\u00f6rn Gamb\u00e4ck and Amitava Das. 2016. Comparing the level of code-switching in corpora. In Proceed- ings of the Tenth International Conference on Lan- guage Resources and Evaluation (LREC'16), pages 1850-1855, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Code-switched language models using dual RNNs and same-source pretraining",
"authors": [
{
"first": "Saurabh",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Tanmay",
"middle": [],
"last": "Parekh",
"suffix": ""
},
{
"first": "Preethi",
"middle": [],
"last": "Jyothi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3078--3083",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1346"
]
},
"num": null,
"urls": [],
"raw_text": "Saurabh Garg, Tanmay Parekh, and Preethi Jyothi. 2018. Code-switched language models using dual RNNs and same-source pretraining. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3078-3083, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "CoMeT: Towards code-mixed translation using parallel monolingual sentences",
"authors": [
{
"first": "Devansh",
"middle": [],
"last": "Gautam",
"suffix": ""
},
{
"first": "Prashant",
"middle": [],
"last": "Kodali",
"suffix": ""
},
{
"first": "Kshitij",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Anmol",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Ponnurangam",
"middle": [],
"last": "Kumaraguru",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "47--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devansh Gautam, Prashant Kodali, Kshitij Gupta, An- mol Goel, Manish Shrivastava, and Ponnurangam Kumaraguru. 2021. CoMeT: Towards code-mixed translation using parallel monolingual sentences. In Proceedings of the Fifth Workshop on Compu- tational Approaches to Linguistic Code-Switching, pages 47-55, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A semi-supervised approach to generate the code-mixed text using pre-trained encoder and transfer learning",
"authors": [
{
"first": "Deepak",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "2267--2280",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.206"
]
},
"num": null,
"urls": [],
"raw_text": "Deepak Gupta, Asif Ekbal, and Pushpak Bhattacharyya. 2020. A semi-supervised approach to generate the code-mixed text using pre-trained encoder and trans- fer learning. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 2267- 2280, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Collecting and Annotating Indian Social Media Code-Mixed Corpora",
"authors": [
{
"first": "Anupam",
"middle": [],
"last": "Jamatia",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "406--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anupam Jamatia, Bj\u00f6rn Gamb\u00e4ck, and Amitava Das. 2018. Collecting and Annotating Indian Social Me- dia Code-Mixed Corpora. In Computational Lin- guistics and Intelligent Text Processing, pages 406- 417, Cham. Springer International Publishing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Processing of sentences with intra-sentential code-switching",
"authors": [
{
"first": "K",
"middle": [],
"last": "Aravind",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1982,
"venue": "Coling 1982: Proceedings of the Ninth International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aravind K. Joshi. 1982. Processing of sentences with intra-sentential code-switching. In Coling 1982: Proceedings of the Ninth International Conference on Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A new dataset for natural language inference from codemixed conversations",
"authors": [
{
"first": "Simran",
"middle": [],
"last": "Khanuja",
"suffix": ""
},
{
"first": "Sandipan",
"middle": [],
"last": "Dandapat",
"suffix": ""
},
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the The 4th Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simran Khanuja, Sandipan Dandapat, Sunayana Sitaram, and Monojit Choudhury. 2020a. A new dataset for natural language inference from code- mixed conversations. In Proceedings of the The 4th Workshop on Computational Approaches to Code Switching, pages 9-16, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "GLUECoS: An evaluation benchmark for code-switched NLP",
"authors": [
{
"first": "Simran",
"middle": [],
"last": "Khanuja",
"suffix": ""
},
{
"first": "Sandipan",
"middle": [],
"last": "Dandapat",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3575--3585",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.329"
]
},
"num": null,
"urls": [],
"raw_text": "Simran Khanuja, Sandipan Dandapat, Anirudh Srini- vasan, Sunayana Sitaram, and Monojit Choudhury. 2020b. GLUECoS: An evaluation benchmark for code-switched NLP. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 3575-3585, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "DailyDialog: A manually labelled multi-turn dialogue dataset",
"authors": [
{
"first": "Yanran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Shuzi",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "986--995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manu- ally labelled multi-turn dialogue dataset. In Proceed- ings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Pa- pers), pages 986-995, Taipei, Taiwan. Asian Federa- tion of Natural Language Processing.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Overview for the second shared task on language identification in code-switched data",
"authors": [
{
"first": "Giovanni",
"middle": [],
"last": "Molina",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Ghoneim",
"suffix": ""
},
{
"first": "Abdelati",
"middle": [],
"last": "Hawwari",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Rey-Villamizar",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "40--49",
"other_ids": {
"DOI": [
"10.18653/v1/W16-5805"
]
},
"num": null,
"urls": [],
"raw_text": "Giovanni Molina, Fahad AlGhamdi, Mahmoud Ghoneim, Abdelati Hawwari, Nicolas Rey- Villamizar, Mona Diab, and Thamar Solorio. 2016. Overview for the second shared task on language identification in code-switched data. In Proceedings of the Second Workshop on Computa- tional Approaches to Code Switching, pages 40-49, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "K\u00f6pf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In Advances in Neural Informa- tion Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Constraints on language mixing: Intrasentential code-switching and borrowing in spanish/english",
"authors": [
{
"first": "Carol",
"middle": [
"W"
],
"last": "Pfaff",
"suffix": ""
}
],
"year": 1979,
"venue": "Language",
"volume": "55",
"issue": "2",
"pages": "291--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carol W. Pfaff. 1979. Constraints on language mix- ing: Intrasentential code-switching and borrowing in spanish/english. Language, 55(2):291-318.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Syntactic structure and social function of code-switching",
"authors": [
{
"first": "Shana",
"middle": [],
"last": "Poplack",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "169--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shana Poplack. 1981. Syntactic structure and social function of code-switching, pages 169-184.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Cmee-il: Code mix entity extraction in indian languages from social media text @ fire 2016 -an overview",
"authors": [
{
"first": "Pattabhi",
"middle": [
"R",
"K"
],
"last": "Rao",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Devi",
"suffix": ""
}
],
"year": 2016,
"venue": "FIRE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pattabhi R. K. Rao and S. Devi. 2016. Cmee-il: Code mix entity extraction in indian languages from social media text @ fire 2016 -an overview. In FIRE.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning representations by backpropagating errors",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1986,
"venue": "Nature",
"volume": "323",
"issue": "6088",
"pages": "533--536",
"other_ids": {
"DOI": [
"10.1038/323533a0"
]
},
"num": null,
"urls": [],
"raw_text": "David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by back- propagating errors. Nature, 323(6088):533-536.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A formal grammar for code-switching",
"authors": [
{
"first": "David",
"middle": [],
"last": "Sankoff",
"suffix": ""
},
{
"first": "Shana",
"middle": [],
"last": "Poplack",
"suffix": ""
}
],
"year": 1981,
"venue": "Papers in Linguistics -International Journal of Human Communication",
"volume": "14",
"issue": "",
"pages": "3--46",
"other_ids": {
"DOI": [
"10.1080/08351818109370523"
]
},
"num": null,
"urls": [],
"raw_text": "David Sankoff and Shana Poplack. 1981. A formal grammar for code-switching. Papers in Linguistics -International Journal of Human Communication, 14:3-46.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Government and code-mixing",
"authors": [
{
"first": "Anne-Marie Di",
"middle": [],
"last": "Sciullo",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Muysken",
"suffix": ""
},
{
"first": "Rajendra",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 1986,
"venue": "Journal of Linguistics",
"volume": "22",
"issue": "",
"pages": "1--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne-Marie Di Sciullo, Pieter Muysken, and Rajendra Singh. 1986. Government and code-mixing. Jour- nal of Linguistics, 22(1):1-24.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1099"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Neural responding machine for short-text conversation",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1577--1586",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1152"
]
},
"num": null,
"urls": [],
"raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neu- ral responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1577-1586, Beijing, China. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Can you put it all together: Evaluating conversational agents' ability to blend skills",
"authors": [
{
"first": "Eric",
"middle": [
"Michael"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Williamson",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2021--2030",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.183"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 2021-2030, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Overview for the first shared task on language identification in code-switched data",
"authors": [
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Blair",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Ghoneim",
"suffix": ""
},
{
"first": "Abdelati",
"middle": [],
"last": "Hawwari",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "62--72",
"other_ids": {
"DOI": [
"10.3115/v1/W14-3907"
]
},
"num": null,
"urls": [],
"raw_text": "Thamar Solorio, Elizabeth Blair, Suraj Mahar- jan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Ju- lia Hirschberg, Alison Chang, and Pascale Fung. 2014. Overview for the first shared task on language identification in code-switched data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62-72, Doha, Qatar. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "PHINC: A parallel Hinglish social media code-mixed corpus for machine translation",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)",
"volume": "",
"issue": "",
"pages": "41--49",
"other_ids": {
"DOI": [
"10.18653/v1/2020.wnut-1.7"
]
},
"num": null,
"urls": [],
"raw_text": "Vivek Srivastava and Mayank Singh. 2020. PHINC: A parallel Hinglish social media code-mixed cor- pus for machine translation. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W- NUT 2020), pages 41-49, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems 27: Annual Conference on Neural Informa- tion Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4- 9, 2017, Long Beach, CA, USA, pages 5998-6008.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "A networkbased end-to-end trainable task-oriented dialogue system",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"M"
],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ultes",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "438--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, David Vandyke, Nikola Mrk\u0161i\u0107, Milica Ga\u0161i\u0107, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network- based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Compu- tational Linguistics: Volume 1, Long Papers, pages 438-449, Valencia, Spain. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "The dialog state tracking challenge",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Raux",
"suffix": ""
},
{
"first": "Deepak",
"middle": [],
"last": "Ramachandran",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Black",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the SIGDIAL 2013",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "",
"middle": [],
"last": "Conference",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "404--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference, pages 404-413, Metz, France. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Endto-end LSTM-based dialog control optimized with supervised and reinforcement learning",
"authors": [
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01269[cs].ArXiv:1606.01269"
]
},
"num": null,
"urls": [],
"raw_text": "Jason D. Williams and Geoffrey Zweig. 2016. End- to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv:1606.01269 [cs]. ArXiv: 1606.01269.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Topic aware neural response generation",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yalou",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3351--3357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelli- gence, February 4-9, 2017, San Francisco, Califor- nia, USA, pages 3351-3357. AAAI Press.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "DIALOGPT : Largescale generative pre-training for conversational response generation",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "270--278",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-demos.30"
]
},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT : Large- scale generative pre-training for conversational re- sponse generation. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270- 278, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "A dataset for document grounded conversations",
"authors": [
{
"first": "Kangyan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "708--713",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1076"
]
},
"num": null,
"urls": [],
"raw_text": "Kangyan Zhou, Shrimai Prabhumoye, and Alan W Black. 2018. A dataset for document grounded con- versations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 708-713, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "LinCE(Aguilar et al., 2020) Benchmark provides an English to Hinglish Code-Mixed dataset as part of their Code-Mixing benchmark for Machine Translation and GLUE. The dataset consists of roughly 10,000 English and Code-Mixed pairs.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Bucketed average rating of 3 raters over the Code-Mixed dialogs quality from the range of 1-5",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF3": {
"text": "",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF4": {
"text": "shows that the model trained using the dual curriculum learning method (mBART-dialog + ) performs better both on the BLEU as well as the CMIEnglish Dialogs to Code-Mixed Translation S1: Good afternoon. This is Michelle Li speaking, calling on behalf of IBA. Is Mr Meng available at all? S1: Accha afternoon. Ye Michelle Li speaking hai, IBA ka on behalf calling. Kya Mr Meng available hai? S2: This is Mr Meng speaking, Michelle. \u2192 S2: Ye hai Mr Meng speaking, Michelle.",
"type_str": "table",
"content": "<table><tr><td>S1: Oh, hello! Sorry about that. I'm just</td><td>S1: Oh, hello! Sorry iske bare mein. Main</td></tr><tr><td>calling to say that we've received your new</td><td>bas kah raha hoon ki hamen apna new Corpo-</td></tr><tr><td>Corporate Credit Card from HQ.</td><td>rate Credit Card mil gaya hai HQ.</td></tr></table>",
"html": null,
"num": null
},
"TABREF5": {
"text": "Translating DailyDailog dataset from English to Code-Mixed. Blue tokens refer to the Hindi tokens in the Roman script. S1 and S2 refer to Speaker 1 and 2 respectively.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF7": {
"text": "Error Analysis on 50 randomly sampled test translated sentences from our best performing mBART-en_cm model on CMU Hinglish Dataset",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}