{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:33:04.048969Z" }, "title": "ASR Adaptation for E-commerce Chatbots using Cross-Utterance Context and Multi-Task Language Modeling", "authors": [ { "first": "Ashish", "middle": [], "last": "Shenoy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon AWS AI", "location": { "country": "USA" } }, "email": "ashenoy@amazon.com" }, { "first": "Sravan", "middle": [], "last": "Bodapati", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon AWS AI", "location": { "country": "USA" } }, "email": "sravanb@amazon.com" }, { "first": "Katrin", "middle": [], "last": "Kirchhoff", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon AWS AI", "location": { "country": "USA" } }, "email": "katrinki@amazon.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automatic Speech Recognition (ASR) robustness toward slot entities are critical in ecommerce voice assistants that involve monetary transactions and purchases. Along with effective domain adaptation, it is intuitive that cross utterance contextual cues play an important role in disambiguating domain specific content words from speech. In this paper, we investigate various techniques to improve contextualization, content word robustness and domain adaptation of a Transformer-XL neural language model (NLM) to rescore ASR N-best hypotheses. To improve contextualization, we utilize turn level dialogue acts along with cross utterance context carry over. Additionally, to adapt our domaingeneral NLM towards e-commerce on-the-fly, we use embeddings derived from a finetuned masked LM on in-domain data. Finally, to improve robustness towards in-domain content words, we propose a multi-task model that can jointly perform content word detection and language modeling tasks. Compared to a noncontextual LSTM LM baseline, our best performing NLM rescorer results in a content WER reduction of 19.2% on e-commerce audio test set and a slot labeling F1 improvement of 6.4%.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Automatic Speech Recognition (ASR) robustness toward slot entities are critical in ecommerce voice assistants that involve monetary transactions and purchases. Along with effective domain adaptation, it is intuitive that cross utterance contextual cues play an important role in disambiguating domain specific content words from speech. In this paper, we investigate various techniques to improve contextualization, content word robustness and domain adaptation of a Transformer-XL neural language model (NLM) to rescore ASR N-best hypotheses. To improve contextualization, we utilize turn level dialogue acts along with cross utterance context carry over. Additionally, to adapt our domaingeneral NLM towards e-commerce on-the-fly, we use embeddings derived from a finetuned masked LM on in-domain data. Finally, to improve robustness towards in-domain content words, we propose a multi-task model that can jointly perform content word detection and language modeling tasks. 
Compared to a non-contextual LSTM LM baseline, our best performing NLM rescorer results in a content WER reduction of 19.2% on an e-commerce audio test set and a slot labeling F1 improvement of 6.4%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Task-oriented conversations in voice chatbots deployed for e-commerce use cases such as shopping (Maarek, 2018), browsing catalogs, scheduling deliveries or ordering food are predominantly short-form audio. Moreover, these dialogues are restricted to a narrow range of multi-turn interactions that involve accomplishing a specific task (Mari et al., 2020). The back and forth between a user and the chatbot is key to reliably capturing the user intent and slot entities referenced in the spoken utterances. As shown in previous work (Irie et al., 2019; Parthasarathy et al., 2019; Sun et al., 2021), rather than decoding each utterance independently, there can be benefit in decoding these utterances based on context from previous turns. In the case of grocery shopping, for example, knowing that the context is \"what kind of laundry detergent?\" should help in disambiguating \"pods\" from \"pause\". Another common aspect of e-commerce chatbots is that speech patterns differ among sub-categories of use cases (e.g., shopping for clothes vs. ordering fast food). Hence, some chatbot systems allow users to provide pre-defined grammars or sample utterances that are specific to their use case. These user-provided grammars are then predominantly used to perform domain adaptation of an n-gram language model. Recently, (Shenoy et al., 2021) showed that these can be leveraged to bias a Transformer-XL (TXL) LM rescorer on-the-fly.", "cite_spans": [ { "start": 96, "end": 110, "text": "(Maarek, 2018)", "ref_id": "BIBREF18" }, { "start": 335, "end": 354, "text": "(Mari et al., 2020)", "ref_id": "BIBREF19" }, { "start": 534, "end": 553, "text": "(Irie et al., 2019;", "ref_id": "BIBREF10" }, { "start": 554, "end": 581, "text": "Parthasarathy et al., 2019;", "ref_id": "BIBREF21" }, { "start": 582, "end": 599, "text": "Sun et al., 2021)", "ref_id": "BIBREF30" }, { "start": 1313, "end": 1334, "text": "(Shenoy et al., 2021)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While there has been extensive previous work on improving the contextualization of TXL LMs using historical context, none of these approaches utilize signals from a natural language understanding (NLU) component, such as turn-level dialogue acts. This paper investigates how to utilize dialogue acts along with user-provided speech patterns to adapt a domain-general TXL LM towards different e-commerce use cases on-the-fly. We also propose a novel multi-task architecture for TXL, where the model jointly learns to perform domain-specific slot detection and LM tasks. We use perplexity (PPL) and word error rate (WER) as our evaluation metrics. We also evaluate on downstream NLU metrics such as intent classification (IC) F1 and slot labeling (SL) F1 to capture the success of these conversations. 
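To make these metrics concrete, the sketch below shows word error rate (WER) as length-normalized edit distance, together with one plausible way to compute a content-word-restricted WER over slot entities. This is a minimal illustration assuming standard definitions; the helper names and the exact content-WER filtering scheme are not taken from the paper.

```python
# Minimal sketch of WER and a content-word-restricted WER (illustrative only).

def edit_distance(ref, hyp):
    """Levenshtein distance between two token lists."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)]

def wer(ref: str, hyp: str) -> float:
    ref_toks, hyp_toks = ref.split(), hyp.split()
    return edit_distance(ref_toks, hyp_toks) / max(len(ref_toks), 1)

def content_wer(ref: str, hyp: str, content_words: set) -> float:
    """WER computed only over tokens from a content-word/slot-entity list
    (an assumed filtering scheme; the paper's exact definition may differ)."""
    ref_toks = [w for w in ref.split() if w in content_words]
    hyp_toks = [w for w in hyp.split() if w in content_words]
    return edit_distance(ref_toks, hyp_toks) / max(len(ref_toks), 1)

if __name__ == "__main__":
    ref = "add tide pods to my cart"
    hyp = "add tide pause to my cart"
    print(wer(ref, hyp))                                     # 1 error / 6 words
    print(content_wer(ref, hyp, {"tide", "pods", "pause"}))  # 1 error / 2 content words
```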
The overall contributions of this work can be summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show that a TXL model that utilizes turn-level dialogue act information along with long-span context helps with contextualization and improves WER and IC F1 in e-commerce chatbots.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 To improve robustness towards e-commerce domain-specific slot entities, we propose a novel TXL architecture that is jointly trained on slot detection and LM tasks, which significantly improves content WERR and SL F1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show that adapting the NLM towards user-provided speech patterns using BERT on domain-specific text is an efficient and effective method to perform on-the-fly adaptation of a domain-general NLM towards e-commerce utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Incorporating cross-utterance context has been well explored with both recurrent and non-recurrent NLMs. With LSTM NLMs, long-span context is usually propagated by not resetting hidden states across sentences or by using longer sequence lengths (Xiong et al., 2018a; Irie et al., 2019; Khandelwal et al., 2018; Parthasarathy et al., 2019). In (Xiong et al., 2018b), along with a longer history, information about turn-taking and speaker overlap is used to improve contextualization in human-to-human conversations. Building on the Transformer architecture based on self-attention (Vaswani et al., 2017), (Dai et al., 2019) showed that by utilizing segment-wise recurrence, Transformer-XL (TXL) is able to effectively leverage long-span context while decoding. More recently, work on improving the contextualization of TXL models has included adding an LSTM fusion layer to combine the advantages of recurrent and non-recurrent models (Sun et al., 2021). (Shenoy et al., 2021) incorporated a non-finetuned masked LM fusion to make domain adaptation of TXL models quick and on-the-fly, using embeddings derived from customer-provided data, and also incorporated dialogue acts, but only with an LSTM-based LM, while (Sunkara et al., 2020) fused multi-modal features into a sequence-to-sequence LSTM-based network. In (Sharma, 2020), cross-utterance context was effectively used to improve intent classification in e-commerce voice assistants. For domain adaptation, previously explored techniques include using an explicit topic vector predicted by a separate domain classifier and incorporating a neural cache (Mikolov and Zweig, 2019; Li et al., 2018; Raju et al., 2018; Chen et al., 2015). (Irie et al., 2018) used a mixture of domain experts that are dynamically interpolated. It is also shown in , that using a hybrid pointer network over contextual metadata can help in transcribing long-form social media audio. Jointly learning NLU tasks such as intent detection and slot filling has been explored with RNN-based LMs in (Liu and Lane, 2016) and more recently in (Rao et al., 2020), where the authors show that a jointly trained model covering both ASR and NLU tasks, connected through a neural network based interface, helps incorporate semantic information from NLU and improves an ASR system that uses an LSTM-based NLM. 
In , the authors tried to incorporate joint slot and intent detection into an LSTM-based rescorer with the goal of improving accuracy on rare words in an end-to-end ASR system.", "cite_spans": [ { "start": 243, "end": 264, "text": "(Xiong et al., 2018a;", "ref_id": "BIBREF33" }, { "start": 265, "end": 283, "text": "Irie et al., 2019;", "ref_id": "BIBREF10" }, { "start": 284, "end": 308, "text": "Khandelwal et al., 2018;", "ref_id": "BIBREF11" }, { "start": 309, "end": 336, "text": "Parthasarathy et al., 2019)", "ref_id": "BIBREF21" }, { "start": 342, "end": 363, "text": "(Xiong et al., 2018b)", "ref_id": "BIBREF34" }, { "start": 567, "end": 589, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF32" }, { "start": 590, "end": 608, "text": "(Dai et al., 2019)", "ref_id": "BIBREF5" }, { "start": 679, "end": 696, "text": "(Dai et al., 2019", "ref_id": "BIBREF5" }, { "start": 930, "end": 948, "text": "(Sun et al., 2021)", "ref_id": "BIBREF30" }, { "start": 951, "end": 971, "text": "(Shenoy et al., 2021", "ref_id": "BIBREF29" }, { "start": 1216, "end": 1238, "text": "(Sunkara et al., 2020)", "ref_id": "BIBREF31" }, { "start": 1315, "end": 1329, "text": "(Sharma, 2020)", "ref_id": "BIBREF28" }, { "start": 1619, "end": 1644, "text": "(Mikolov and Zweig, 2019;", "ref_id": "BIBREF20" }, { "start": 1645, "end": 1661, "text": "Li et al., 2018;", "ref_id": "BIBREF14" }, { "start": 1662, "end": 1680, "text": "Raju et al., 2018;", "ref_id": "BIBREF24" }, { "start": 1681, "end": 1699, "text": "Chen et al., 2015)", "ref_id": "BIBREF4" }, { "start": 1702, "end": 1721, "text": "(Irie et al., 2018)", "ref_id": "BIBREF9" }, { "start": 2042, "end": 2062, "text": "(Liu and Lane, 2016)", "ref_id": "BIBREF15" }, { "start": 2084, "end": 2102, "text": "(Rao et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "However, none of the previous works utilize dialogue acts with a non-recurrent LM such as Transformer-XL, nor do they optimize towards improving the robustness of in-domain slot entities. In this paper, we study the impact of utilizing dialogue acts along with masked language model fusion to improve contextualization and domain adaptation. Additionally, we propose a novel multi-task TXL LM architecture that improves robustness towards in-domain slot entity detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A standard language model in an ASR system computes a probability distribution over a sequence of words W = w_0, ..., w_N auto-regressively as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(W) = \\prod_{i=1}^{N} p(w_i | w_1, w_2, ..., w_{i-1})", "eq_num": "(1)" } ], "section": "Approach", "sec_num": "3" }, { "text": "In our experiments, along with historical context, we condition the LM on additional contextual metadata such as dialogue acts:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "p(W) = \\prod_{i=1}^{N} p(w_i | w_1, w_2, ..., w_{i-1}, c_1, c_2, ..., c_k) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "where c_1, c_2, ..., c_k are the turn-based lexical representations of the contextual metadata. 
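To illustrate how Eq. (2) can be used at rescoring time, the sketch below scores each N-best hypothesis with an LM conditioned on the previous-turn text and a lexicalized turn-level dialogue-act tag, and interpolates that score with the first-pass ASR score. The scoring function, tag names, and interpolation weight are hypothetical placeholders rather than the paper's actual implementation.

```python
# Illustrative sketch of context-conditioned N-best rescoring in the spirit of Eq. (2).
# score_tokens, the <dialogue_act> tag format, and lm_weight are placeholders; a real
# system would call the trained Transformer-XL rescorer here.

from typing import List, Tuple

def score_tokens(tokens: List[str]) -> float:
    """Stand-in for the NLM: should return sum_i log p(w_i | w_1..w_{i-1}, c_1..c_k)."""
    return -1.5 * len(tokens)  # toy constant-per-token score so the sketch runs

def rescore_nbest(nbest: List[Tuple[str, float]],
                  prev_turns: List[str],
                  dialogue_act: str,
                  lm_weight: float = 0.5) -> List[Tuple[str, float]]:
    """Re-ranks (hypothesis, first_pass_score) pairs using a context-conditioned LM."""
    reranked = []
    for hyp, first_pass_score in nbest:
        # c_1..c_k: the turn-level dialogue act rendered as a lexical tag, followed
        # by the carried-over words of previous turns (cross-utterance context).
        context = [f"<{dialogue_act}>"] + " ".join(prev_turns).split()
        lm_score = score_tokens(context + hyp.split())
        reranked.append((hyp, first_pass_score + lm_weight * lm_score))
    return sorted(reranked, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    nbest = [("add tide pause to my cart", -12.0),
             ("add tide pods to my cart", -12.3)]
    print(rescore_nbest(nbest,
                        prev_turns=["what kind of laundry detergent"],
                        dialogue_act="inform_slot"))
```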
As a baseline, we use a standard LSTM LM, summarized below, where embed_i is a fixed-size, lower-dimensional word embedding and the LSTM outputs are projected to word-level outputs using W^T_{ho}. A Softmax layer converts the word-level outputs into final word-level probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "embed i = E T ke w i\u22121 c i , h i = LST M (h i\u22121 , c i\u22121 , embed i ) p(w i |w