{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:07.691396Z"
},
"title": "Improving Intent Classification in an E-commerce Voice Assistant by Using Inter-Utterance Context",
"authors": [
{
"first": "Arpit",
"middle": [],
"last": "Sharma",
"suffix": "",
"affiliation": {},
"email": "arpit.sharma@walmartlabs.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this work, we improve the intent classification in an English based e-commerce voice assistant by using inter-utterance context. For increased user adaptation and hence being more profitable, an e-commerce voice assistant is desired to understand the context of a conversation and not have the users repeat it in every utterance. For example, let a user's first utterance be 'find apples'. Then, the user may say 'i want organic only' to filter out the results generated by an assistant with respect to the first query. So, it is important for the assistant to take into account the context from the user's first utterance to understand her intention in the second one. In this paper, we present our approach for contextual intent classification in Walmart's e-commerce voice assistant. It uses the intent of the previous user utterance to predict the intent of her current utterance. With the help of experiments performed on real user queries we show that our approach improves the intent classification in the assistant.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this work, we improve the intent classification in an English based e-commerce voice assistant by using inter-utterance context. For increased user adaptation and hence being more profitable, an e-commerce voice assistant is desired to understand the context of a conversation and not have the users repeat it in every utterance. For example, let a user's first utterance be 'find apples'. Then, the user may say 'i want organic only' to filter out the results generated by an assistant with respect to the first query. So, it is important for the assistant to take into account the context from the user's first utterance to understand her intention in the second one. In this paper, we present our approach for contextual intent classification in Walmart's e-commerce voice assistant. It uses the intent of the previous user utterance to predict the intent of her current utterance. With the help of experiments performed on real user queries we show that our approach improves the intent classification in the assistant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, there has been a notable advancement in the field of voice assistants 1 . Consequently, voice assistants are being deployed heavily in lucrative domains such as e-commerce (Mari et al., 2020) , customer service (Cui et al., 2017) and healthcare (Mavropoulos et al., 2019) . There are various ecommerce based voice assistants available in the market, including Amazon's Alexa voice shopping (Maarek, 2018) and Walmart's on Google assistant/Siri 2 . Their goal is to free us from the tedious task of buying stuff by visiting stores and websites. A major challenge in the fulfilment of this goal is their capability to precisely understand an utterance in a dialog without providing much context 1 https://bit.ly/2Xr71xv 2 https://bit.ly/33TmwkN, https://bit.ly/2JqH9eO in the utterance. For example, an assistant must precisely understand that when a user says 'five' after a query to add bananas to her cart then she intents to add five bananas to her cart. Whereas if a user says 'five' as her first utterance to the shopping assistant then her intention is unknown (i.e., it does not represent any e-commerce action at the start of a conversation). Handling such scenarios require the Natural Language Understanding (NLU) component in the assistants to utilize the context while predicting the intent associated with an utterance. The current intent prediction systems (Chen et al., 2019; Goo et al., 2018; Liu and Lane, 2016) do not focus on such contextual dependence. In this work we integrate inter-utterance contextual features in the NLU component of the shopping assistant of Walmart company to improve its intent classification.",
"cite_spans": [
{
"start": 182,
"end": 201,
"text": "(Mari et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 221,
"end": 239,
"text": "(Cui et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 255,
"end": 281,
"text": "(Mavropoulos et al., 2019)",
"ref_id": null
},
{
"start": 400,
"end": 414,
"text": "(Maarek, 2018)",
"ref_id": "BIBREF9"
},
{
"start": 1380,
"end": 1399,
"text": "(Chen et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 1400,
"end": 1417,
"text": "Goo et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 1418,
"end": 1437,
"text": "Liu and Lane, 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are four main aspects of a voice assistant, namely, Speech-to-Text, NLU, Dialog Management (DM) and Text-to-Speech. The NLU component (such as the one in Liu and Lane (2016)) identifies intent(s) and entities in a user utterance. The dialog manager uses the output of the NLU component to prepare a suitable response for the user. The NLU systems in the currently available voice enabled shopping assistants 3 do not focus on inter-utterance context and hence the onus of context disambiguation lies upon the dialog manager. Although it is possible to capture a small number of such cases in the dialog manager, it becomes difficult for it to scale for large number of contextually dependent utterances. For example, let us consider the user utterance 'five' after the utterance to add something to her cart. Then a dialog manager can predict its intent by using the rule: if previous intent = add to cart and the query is an integer then intent = add to cart else intent = un-known. But such a general rule can not be created for many other queries such as 'organic please' (previous intent = add to cart, intent = filter) and 'stop please' (previous intent = add to cart, intent = stop).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present our work to improve the intent classification in the shopping assistant of Walmart company by using inter-utterance context. Our work also reduces the contextual disambiguation burden from the dialog manager. Here, we implement our approach using two neural network based architectures. With the help of experiments we also compare the two implementations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Various intent classification works (Chen et al., 2019; Goo et al., 2018; Liu and Lane, 2016) have been proposed in the recent years. Most of them mainly focus on the current utterance only and try to predict its intent based on the information present in it. We take it one step further and use the context from the immediately previous utterance.",
"cite_spans": [
{
"start": 36,
"end": 55,
"text": "(Chen et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 56,
"end": 73,
"text": "Goo et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 74,
"end": 93,
"text": "Liu and Lane, 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A work which focuses on the contextual information while predicting the intents is mentioned in Naik et al. (2018) . It uses visual information as context. Although it is useful in an assistant which comes with a screen, it is not applicable for a voice only assistant.",
"cite_spans": [
{
"start": 96,
"end": 114,
"text": "Naik et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another work (Mensio et al., 2018) uses an entire conversation (intents of all the utterances and all the replies from the assistant) as context. Although it makes sense to use the entire conversation as context in a general purpose chat-bot, we believe that in the e-commerce domain a conversation between an assistant and a user is fragmented into smaller goals such as 'finding an item in the inventory' and 'adding an item in cart'. Each such goal should be defined by a small number of utterances only. This is because the objective of the voice assistant is to simplify the shopping experience for the user instead of engaging her in a long conversation for a simpler task. So, in our work we use the intent of the previous utterance only as the context for the intent prediction of the current utterance.",
"cite_spans": [
{
"start": 13,
"end": 34,
"text": "(Mensio et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section we provide the data generation details and a detailed overview of our context aware intent classification implementations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Generation & Our Approach",
"sec_num": "3"
},
{
"text": "In this work our main goal is to improve the intent classification for e-commerce related utterances. We achieve this goal by using inter-utterance context. There are various data sets (ATIS (Hemphill et al., 1990) , SNIPS (Coucke et al., 2018) ) for intent classification. None of them contain contextual data instances. Furthermore, they do not focus on e-commerce related data. So, as part of this work we generated a context aware data set for e-commerce specific queries. We used a template based approach to generate the data. It was inspired by our previous work as mentioned in Sharma et al. (2018) . Following are the two main steps in the data generation phase. \u2022",
"cite_spans": [
{
"start": 185,
"end": 214,
"text": "(ATIS (Hemphill et al., 1990)",
"ref_id": null
},
{
"start": 223,
"end": 244,
"text": "(Coucke et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 586,
"end": 606,
"text": "Sharma et al. (2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Data Generation",
"sec_num": "3.1"
},
{
"text": "Step 2: In this step the placeholders in the template and intent combinations are replaced with their possible values from predefined sets to generate the data points. Multiple data points are generated from each template combination by using the placeholders' values (10 distinct values for each placeholder in each template). The possible values of the placeholders are extracted from the products' catalog of Walmart company.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Data Generation",
"sec_num": "3.1"
},
{
"text": "The data generated by using the above steps was found to be unbalanced in the following two ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fixing Unbalanced Data",
"sec_num": "3.1.1"
},
{
"text": "1. We found that the generated data contained more instances corresponding to some intents than others. This is because the templates corresponding to some intents (such as add to cart) contained placeholders which were replaced by many possible values whereas the templates corresponding to other intents (such as stopping a conversation) did not contain any placeholders. To resolve this kind of skewness we performed Random Oversampling (ROS) (Rathpisey and Adji, 2019) with respect to the intents with minimal data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fixing Unbalanced Data",
"sec_num": "3.1.1"
},
{
"text": "2. The contextual instances in the generated data were overwhelmed by their non-contextual parts. Let T be a contextual template such that its intent is I iff the previous intent is P otherwise its intent is unknown. Since there are a total of 32 intents, T corresponds to 32 contextual template combinations of previous intent and current intent. In 31 among those, the current intent of T is unknown whereas only one corresponds to I. To resolve this kind of imbalance we performed Random Oversampling (ROS) (Rathpisey and Adji, 2019) with respect to the one combination mentioned above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fixing Unbalanced Data",
"sec_num": "3.1.1"
},
{
"text": "We used two different neural network architectures for two separate implementations of our approach. The two architectures are as shown in the Figure 1 . Following are the details of the architectures.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3.2"
},
{
"text": "1. Bi-LSTM+FeedForward+GRU: The first architecture is inspired by the work in Mensio et al. (2018) . It has two main components. First, a bi-LSTM (Huang et al., 2015; Plank et al., 2016) en-coder which generates a vector encoding of an input utterance. The vector represents a summarized version of the utterance. It is generated from the embeddings of the words in the query. We experimented with different pre-trained language models (BERT (Devlin et al., 2018) and Glove (Pennington et al., 2014) ) to retrieve the initial word embedding. Second component is a GRU (Chung et al., 2014) layer whose inputs consist of the output of a feed forward layer with respect to the embedding generated by the bi-LSTM encoder, along with the one-hot encoding of the previous utterance's intent. The output of this layer is the intent of the input utterance (See Figure 1) .",
"cite_spans": [
{
"start": 78,
"end": 98,
"text": "Mensio et al. (2018)",
"ref_id": "BIBREF12"
},
{
"start": 146,
"end": 166,
"text": "(Huang et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 167,
"end": 186,
"text": "Plank et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 442,
"end": 463,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 474,
"end": 499,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 568,
"end": 588,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 853,
"end": 862,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3.2"
},
{
"text": "2. Bi-LSTM+FeedForward: Similar to the first architecture, the second one also has two components, a bi-LSTM layer followed by a feed forward layer. As in the first architecture, the bi-LSTM layer generates a vector which represents a user utterance. The output of the bi-LSTM layer is then concatenated with the one-hot encoding of the previous intent and entered as an input to a feed forward layer (See Figure 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 406,
"end": 415,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3.2"
},
{
"text": "The main goal of this work is to improve the intent classification in the NLU component of Walmart's shopping assistant by using inter-utterance context. In this section we present the quantitative details of the data, the experiments and the analysis of the results. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation & Results",
"sec_num": "4"
},
{
"text": "We followed the data generation steps mentioned in the Section 3.1 with and without oversampling to generate a set of 730K and 570K instances respectively. Each set is split into training (85%) and validation (15%). For testing the trained models we used the real user queries which were taken from the live logs of Walmart's shopping assistant. We selected all the queries from two weeks of live logs. Then we filtered the retrieved queries by keeping only the unique ones. By following the above steps, we got 2550 unique user queries in our test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Details",
"sec_num": "4.1"
},
{
"text": "The hypothesis that led to this work states that contextual evidence is needed to find the correct intent of a user utterance in an e-commerce voice assistant. To test the hypothesis, in this experiment we analyzed the set of 2550 unique user utterances from user logs. 40.11% of those (i.e., 1023) were found to be contextual. For example 'give me a smaller size' is one such contextual query which when appears after an add to cart utterance, implies filtering the results of the previous utterance whereas when appeared first in a conversation between a user and a voice assistant, does not make sense (or unknown intent). The add to cart and search intents are most popular among the logs. We found that 1005 out of 1023 contextual queries were related to those intents. This emphasizes the importance of contextual disambiguation even further. 917 (approx. 90%) among 1023 were correctly classified by the best performing version (BERT based) of our implementations (see Table 1 ). The current, non-contextual intent classifier of the Walmart's shopping assistant classifies all the contextual queries as one intent (say abc). The contextual disambiguation burden lies on the dialog manager by using rules (as mentioned in the Section 1). Presently, a total of 12 rules exist to handle contextual templates corresponding to one intent (out of 32) only. We also found that out of 1023 contextual queries, about 88% are classified by the non-contextual intent classifier as abc.",
"cite_spans": [],
"ref_spans": [
{
"start": 976,
"end": 984,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 1",
"sec_num": "4.2"
},
{
"text": "In this experiment, we tested our implementations with respect to different input word embedding and data (with oversampling and without oversampling). We used BERT's huggingface 4 pretrained embedding (length=768), and Glove 5 6 billion and 840 billion pre-trained embedding (length=300). The evaluation results are as shown in the Table 1 . Each model was trained for 10 epochs. The BERT embedding of a word was calculated by taking an average of the last 4 layers in the 12-layer BERT pre-trained model 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2",
"sec_num": "4.3"
},
{
"text": "The results of experiment 1 show the usefulness of contextual intent classification. The results of experiment 2 show that the Bi-LSTM and GRU based implementation performs best with an overall accuracy of 87.68% on all the live user logs and approximately 90% on the contextual logs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results Analysis",
"sec_num": "4.4"
},
{
"text": "Inference speed plays an important role in production deployment of models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results Analysis",
"sec_num": "4.4"
},
{
"text": "Although the performance of BERT based Bi-LSTM+FeedForward+GRU is better than the Glove (840B) based Bi-LSTM+FeedForward, the latency of first (450 milliseconds, averaged over 2550 queries on CPU) one is considerably more than the second (5 milliseconds on CPU). See Table 2 for inference speeds of the different models.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 275,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results Analysis",
"sec_num": "4.4"
},
{
"text": "We observed that many (about 50%) errors in both the implementations were caused by the inability of the training data to represent real user data. A way to address such errors is by using real user queries also to train the models. It requires manual effort to label user logs. We are currently in the process of such an effort through crowd workers. We believe that using a combination of templates based and real user data will improve the accuracy of the implementations even further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results Analysis",
"sec_num": "4.4"
},
{
"text": "In this paper, we presented our work of improving intent classification in Walmart's shopping assistant. We used previous utterance's intent as context to identify current utterance's intent. The contextual update in the NLU layer (intent classification) also takes the burden of intent based contextual disambiguation away from a dialog manager. As hypothesized, the experimental results show that our approach improves the intent classification by handling contextual queries. We presented two implementations of our approach and compared them with respect to live user logs. Though, in this work our main focus was on the contextual disambiguation of intents, the entities are also contextually dependent. For example 'five' uttered after 'add bananas' may refer to the quantity five whereas if uttered after 'pick a delivery time' may refer to the time of day five (am/pm). In future we would like to use contextual features to disambiguate between entities and improve the entity tagging as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "5"
},
{
"text": "https://bit.ly/2ZAa5KA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/huggingface/transformers 5 https://nlp.stanford.edu/projects/glove/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "I would like to acknowledge the support of my colleagues in the Emerging Technologies team at Wal-martLabs, Sunnyvale, CA office. They provided their valuable feedback which helped in maturing this submission to a publishable states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bert for joint intent classification and slot filling",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhu",
"middle": [],
"last": "Zhuo",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.10909"
]
},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Zhu Zhuo, and Wen Wang. 2019. Bert for joint intent classification and slot filling. arXiv preprint arXiv:1902.10909.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.3555"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arXiv:1412.3555.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Coucke",
"suffix": ""
},
{
"first": "Alaa",
"middle": [],
"last": "Saade",
"suffix": ""
},
{
"first": "Adrien",
"middle": [],
"last": "Ball",
"suffix": ""
},
{
"first": "Th\u00e9odore",
"middle": [],
"last": "Bluche",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Caulier",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Leroy",
"suffix": ""
},
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Doumouro",
"suffix": ""
},
{
"first": "Thibault",
"middle": [],
"last": "Gisselbrecht",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Caltagirone",
"suffix": ""
},
{
"first": "Thibaut",
"middle": [],
"last": "Lavril",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.10190"
]
},
"num": null,
"urls": [],
"raw_text": "Alice Coucke, Alaa Saade, Adrien Ball, Th\u00e9odore Bluche, Alexandre Caulier, David Leroy, Cl\u00e9ment Doumouro, Thibault Gisselbrecht, Francesco Calta- girone, Thibaut Lavril, et al. 2018. Snips voice plat- form: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Superagent: A customer service chatbot for e-commerce websites",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Shaohan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Chuanqi",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Chaoqun",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "97--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Cui, Shaohan Huang, Furu Wei, Chuanqi Tan, Chaoqun Duan, and Ming Zhou. 2017. Superagent: A customer service chatbot for e-commerce web- sites. In Proceedings of ACL 2017, System Demon- strations, pages 97-102.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Slot-gated modeling for joint slot filling and intent prediction",
"authors": [
{
"first": "Guang",
"middle": [],
"last": "Chih-Wen Goo",
"suffix": ""
},
{
"first": "Yun-Kai",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Chih-Li",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Tsung-Chieh",
"middle": [],
"last": "Huo",
"suffix": ""
},
{
"first": "Keng-Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "753--757",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun- Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Pa- pers), pages 753-757.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The atis spoken language systems pilot corpus",
"authors": [
{
"first": "T",
"middle": [],
"last": "Charles",
"suffix": ""
},
{
"first": "John",
"middle": [
"J"
],
"last": "Hemphill",
"suffix": ""
},
{
"first": "George",
"middle": [
"R"
],
"last": "Godfrey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 1990,
"venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles T Hemphill, John J Godfrey, and George R Doddington. 1990. The atis spoken language sys- tems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Attention-based recurrent neural network models for joint intent detection and slot filling",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.01454"
]
},
"num": null,
"urls": [],
"raw_text": "Bing Liu and Ian Lane. 2016. Attention-based recur- rent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Alexa and her shopping journey",
"authors": [
{
"first": "Yoelle",
"middle": [],
"last": "Maarek",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1--1",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoelle Maarek. 2018. Alexa and her shopping jour- ney. In Proceedings of the 27th ACM International Conference on Information and Knowledge Manage- ment, pages 1-1.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The evolution of marketing in the context of voice commerce: A managerial perspective",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Mari",
"suffix": ""
},
{
"first": "Andreina",
"middle": [],
"last": "Mandelli",
"suffix": ""
},
{
"first": "Ren\u00e9",
"middle": [],
"last": "Algesheimer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceeding of the 22nd International Conference on Human-Computer Interaction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Mari, Andreina Mandelli, and Ren\u00e9 Algesheimer. 2020. The evolution of marketing in the context of voice commerce: A managerial perspective. In Proceeding of the 22nd International Conference on Human-Computer Interaction.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dimitris Tzimikas, Lefteris Papageorgiou, Christos Eleftheriadis",
"authors": [
{
"first": "Thanassis",
"middle": [],
"last": "Mavropoulos",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Meditskos",
"suffix": ""
},
{
"first": "Spyridon",
"middle": [],
"last": "Symeonidis",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Kamateri",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Rousi",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanassis Mavropoulos, Georgios Meditskos, Spyri- don Symeonidis, Eleni Kamateri, Maria Rousi, Dim- itris Tzimikas, Lefteris Papageorgiou, Christos Eleft- heriadis, George Adamopoulos, Stefanos Vrochidis, et al. 2019. A context-aware conversational agent in the rehabilitation domain. Future Internet, 11(11):231.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multi-turn QA: A RNN contextual approach to intent classification for goal-oriented systems",
"authors": [
{
"first": "Martino",
"middle": [],
"last": "Mensio",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Rizzo",
"suffix": ""
},
{
"first": "Maurizio",
"middle": [],
"last": "Morisio",
"suffix": ""
}
],
"year": 2018,
"venue": "Companion Proceedings of the The Web Conference",
"volume": "",
"issue": "",
"pages": "1075--1080",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martino Mensio, Giuseppe Rizzo, and Maurizio Mori- sio. 2018. Multi-turn qa: A rnn contextual approach to intent classification for goal-oriented systems. In Companion Proceedings of the The Web Conference 2018, pages 1075-1080.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Context aware conversational understanding for intelligent agents with a screen",
"authors": [
{
"first": "Vishal",
"middle": [
"Ishwar"
],
"last": "Naik",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Metallinou",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Goel",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vishal Ishwar Naik, Angeliki Metallinou, and Rahul Goel. 2018. Context aware conversational under- standing for intelligent agents with a screen. In Thirty-Second AAAI Conference on Artificial Intel- ligence.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.05529"
]
},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Anders S\u00f8gaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidi- rectional long short-term memory models and auxil- iary loss. arXiv preprint arXiv:1604.05529.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Handling imbalance issue in hate speech classification using sampling-based methods",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Rathpisey",
"suffix": ""
},
{
"first": "Teguh",
"middle": [],
"last": "Bharata Adji",
"suffix": ""
}
],
"year": 2019,
"venue": "5th International Conference on Science in Information Technology (ICSITech)",
"volume": "",
"issue": "",
"pages": "193--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Rathpisey and Teguh Bharata Adji. 2019. Han- dling imbalance issue in hate speech classification using sampling-based methods. In 2019 5th Interna- tional Conference on Science in Information Tech- nology (ICSITech), pages 193-198. IEEE.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semantic representation and parsing for voice ecommerce",
"authors": [
{
"first": "Arpit",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kaul",
"suffix": ""
},
{
"first": "Shankara",
"middle": [
"B"
],
"last": "Subramanya",
"suffix": ""
}
],
"year": 2018,
"venue": "R2K Workshop: Integrating learning of Representations and models with deductive Reasoning that leverages Knowledge",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arpit Sharma, Vivek Kaul, and Shankara B Subra- manya. 2018. Semantic representation and parsing for voice ecommerce. In R2K Workshop: Integrat- ing learning of Representations and models with de- ductive Reasoning that leverages Knowledge (Co- located with KR 2018).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Deep Learning Architectures of the Implementations. Implementation 1 is Bi-LSTM+FeedForward+GRU and Implementation 2 is Bi-LSTM+FeedForward"
},
"TABREF3": {
"content": "<table/>",
"text": "Inference Speed of Different Models (Averaged Over 2550 Real User Queries)",
"html": null,
"type_str": "table",
"num": null
}
}
}
}