{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:33:15.372110Z" }, "title": "\"Are you calling for the vaporizer you ordered?\" Combining Search and Prediction to Identify Orders in Contact Centers", "authors": [ { "first": "Shourya", "middle": [], "last": "Roy", "suffix": "", "affiliation": {}, "email": "shourya.roy@flipkart.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "With the growing footprint of ecommerce worldwide, the role of contact center is becoming increasingly crucial for customer satisfaction. To effectively handle scale and manage operational cost, automation through chatbots and voice-bots are getting rapidly adopted. With customers having multiple, often long list of active orders-the first task of a voicebot is to identify which one they are calling about. Towards solving this problem which we refer to as order identification, we propose a two-staged real-time technique by combining search and prediction in a sequential manner. In the first stage, analogous to retrieval-based question-answering, a fuzzy search technique uses customized lexical and phonetic similarity measures on noisy transcripts of calls to retrieve the order of interest. The coverage of fuzzy search is limited by no or limited response from customers to voice prompts. Hence, in the second stage, a predictive solution that predicts the most likely order a customer is calling about based on certain features of orders is introduced. We compare with multiple relevant techniques based on word embeddings as well as ecommerce product search to show that the proposed approach provides the best performance with 64% coverage and 87% accuracy on a large real-life data-set. A system based on the proposed technique is also deployed in production for a fraction of calls landing in the contact center of a large ecommerce provider; providing real evidence of operational benefits as well as increased customer delight.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "With the growing footprint of ecommerce worldwide, the role of contact center is becoming increasingly crucial for customer satisfaction. To effectively handle scale and manage operational cost, automation through chatbots and voice-bots are getting rapidly adopted. With customers having multiple, often long list of active orders-the first task of a voicebot is to identify which one they are calling about. Towards solving this problem which we refer to as order identification, we propose a two-staged real-time technique by combining search and prediction in a sequential manner. In the first stage, analogous to retrieval-based question-answering, a fuzzy search technique uses customized lexical and phonetic similarity measures on noisy transcripts of calls to retrieve the order of interest. The coverage of fuzzy search is limited by no or limited response from customers to voice prompts. Hence, in the second stage, a predictive solution that predicts the most likely order a customer is calling about based on certain features of orders is introduced. We compare with multiple relevant techniques based on word embeddings as well as ecommerce product search to show that the proposed approach provides the best performance with 64% coverage and 87% accuracy on a large real-life data-set. 
A system based on the proposed technique is also deployed in production for a fraction of calls landing in the contact center of a large ecommerce provider; providing real evidence of operational benefits as well as increased customer delight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With increasing penetration of ecommerce, reliance on and importance of contact centers is increasing. While emails and automated chat-bots are gaining popularity, voice continues to be the overwhelming preferred communication medium leading to mil-lions of phone calls landing at contact centers. Handling such high volume of calls by human agents leads to hiring and maintaining a large employee base. Additionally, managing periodic peaks (owing to sale periods, festive seasons etc.) as well as hiring, training, monitoring make the entire process a demanding operation. To address these challenges as well as piggybacking on recent progress of NLP and Dialog Systems research, voice-bots are gaining popularity. Voice-bot is a common name of automated dialog systems built to conduct task-oriented conversations with callers. They are placed as the first line of response to address customer concerns and only on failure, the calls are transferred to human agents. Goodness of voicebots, measured by automation rate, is proportional to the fraction of calls it can handle successfully end-to-end.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Customers' contacts in ecommerce domain are broadly about two types viz. for general enquiry about products before making a purchase and post purchase issue resolution; with overwhelming majority of contacts are of the latter type. For post purchase contacts, one of the first information that a voice-bot needs to gather is which product the customer is calling about. The most common practice has been to enumerate all products she has purchased, say in a reverse chronological order, and asking her to respond with her choice by pressing a numeric key. This is limiting in two important ways. Firstly, it limits the scope to a maximum of ten products which is insufficient in a large fraction of cases. Secondly and more importantly, listening to a long announcement of product titles to select one is a time-consuming and tedious customer experience.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce the problem of order identification and propose a technique to identify or predict the product of interest for which a cus- Table 1 : Examples of top matches from fuzzy search and predictive model. The first column shows transcribed customer utterances and the second column shows all active orders at the time of the call with the top match from fuzzy search emphasized. The examples under Predictive Model shows the most likely order at the time of the call along with top-k features leading to the prediction. tomer has contacted the contact center 1 . We do it in a natural and efficient manner based on minimal or no explicit additional input from the customer through a novel combination of two complementary approaches viz. search and prediction. The system is not restricted by the number of products purchased even over a long time period. 
It has been shown to be highly accurate with 87% accuracy and over 65% coverage in a real-life and noisy environment.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 159, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "After customer verification, a question was introduced in the voice-bot flow \"Which product you are calling about?\". Her response was recorded and transcribed by an automatic speech recognition (ASR) system to text in realtime. We modeled the search problem as a task to retrieve the most matching product considering this response as a query over the search space of all active products represented as a set of product attributes e.g. title, description, brand, color, author etc. While simple in formulation, the task offers a few practical challenges. Customers do not describe their products in a standard manner or as it is described in the product catalog. For example, to describe a \"SAMSUNG Galaxy F41 (Fusion Blue, 128 GB) (6 GB RAM)\" phone, they may say F41, Galaxy F41, mobile, phone, mobile phone, cellphone, Samsung mobile, etc. (more examples can be seen in Table1). Secondly, the responses from customers varied widely from being heavily code-mixed to having only fillers (ummms, aahs, whats etc.) to blank responses. This is complemented owing to the background noise as well as imperfections in ASR systems. Finally, in a not so uncommon scenario, often customers' active orders include multiple instances of the same product, minor variations thereof (e.g. in color), or related products which share many attributes (e.g. charger for \"SAMSUNG Galaxy F41 (Fusion Blue, 128 GB) (6 GB RAM)\") which are indistinguishable from their response to the prompt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose an unsupervised n-gram based fuzzy search based on a round of pre-processing followed by custom lexical and phonetic similarity metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In spite of its simplicity, this solution achieves 32% coverage with an accuracy of 87%, leveraging the relatively small search space. The custom nature of this solution achieves much higher accuracy compared to more sophisticated general purpose product search available on ecommerce mobile apps and websites. This simple technique also does not require additional steps such as named entity recognition (NER) which has been used for product identification related work in literature (Wen et al., 2019) . Additionally, NER systems' performance are comparatively poor on ASR transcripts owing to high degree of recognition and lexical noise (e.g. missing capitalization etc) (Yadav et al., 2020) .", "cite_spans": [ { "start": 485, "end": 503, "text": "(Wen et al., 2019)", "ref_id": "BIBREF14" }, { "start": 675, "end": 695, "text": "(Yadav et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While fuzzy search works with high accuracy, its coverage is limited owing to various mentioned noise in the data. We observed that about 25% of callers did not answer when asked to identify the product they were calling about. To overcome this challenge, we introduced a complementary solution based on predictive modeling which does not require explicit utterances from customers. 
In simple words, the model creates a ranking of active orders on the basis of likelihood of a customer calling about them. This is based on the intuition that certain characteristics of orders make them more likely to call about e.g. a return, an orders which was supposed to be delivered on the day of calling etc. Based on such features of orders and customer profile, a random forest model gives prediction accuracy of 72%, 88% and 94% at top-1, 2, and 3. For high confidence predictions, the voice-bot's prompt is changed to \"Are you calling for the you ordered?\" For right predictions, not only it reduces the duration of the call, also increases customer delight by the personalized experience. In combination, fuzzy search and predictive model cover 64.70% of all voice-bot calls with an accuracy of 87.18%. Organization of the paper: The rest of the paper is organized as follows. Section 2 narrates the background of order identification for voicebot, sections 3 discusses the proposed approach and sections 4 and 5 discuss the datasets used in our study and experiments respectively. Section 6 briefs some of the literature related to our work before concluding in section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A typical call flow of the voice-bot would start with greeting followed by identity verification, order identification and confirmation to issue identi- fication and finally issue resolution or transfer to human agent if needed. Figure 1 shows a sample conversation between a bot and a customer with multiple active orders, where the customer is asked to identify the order she called for.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 237, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Customers' responses to this question are transcribed to text in real-time using an ASR model. There were some practical difficulties in identifying the corresponding order from the transcribed customer utterances. When asked to identify the order, customers were not sure what they had to talk about, resulting in generic responses like 'hello', 'hai', 'okay', etc. in around 23% of the calls. Some customers straightaway mentioned about the issue instead of describing the product -for eg., 'refund order', 'order return karne ke liye call kiya hai', 'mix match pick up cancel', etc. We also noticed a prevalence of blank transcripts in around 22% of the calls, mostly from customers who have interacted with the voice-bot in the past. We believe this is due to the change in the call flow of voicebot from what they have experienced in the past. Another major challenge comes from the ASR errors especially in the context of code-mixed utterances. The transcription noise especially on product tokens ('mam record' for 'memory card', 'both tropage' for 'boAt Rockerz') made it more difficult to identify the right order. Also by nature of ASR, various lexical signals like capitalization, punctuation are absent in the ASR transcribed text, thereby making the task of order search more challenging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "After the order is identified or predicted, the customer is asked to confirm the chosen order. The customer can confirm positively and continue the conversation with the voice-bot or respond negatively and fallback to human agents. 
The ideal expectation from order search is to return a single matching order but in cases where multiple similar products exist, the voice-bot may prompt again to help disambiguate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "We propose to solve the problem of order identification through two steps. In the first phase (fuzzy search), we model the problem as a retrieval-based question-answering task where customer utterance is the query and the set of active orders of the customer is the search space. Towards solving this, we employ a sequence of matching techniques leveraging lexical and phonetic similarities. In the second phase (order prediction), we build a supervised learning model to predict the likelihood of the customer calling regarding different active orders. This later phase does not depend on customer utterances and hence does not get affected by transcription inaccuracies. Both of these approaches are discussed in detail in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Approach", "sec_num": "3" }, { "text": "Given the customer utterance and the product attributes of the active orders, fuzzy search proceeds in multiple rounds of textual similarity measures like direct, partial and phonetic match to retrieve the corresponding order, customer called for. These stages are invoked sequentially until a matching order is found. Sequentiality is introduced in fuzzy search in order to maximize the coverage while keeping the false positives low. Various stages involved in fuzzy search is shown in appendix A. Various stages in fuzzy search are detailed below. We use the terms {token and word} and {utterance and query} interchangeably.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fuzzy Search", "sec_num": "3.1" }, { "text": "We observed prevalence of generic texts like hello, haan, ok in the customer utterances, which are of no use in retrieving the order. Hence, such commonly used tokens are to be removed from the query. Also, by nature of ASR, acronyms are transcribed as single letter split words for eg., a c for AC, t v for TV, etc. We followed the pre-processing steps as below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-processing", "sec_num": "3.1.1" }, { "text": "\u2022 Removal of generic tokens: The commonly used tokens are identified by taking top 5% of frequently spoken tokens and are manually examined to ensure no product specific terms are removed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-processing", "sec_num": "3.1.1" }, { "text": "\u2022 Handle split words: The split words are handled by joining continuous single letter words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-processing", "sec_num": "3.1.1" }, { "text": "After these pre-processing steps, some of the customer utterances containing only the generic tokens would become blank, such cases are considered to have no match from the active orders. For non blank processed queries, we use the following similarity measures to identify the matching order(s). Let q denote the pre-processed customer query composed of query tokens. Let {p i } P i=1 denotes list of active orders where p i denote the product title corresponding to i th order. 
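As a concrete illustration of the two pre-processing steps just described, here is a minimal Python sketch that joins split single-letter words and drops generic tokens; the stop-list shown is a small illustrative stand-in for the manually vetted list of frequent tokens, and the function names are assumptions, not the production implementation.

# Illustrative stop-list only; the deployed list is built from the top 5% most
# frequent tokens and manually vetted, as described above.
GENERIC_TOKENS = {"hello", "haan", "ok", "okay", "hai"}

def preprocess(utterance):
    """Join consecutive single-letter words (split acronyms like 'a c' -> 'ac')
    and drop generic filler tokens; an empty result means no match is attempted."""
    tokens = utterance.lower().split()
    joined, buffer = [], []
    for tok in tokens:
        if len(tok) == 1 and tok.isalpha():
            buffer.append(tok)          # collect single-letter fragments
            continue
        if buffer:
            joined.append("".join(buffer))
            buffer = []
        joined.append(tok)
    if buffer:
        joined.append("".join(buffer))
    return [t for t in joined if t not in GENERIC_TOKENS]

# e.g. preprocess("hello a c order") -> ["ac", "order"]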
Product titles are typically a concatenation of brand, model name, color, etc., e.g., 'Redmi Note 9 Pro (Aurora Blue, 128 GB) (4 GB RAM)' and 'Lakme Eyeconic Kajal (Royal Blue, 0.35 g)' are sample product titles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-processing", "sec_num": "3.1.1" }, { "text": "The objective of direct match is to handle the relatively easier queries, where the customer utterance contains product information and is transcribed without noise. Direct match looks for exact text matches between the query tokens and the tokens of the product title. Each product is assigned a score based on the fraction of query tokens that match tokens from the corresponding product title. The score for the i-th product is obtained as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direct Match", "sec_num": "3.1.2" }, { "text": "s_i = \\\frac{1}{|q|} \\\sum_{x \\\in q} \\\mathbb{1}[\\\{ y \\\in p_i : y = x \\\} \\\neq \\\emptyset]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direct Match", "sec_num": "3.1.2" }, { "text": "where \\\mathbb{1}[\\\cdot] denotes the indicator function, which is 1 if its argument is true and 0 otherwise. Product(s) with the maximum score are considered the possible candidates for a direct match. A direct match between the query and any of the products is said to occur in the following cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direct Match", "sec_num": "3.1.2" }, { "text": "\u2022 A score of 1 indicates that the product title includes all query tokens. Hence, if the maximum score is 1, all product(s) with a score of 1 are returned by direct match. \u2022 If the maximum score is less than 1, direct match is limited to single-candidate retrieval so as to avoid false positives due to similar products among the active orders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Direct Match", "sec_num": "3.1.2" },
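To make the scoring concrete, the following minimal sketch (continuing the pre-processing sketch above) implements the direct-match score and its two acceptance rules; the function name, tokenization, and data layout are illustrative assumptions rather than the production system.

def direct_match(query_tokens, titles_tokens):
    """Score each active order by the fraction of query tokens found verbatim
    in its product title, then apply the two acceptance rules described above."""
    if not query_tokens:
        return []
    scores = []
    for title_tokens in titles_tokens:
        title_set = set(title_tokens)
        matched = sum(1 for x in query_tokens if x in title_set)
        scores.append(matched / len(query_tokens))
    best = max(scores, default=0.0)
    if best == 0.0:
        return []                                     # no direct match
    candidates = [i for i, s in enumerate(scores) if s == best]
    if best == 1.0:
        return candidates                             # title covers every query token
    return candidates if len(candidates) == 1 else [] # otherwise require a unique candidate

# e.g. direct_match(preprocess("samsung galaxy f41"),
#                   [title.lower().split() for title in active_order_titles])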
{ "text": "In order to handle partial utterances of product names by customers and to account for ASR errors in product-specific terms of customer utterances, partial match is introduced. For example, partial product utterances like 'fridge' for 'refrigerator', 'watch' for 'smart watch', and ASR-misspelled utterances like 'sandel' for 'sandal' are handled by partial match. Algorithm 1 elucidates the steps in partial matching. It uses a partial similarity (Sim) between the n-grams of the query and of the product titles. We start with individual tokens and then move to bigrams, trigrams, etc., up to 4-grams. A match at a higher n is clearly more promising than one at a lower n. For example, a customer could ask for 'JBL wired headset' while the active orders include a 'boAt Wired Headset' and a 'JBL Wired Headset'. In such cases, token similarity or bigram similarity might treat both of these headsets as matching orders, whereas trigram similarity would result in the correct match; i.e., for cases with similar products among the active orders, going to a higher n helps reduce false positives, provided the customer has specified additional details to narrow down the order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Match", "sec_num": "3.1.3" }, { "text": "Algorithm 1: n-gram based partial match", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Match", "sec_num": "3.1.3" }, { "text": "Result: R
Initialize R = \\\emptyset, Q_0 = q
for n \\\in \\\{1, 2, 3, 4\\\} do
    \\\tilde{Q}_n = \\\widetilde{ngrams}(q, n, Q_{n-1}); \\\quad Q_n = \\\emptyset
    for i = 1, \\\dots, P do
        P_{in} = ngrams(p_i, n)
        s_i = \\\frac{1}{|\\\tilde{Q}_n|} \\\sum_{x \\\in \\\tilde{Q}_n} \\\mathbb{1}[\\\{ y \\\in P_{in} : Sim(x, y) > \\\theta \\\} \\\neq \\\emptyset]
        Q_n = Q_n \\\cup \\\{ x \\\in \\\tilde{Q}_n : \\\{ y \\\in P_{in} : Sim(x, y) > \\\theta \\\} \\\neq \\\emptyset \\\}
    end for
    \\\hat{p} = \\\{ p_i : s_i = \\\max_j s_j \\\}, provided \\\max_j s_j \\\neq 0
    if |\\\hat{p}| = 1 or \\\max_j s_j = 1 then R = \\\hat{p}
end for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Match", "sec_num": "3.1.3" }, { "text": "Let Q_n refer to the n-grams in the query string that had a match with any of the product n-grams, let ngrams denote a function that returns all possible n-grams of its input string, and let \\\widetilde{ngrams} return the n-grams of the query surrounding those in Q_{n-1}. For n \\\geq 2, \\\tilde{Q}_n only contains n-grams with one or more product tokens. At a particular n, we obtain a similarity score s_i for each active order based on the proportion of n-grams in \\\tilde{Q}_n that find a match with n-grams of the corresponding product title, and the orders with the maximum score are considered candidate orders (\\\hat{p}) for a successful match. At any n, matching order(s) are said to be found if the n-grams of some order match all n-grams included in \\\tilde{Q}_n, i.e., \\\max(s_i) = 1, or when there is only one candidate product, i.e., |\\\hat{p}| = 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Match", "sec_num": "3.1.3" }, { "text": "If none of the products finds a match at a higher n, the matched products as of level n-1 are considered. A threshold \\\theta on the similarity measure is imposed to decide whether two n-grams match.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial Match", "sec_num": "3.1.3" }, { "text": "ASR errors on product-specific tokens impose additional challenges in identifying the corresponding order, for example, 'in clinics hot 94' for 'Infinix Hot 9 Pro', 'mam record' for 'memory card', 'double back' for 'duffel bag', etc. To handle such queries, we consider the similarity between phonetic representations of the n-grams of the product title and those of the customer utterance. Algorithmically, phonetic match works like partial match (as in Algorithm 1), with the important difference that the similarity score (Sim) is computed on phonetic representations of the n-grams. With this, strings like 'mam record', 'memory card', 'double back' and 'duffel bag' are mapped to 'MANRACAD', 'MANARACAD', 'DABLABAC' and 'DAFALBAG' respectively. Clearly, the noisy transcribed tokens are much closer to the original product tokens in the phonetic space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonetic Match", "sec_num": "3.1.4" },
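The sketch below illustrates how the n-gram cascade of Algorithm 1 and its phonetic variant can be wired together. The similarity function (difflib ratio) and the crude phonetic key are stand-ins chosen to keep the example self-contained, since the exact Sim and phonetic encoding are not specified here; all helper names and the 0.8 threshold are assumptions, not the deployed system.

from difflib import SequenceMatcher

def sim(a, b):
    # Assumed lexical similarity; the deployed Sim may differ.
    return SequenceMatcher(None, a, b).ratio()

def phonetic_key(token):
    # Crude stand-in for the paper's phonetic representation:
    # keep the first letter, drop later vowels, collapse repeats.
    token = token.lower()
    out = [token[0]] if token else []
    for ch in token[1:]:
        if ch in "aeiou" or (out and out[-1] == ch):
            continue
        out.append(ch)
    return "".join(out).upper()

def ngrams(tokens, n):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def partial_match(query_tokens, titles_tokens, theta=0.8, phonetic=False):
    """n-gram cascade over unigrams up to 4-grams; keep the candidates found so
    far and stop refining when a unique (or fully matched) candidate emerges."""
    encode = (lambda s: " ".join(phonetic_key(t) for t in s.split())) if phonetic else (lambda s: s)
    result, prev_matched = [], set()
    for n in range(1, 5):
        q_ngrams = ngrams(query_tokens, n)
        if n > 1:  # keep only n-grams surrounding previously matched ones
            q_ngrams = [g for g in q_ngrams if any(m in g for m in prev_matched)]
        if not q_ngrams:
            break
        scores, matched = [], set()
        for tokens in titles_tokens:
            p_ngrams = [encode(g) for g in ngrams(tokens, n)]
            hits = [g for g in q_ngrams if any(sim(encode(g), pg) > theta for pg in p_ngrams)]
            scores.append(len(hits) / len(q_ngrams))
            matched.update(hits)
        best = max(scores, default=0.0)
        if best == 0.0:
            break
        candidates = [i for i, s in enumerate(scores) if s == best]
        if len(candidates) == 1 or best == 1.0:
            result = candidates
        prev_matched = matched
    return result  # indices of matching active orders (possibly empty)

Running the same routine with phonetic=True gives the phonetic match stage; in the cascade it is invoked only when the lexical stages return no match.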
{ "text": "The objective of this step is to build a predictive model that ranks a customer's active orders by the likelihood that the call is about each of them. We formulate it as a classification problem on active orders and learn a binary classifier to predict the likelihood of the customer calling about each order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order Prediction", "sec_num": "3.2" }, { "text": "The features used in the model fall into four broad categories: order-specific, transaction-specific, product-specific and self-serve related features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Engineering:", "sec_num": "3.2.1" }, { "text": "\u2022 Order-specific features include order status, is delivery due today?, is pickup pending?, is refund issued?, etc. These features are specific to the time when the customer calls.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Engineering:", "sec_num": "3.2.1" }, { "text": "\u2022 Transaction-specific features include the price of the product, shipping charges, order payment type, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Engineering:", "sec_num": "3.2.1" }, { "text": "\u2022 Product-specific features include product attributes like brand, vertical, is this a large item?, etc. These features do not depend on the time of the customer call.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Engineering:", "sec_num": "3.2.1" }, { "text": "\u2022 Self-serve features include the number of days since the last chat conversation, the number of days since the last incident creation, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Engineering:", "sec_num": "3.2.1" }, { "text": "It is important to note that the likelihood of a customer calling about an order is highly related to the features of the customer's other active orders. For example, the chances of a customer calling about an order that has just shipped are lower when there is another order whose refund has been pending for a long time. The formulation by default does not consider the relationship among the features of other active orders. We overcome this by creating derived features that bring in the relative ordering between the features of a customer's active orders. Some of the derived features include the rank of the order with respect to order date (customers are more likely to call about a recent order than older ones), whether a refund is pending for any other order, and whether there are existing complaints for other orders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Engineering:", "sec_num": "3.2.1" }, { "text": "Preprocessing: Together with these derived features, we have a total of 42 features, a mix of categorical and numerical. Low-cardinality features like order status are one-hot encoded, high-cardinality features like brand and category are label encoded, and the numerical features are standard normalized. The labels available in our dataset are at a call level. Since classification is at an order level, each order is labeled 1 or 0 depending on whether it is the order the customer called about.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Engineering:", "sec_num": "3.2.1" }, { "text": "In order to learn a binary classifier for order prediction, we experiment with various standard machine learning models: logistic regression, a tree-based ensemble model (Random Forest), and a deep learning model (Deep Neural Network, DNN); a minimal training and ranking sketch is given below. As a baseline, we compare with the reverse chronological ranking of orders with respect to the date of order. 
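A minimal sketch of how such a classifier can be trained on order-level rows and used to rank a caller's active orders follows; the scikit-learn pipeline, the feature handling, and the helper names are illustrative assumptions, and the 0.6 acceptance threshold on the top prediction mirrors the likelihood threshold discussed later rather than a fixed system parameter.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def train_order_predictor(rows):
    """rows: one record per (call, active order) with the engineered features
    (already encoded/normalized as described above) and label = 1 for the order
    the customer actually called about."""
    df = pd.DataFrame(rows)
    X = df.drop(columns=["call_id", "order_id", "label"])
    y = df["label"]
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)
    return model, list(X.columns)

def rank_active_orders(model, feature_names, active_orders, threshold=0.6):
    """Score each active order of a single call, return all orders ranked by
    likelihood, and the top order only if it clears the confidence threshold."""
    X = pd.DataFrame(active_orders)[feature_names]
    probs = model.predict_proba(X)[:, 1]
    ranked = sorted(zip(active_orders, probs), key=lambda pair: -pair[1])
    top_order, top_prob = ranked[0]
    return ranked, (top_order if top_prob >= threshold else None)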
Various hyper parameters involved in these models are tuned using grid search. More details on the range of hyper parameters considered and the chosen best hyper parameter is available in appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Details", "sec_num": "3.2.2" }, { "text": "This section discusses details of the datasets used for order search and order prediction experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4" }, { "text": "Search Dataset: In order to collect data for order search experimentation, customers with multiple active orders were asked to describe the product they are calling for. The transcribed customer utterances along with the product attributes of the orders like product title, brand, etc constitute our dataset. We had a small development dataset of about 2.5K calls and a large test set of about 95K calls. The development set was used to build and tune the fuzzy search technique. The performance on both datasets are reported in section 5.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4" }, { "text": "Prediction Dataset: The dataset for predictive modeling is collected from the live customer calls. The dataset contains features of active orders and the customer chosen order, which would serve as ground truth. We came up with a variety of features from order, product related ones to self serve related ones and are collected online or offline depending whether the feature is dependent on the time when customer calls. The features for the active orders and the customer chosen order for 150K customer calls constitute our prediction dataset. The experiments and results on this dataset is given in section 5.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4" }, { "text": "The performance of an order search algorithm is evaluated using Coverage and Accuracy. Coverage refers to the fraction of calls, where proposed technique gave a match. Among the cases where match is found, accuracy refers to the fraction of correct matches. The rest of this section discusses the experiments and results on the development and test sets of the search dataset followed by order prediction experiments and finally the performance combining search and prediction for order identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "We compare our approach fuzzy search with the following two approaches viz. manual baseline and ecommerce search. In manual baseline, customer utterances were tagged for product entities by human agents handling those calls to get a baseline on the coverage. Ecommerce search refers to the general purpose product search used by consumers on ecommerce websites and mobile apps. This latter approach relies on NER for detecting product entities, which we had done through a NER model based on conditional random fields (CRF) (Lafferty et al., 2001) . Figure 2 shows the coverage of fuzzy search vs these two approaches on the development set of search dataset. As shown, we had a total of 2515 customer utterances, of which human annotators could spot product entities only in 34% of them demonstrating the difficulty of the task. Fuzzy search and ecommerce search had a coverage of 32.1% and 17.2% respectively. 
Both fuzzy and ecommerce search have an overlap with the remaining 66% data that manual annotations couldn't cover, showing that product entity malformations due to transcription noise is overcome by these models significantly. The coverage of ecommerce search is affected by poor NER performance on noisy ASR transcripts. At this point, an attentive reader may refer back to Table 1 to see some of the sample matches returned by fuzzy search. Some more qualitative results are shown in Appendix-C to understand the gap between fuzzy and ecommerce search further. Clearly, our customized ap- proach fuzzy search performs better than ecommerce search. As a second experiment, we compare fuzzy search against text embedding based approaches fasttext based similarity modelling and fuzzy search on fasttext. The former obtains the relevance of a customer utterance to a product by taking cosine similarity between their corresponding sentence embeddings and the most relevant orders(s) with similarity over a threshold is considered. Sentence embeddings are obtained by averaging the word embeddings obtained from a Fasttext model (Bojanowski et al., 2017) trained on customer utterances and product titles. Fuzzy search on fasttext combines the benefits of customisations in fuzzy search and the semantics from fasttext, where fuzzy search is done on text embeddings, with textual similarity replaced by cosine similarity over fasttext embeddings of n-grams. Table 2 shows the coverage and accuracy of single and multiple matches retrieved by various order search approaches on live contact center calls, that constitute the test set of search dataset. Fuzzy search is found to perform better with 18.02% single order matches with an accuracy of 86.33%. Similarity modelling on fasttext is found to have lesser coverage and accuracy than fuzzy search. Decrease in accuracy is attributed to calls with multiple similar orders and the retrieval fetches one of them as a match but customer chose a different order during the call. Fuzzy search on fasttext performs on par with fuzzy search on text, showing that semantics captured by word embeddings does not add incremental value. This, we believe, is owing to the lexical nature of product titles and unambiguous context of customer utterances. Fuzzy search despite being unsupervised, experimented on development data, the coverage and accuracy hold good on real life calls as well.", "cite_spans": [ { "start": 524, "end": 547, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF7" }, { "start": 2040, "end": 2065, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 550, "end": 558, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 1288, "end": 1295, "text": "Table 1", "ref_id": null }, { "start": 2369, "end": 2376, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Order Search Results", "sec_num": "5.1" }, { "text": "Upon deep diving into the multiple order matches from fuzzy search, we found around 38% of such multiple matches had exact same product (same make and model), 49% of them were same product type -can be different model, color etc (e.g., multiple t-shirts), some of them being similar products (e.g., shampoo, hair conditioner, hair serum of the same brand). 
Multiple matches, though not acted upon by voice-bot, are still valid matches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order Search Results", "sec_num": "5.1" }, { "text": "The performance of order prediction is measured by top-k accuracy(A k ) given by the fraction of calls where the model predicted the ground truth order in top-k predictions. We use Prediction dataset with train/val/test split of 70/10/20 respectively for training order prediction models. Table 3 shows the top-k accuracy of various prediction models. Random Forest, a decision tree based ensemble model is found to perform better at both top-1 and top-2 accuracy of 72.52% and 88.71% respectively and marginally under performing than Deep Neural Network (DNN) at top-3 thereby showing the characteristics of the orders indeed decide the order, customer calls for.", "cite_spans": [], "ref_spans": [ { "start": 289, "end": 296, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Order Prediction Results", "sec_num": "5.2" }, { "text": "The reader may again refer to Table 1 where the rightmost two columns show some of the sample top-1 predictions and the features that led to such predictions by the Random Forest model. In the first example shown in table 1, among many orders, model predicted fitness band, which the customer has already initiated return process and have an existing complaint lodged. Upon looking into the top features that govern the model's predictions, we found self serve features like chat before call, existing complaints before call, etc. in addition to the rank wrt ordered date and selling price to be on top, showing that the customers explore self serve before calling. We show the Shapley Values plot of the feature importance in figure 3. We introduce a threshold on the likelihood of top ranked prediction to improve the accuracy while marginally compromising on coverage. With a threshold of 0.6, top-1 predictions from Random Forest had a coverage and accuracy of 62.5% and 84% respectively.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Order Prediction Results", "sec_num": "5.2" }, { "text": "Both order search and order prediction is also evaluated on an out-of-time data, consisting of 13K customer calls. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order Prediction Results", "sec_num": "5.2" }, { "text": "Order identification has not been much explored in the literature. Most related problem is on NER to identify product entities (Putthividhya and Hu, 2011; Joshi et al., 2015; More, 2016) . In the literature, there are many studies focused on NER for product entity extraction ranging from classical techniques (Brody and Elhadad, 2010) to recent deep learning approaches that make use of word embeddings (Majumder et al., 2018; Jiang et al., 2019) . While entity extraction from text is well researched in the literature, NER on speech is less studied. Most initial works on speech had a two staged approach -ASR followed by NER (Cohn et al., 2019) , recent works directly extract entities from speech (Ghannay et al., 2018; Yadav et al., 2020) . While NER helps in ecommerce search on websites and apps, the specific nature of order identification problem and the limited search space of active orders make NER unnecessary. Another related line of works is on sentence similarity tasks. 
Owing to the success of word embeddings (Mikolov et al., 2013; Bojanowski et al., 2017) , there is a lot of literature on textual similarity related tasks, that make use of word embed- dings in a supervised (Yao et al., 2018; Reimers and Gurevych, 2019; Shen et al., 2017) and unsupervised fashion (Arora et al., 2016) . (Wieting et al., 2015) showed that simple averaging of word embeddings followed by cosine similarity could provide competitive performance on sentence similarity tasks. We have compared word embeddings based approaches to show that additional semantics does not help in order identification problem.", "cite_spans": [ { "start": 127, "end": 154, "text": "(Putthividhya and Hu, 2011;", "ref_id": "BIBREF11" }, { "start": 155, "end": 174, "text": "Joshi et al., 2015;", "ref_id": "BIBREF6" }, { "start": 175, "end": 186, "text": "More, 2016)", "ref_id": "BIBREF10" }, { "start": 310, "end": 335, "text": "(Brody and Elhadad, 2010)", "ref_id": "BIBREF2" }, { "start": 404, "end": 427, "text": "(Majumder et al., 2018;", "ref_id": "BIBREF8" }, { "start": 428, "end": 447, "text": "Jiang et al., 2019)", "ref_id": "BIBREF5" }, { "start": 629, "end": 648, "text": "(Cohn et al., 2019)", "ref_id": "BIBREF3" }, { "start": 702, "end": 724, "text": "(Ghannay et al., 2018;", "ref_id": "BIBREF4" }, { "start": 725, "end": 744, "text": "Yadav et al., 2020)", "ref_id": "BIBREF16" }, { "start": 1028, "end": 1050, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF9" }, { "start": 1051, "end": 1075, "text": "Bojanowski et al., 2017)", "ref_id": "BIBREF1" }, { "start": 1195, "end": 1213, "text": "(Yao et al., 2018;", "ref_id": "BIBREF17" }, { "start": 1214, "end": 1241, "text": "Reimers and Gurevych, 2019;", "ref_id": "BIBREF12" }, { "start": 1242, "end": 1260, "text": "Shen et al., 2017)", "ref_id": "BIBREF13" }, { "start": 1286, "end": 1306, "text": "(Arora et al., 2016)", "ref_id": "BIBREF0" }, { "start": 1309, "end": 1331, "text": "(Wieting et al., 2015)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "6" }, { "text": "In this paper, we present one of the first studies exploring order identification for ecommerce contact centers. The proposed two-staged fuzzy search and order prediction technique provide 64% coverage at 87% accuracy on a large real-life dataset which are significantly better than manual baseline and relevant comparable techniques. Order prediction though developed for voice-bot, could also be used in other places like chat bot or non bot calls, where we can ask proactively if this is the order customer is looking for help. Finally, going beyond the scientific impact of this work, the proposed solution is also deployed in production for a fraction of calls landing in the contact center of a large ecommerce provider leading to real-life impact. Table 6 : Examples of predictions from fuzzy search and ecommerce search. First column shows the customer utterances along with the NER predictions emphasized. Second column shows all active orders at the time of call, with matching orders emphasized. 
Last column shows the correctness of order search approaches.", "cite_spans": [], "ref_spans": [ { "start": 755, "end": 762, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We use the terms order and product interchangably to mean different things customers have purchased.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A simple but tough-to-beat baseline for sentence embeddings", "authors": [ { "first": "Sanjeev", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Yingyu", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Tengyu", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2016. A simple but tough-to-beat baseline for sentence em- beddings.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An unsupervised aspect-sentiment model for online reviews", "authors": [ { "first": "Samuel", "middle": [], "last": "Brody", "suffix": "" }, { "first": "Noemie", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2010, "venue": "Human language technologies: The 2010 annual conference of the North American chapter of the association for computational linguistics", "volume": "", "issue": "", "pages": "804--812", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel Brody and Noemie Elhadad. 2010. An unsu- pervised aspect-sentiment model for online reviews. 
In Human language technologies: The 2010 annual conference of the North American chapter of the as- sociation for computational linguistics, pages 804- 812.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Audio de-identification-a new entity recognition task", "authors": [ { "first": "Ido", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Itay", "middle": [], "last": "Laish", "suffix": "" }, { "first": "Genady", "middle": [], "last": "Beryozkin", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Izhak", "middle": [], "last": "Shafran", "suffix": "" }, { "first": "Idan", "middle": [], "last": "Szpektor", "suffix": "" }, { "first": "Tzvika", "middle": [], "last": "Hartman", "suffix": "" }, { "first": "Avinatan", "middle": [], "last": "Hassidim", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "197--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Cohn, Itay Laish, Genady Beryozkin, Gang Li, Izhak Shafran, Idan Szpektor, Tzvika Hartman, Avinatan Hassidim, and Yossi Matias. 2019. Audio de-identification-a new entity recognition task. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 197-204.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Endto-end named entity extraction from speech", "authors": [ { "first": "Sahar", "middle": [], "last": "Ghannay", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Caubriere", "suffix": "" }, { "first": "Yannick", "middle": [], "last": "Esteve", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Laurent", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Morin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.12045" ] }, "num": null, "urls": [], "raw_text": "Sahar Ghannay, Antoine Caubriere, Yannick Esteve, Antoine Laurent, and Emmanuel Morin. 2018. End- to-end named entity extraction from speech. arXiv preprint arXiv:1805.12045.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Improved differentiable architecture search for language modeling and named entity recognition", "authors": [ { "first": "Yufan", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Chi", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Chunliang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3576--3581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yufan Jiang, Chi Hu, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2019. Improved differentiable ar- chitecture search for language modeling and named entity recognition. 
In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3576-3581.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Distributed word representations improve ner for e-commerce", "authors": [ { "first": "Mahesh", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Hart", "suffix": "" }, { "first": "Mirko", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Jean", "middle": [ "David" ], "last": "Ruvini", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahesh Joshi, Ethan Hart, Mirko Vogel, and Jean David Ruvini. 2015. Distributed word repre- sentations improve ner for e-commerce. In Proceed- ings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 160-167.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando Cn", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Deep recurrent neural networks for product attribute extraction in ecommerce", "authors": [ { "first": "Aditya", "middle": [], "last": "Bodhisattwa Prasad Majumder", "suffix": "" }, { "first": "Abhinandan", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Shreyansh", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Ajinkya", "middle": [], "last": "Gandhi", "suffix": "" }, { "first": "", "middle": [], "last": "More", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.11284" ] }, "num": null, "urls": [], "raw_text": "Bodhisattwa Prasad Majumder, Aditya Subramanian, Abhinandan Krishnan, Shreyansh Gandhi, and Ajinkya More. 2018. Deep recurrent neural net- works for product attribute extraction in ecommerce. arXiv preprint arXiv:1803.11284.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. 
arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Attribute extraction from product titles in ecommerce", "authors": [ { "first": "Ajinkya", "middle": [], "last": "More", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1608.04670" ] }, "num": null, "urls": [], "raw_text": "Ajinkya More. 2016. Attribute extraction from product titles in ecommerce. arXiv preprint arXiv:1608.04670.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bootstrapped named entity recognition for product attribute extraction", "authors": [ { "first": "Duangmanee", "middle": [], "last": "Putthividhya", "suffix": "" }, { "first": "Junling", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1557--1567", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duangmanee Putthividhya and Junling Hu. 2011. Boot- strapped named entity recognition for product at- tribute extraction. In Proceedings of the 2011 Con- ference on Empirical Methods in Natural Language Processing, pages 1557-1567.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sentencebert: Sentence embeddings using siamese bertnetworks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.10084" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. arXiv preprint arXiv:1908.10084.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Word embedding based correlation model for question/answer matching", "authors": [ { "first": "Yikang", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Wenge", "middle": [], "last": "Rong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Baolin", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Zhang", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "31", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yikang Shen, Wenge Rong, Nan Jiang, Baolin Peng, Jie Tang, and Zhang Xiong. 2017. Word embedding based correlation model for question/answer match- ing. 
In Proceedings of the AAAI Conference on Arti- ficial Intelligence, volume 31.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Building largescale deep learning system for entity recognition in e-commerce search", "authors": [ { "first": "Musen", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Kumar Vasthimal", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Tian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Aimin", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 6th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies", "volume": "", "issue": "", "pages": "149--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "Musen Wen, Deepak Kumar Vasthimal, Alan Lu, Tian Wang, and Aimin Guo. 2019. Building large- scale deep learning system for entity recognition in e-commerce search. In Proceedings of the 6th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies, pages 149-154.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards universal paraphrastic sentence embeddings", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Livescu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.08198" ] }, "num": null, "urls": [], "raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Towards universal para- phrastic sentence embeddings. arXiv preprint arXiv:1511.08198.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "End-to-end named entity recognition from english speech", "authors": [ { "first": "Hemant", "middle": [], "last": "Yadav", "suffix": "" }, { "first": "Sreyan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Rajiv Ratn", "middle": [], "last": "Shah", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.11184" ] }, "num": null, "urls": [], "raw_text": "Hemant Yadav, Sreyan Ghosh, Yi Yu, and Ra- jiv Ratn Shah. 2020. End-to-end named entity recognition from english speech. arXiv preprint arXiv:2005.11184.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A novel sentence similarity model with word embedding based on convolutional neural network. Concurrency and Computation: Practice and Experience", "authors": [ { "first": "Haipeng", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Huiwen", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Peiying", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haipeng Yao, Huiwen Liu, and Peiying Zhang. 2018. A novel sentence similarity model with word embed- ding based on convolutional neural network. Con- currency and Computation: Practice and Experi- ence, 30(23):e4415.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Sample conversation between voice-bot and the customer." 
}, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Comparison of coverage of fuzzy search against manual baseline and ecommerce search on development set. Vertical placements are indicative of overlaps between different sets." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Feature importance of top 20 features for Random Forest model. Features with prefix 'rank' or suffix 'for any other order' are derived features introduced to bring relation with other active orders." }, "TABREF0": { "num": null, "text": "Aiwa Professional 102 //00 High Quality 40 PCS Socket Set, Hidelink Men Brown Genuine Leather Wallet, CEDO XPRO Edge To Edge Tempered Glass for Realme XT, Realme X2, Protoner 25 kg PVC weight with 4 rods and Flat bench Home Gym Combo] [Sonata 77085PP02 Volt Analog Watch -For Men, Oxhox HBS-730 Wireless compatible with 4G redmi Headset with Mic Bluetooth Headset, MyTech With Charger M3 Smart Band Fitness Band ]", "content": "
Customer Utterance | Fuzzy Search: Product titles of active orders with fuzzy search match | Predictive Model: Product titles of active orders with top-1 match from predictive model | Predictive Model: Top-k features for the prediction
maine order kiya tha to toner 25 ke dikha to uska | [Aiwa Professional 102 //00 High Quality 40 PCS Socket Set, Hidelink Men Brown Genuine Leather Wallet, CEDO XPRO Edge To Edge Tempered Glass for Realme XT, Realme X2, Protoner 25 kg PVC weight with 4 rods and Flat bench Home Gym Combo] | [Sonata 77085PP02 Volt Analog Watch -For Men, Oxhox HBS-730 Wireless compatible with 4G redmi Headset with Mic Bluetooth Headset, MyTech With Charger M3 Smart Band Fitness Band] | number of days since return initiation, Selling Price, is incident created in last 2 days?
i can 2000 green color mobile phone | [Surat Dream Portable Mini Sewing Machine Handheld Handy Stitch Machine Manual Cordless Electric Stitching Machine Electric Sewing Machine, I Kall K1000 (Green, 64 GB), I Kall K1000 (Blue, 64 GB)] | [Whirlpool 7.5 kg 5 Star, Hard Water wash Fully Automatic Top Load Grey, Asian Running Shoes For Women] | days since incident creation, days since last chat, rank wrt selling price
blue 2 dead phone ke liye | [Hardys Full Sleeve Solid Men Jacket, Brawo Party Wear Party Wear For Men, T GOOD Lite SH12 Bluetooth Headset, T GOOD Lite SH12 Bluetooth Headset, SPINOZA Pink diamond studded attractive butterfly stylish women Analog Watch -For Girls] | [Ncert Chemistry Class 12 (Part 1 And 2) Combo 2 Book (K.C.G), STROM COLLECTION Men Formal Black Genuine Leather Belt, OPPO F15 (Blazing Blue, 128 GB)] | is cancelled?, is incident created in last 2 days?, number of days since return initiation
", "html": null, "type_str": "table" }, "TABREF1": { "num": null, "text": "Are you calling about your order for Samsung TV ? [Order Confirmation] yes Okay, I just checked and it looks like your order has shipped and will be delivered by November 19th. [Status Announcement] What help do you need with this order? [Issue Identification] Please deliver it today itself I understand you're looking forward to receiving your order sooner. Sorry, while faster delivery is not available, please be assured our delivery agents are delivering your order as soon as possible. [Issue Resolution] What order are you calling about today? [Order Identification]", "content": "
Welcome to <COMPANY-NAME> ecommerce!
I'm your automated support assistant. [Greeting]
regarding tv order
What else do you need help with?
Nothing, that's it
Goodbye then, Thank you for shopping with <COMPANY-NAME>!
", "html": null, "type_str": "table" }, "TABREF2": { "num": null, "text": "", "content": "
shows the coverage and
accuracy of order search and order prediction indi-
", "html": null, "type_str": "table" }, "TABREF3": { "num": null, "text": "Coverage and accuracy of single and multiple matches from order search approaches on the test set", "content": "
Model                  A1       A2       A3
Rev. Chronological 40.09 75.17 87.74
Logistic Regression 71.00 88.00 93.70
Random Forest72.52 88.71 94.09
DNN70.34 88.33 94.34
", "html": null, "type_str": "table" }, "TABREF4": { "num": null, "text": "", "content": "
: Top-k accuracies (%) of order prediction models
Approach               Coverage    Accuracy
Fuzzy Search           20.32       86.83
Order Prediction       58.1        84.46
Search + Prediction    64.7        87.18
", "html": null, "type_str": "table" }, "TABREF5": { "num": null, "text": "Performance of Search and Predictionvidually and the overall coverage by having both search and prediction in place. Order prediction resulted in an incremental coverage of 44% while maintaining same accuracy.", "content": "", "html": null, "type_str": "table" }, "TABREF7": { "num": null, "text": "Range of values for various hyper-parameters and the chosen hyper-parameter with best top-1 accuracy on validation set for various order prediction modelsC Sample predictions from fuzzy search & ecommerce search STAMEN 153 cm (5 ft) Polyester Window Curtain (Pack Of 4), Sauran 26-55 inch Heavy TV Wall Mount for all types of Fixed TV Mount Fixed TV Mount, Mi 4A 100 cm (40) Full HD LED Smart Android TV, Leemara Virus Protection, Anti Pollution, Face Mask, Reusable-Washable Outdoor Protection Cotton Safety Mask] Aspir Back Cover for Vivo V15, Mobi Elite Back Cover for Vivo V15, RUNEECH Back Camera Lens Glass Protector for VIVO V 20, Shoes Kingdom Shoes Kingdom LB791 Mocassins Casual Loafers For Men (Brown) Loafers For Men, Aspir Back Cover for Vivo V20, CatBull In-ear Bluetooth Headset] Easy Way Fashion Doll with Dresses Makeup and Doll Accessories, Vrilliance Traders Type C Compatible Fast Data Cable Charging Cable for Type C Android Devices (1.2 M,Black) 1.2 m USB Type C Cable] Highlander Full Sleeve Washed Men Jacket, Oricum Slides, BOLAX Black Slouchy woolen Long Beanie Cap for Winter skull head Unisex Cap, Oricum Slides, BOLAX Black Slouchy woolen Long Beanie Cap for Winter skull head Unisex Cap]", "content": "
Customer Ut-Product titles of active ordersComments
terance
mi tv ke baareFuzzy
meinSearch
Ecommerce
Search
coverkeFuzzy
baare meinSearch
mobile coverEcommerce
ke baare meinSearch
datacable ke[Fuzzy
liyeSearch
Ecommerce
Search
in clinics hot[JOKIN A1 MULTI FUNCTIONAL SMARTWATCH Smartwatch, Infinix Hot 9Fuzzy
94Pro (Violet, 64 GB), Vivo Z1Pro (Sonic Blue, 64 GB), Vivo Z1Pro (Sonic Blue,Search
64 GB), Vivo Z1Pro (Sonic Blue, 64 GB), Tech Unboxing Led Rechargeable Fan WithEcommerce
Torch 120 mm 3 Blade Exhaust Fan]Search
chappal ke[Fuzzy
liye paanchSearch
sau saat sattarEcommerce
pe ka productSearch
tha mera
", "html": null, "type_str": "table" } } } }