|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:47:12.449837Z" |
|
}, |
|
"title": "Neural Multi-Task Text Normalization and Sanitization with Pointer-Generator", |
|
"authors": [ |
|
{ |
|
"first": "Van-Hoang", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Innovation Lab", |
|
"institution": "", |
|
"location": { |
|
"country": "PayPal Singapore" |
|
} |
|
}, |
|
"email": "vanguyen@paypal.com" |
|
}, |
|
{ |
|
"first": "Cavallari", |
|
"middle": [], |
|
"last": "Sandro", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "scavallari@paypal.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Text normalization and sanitization are intrinsic components of Natural Language Inferences. In Information Retrieval or Dialogue Generation, normalization of user queries or utterances enhances linguistic understanding by translating non-canonical text to its canonical form, on which many state-of-the-art language models are trained. On the other hand, text sanitization removes sensitive information to guarantee user privacy and anonymity. Existing approaches to normalization and sanitization mainly rely on hand-crafted heuristics and syntactic features of individual tokens while disregarding the linguistic context. Moreover, such context-unaware solutions cannot dynamically determine whether out-of-vocab tokens are misspelt or are entity names. In this work, we formulate text normalization and sanitization as a multi-task text generation approach and propose a neural pointer-generator network based on multihead attention. Its generator effectively captures linguistic context during normalization and sanitization while its pointer dynamically preserves the entities that are generally missing in the vocabulary. Experiments show that our generation approach outperforms both token-based text normalization and sanitization, while the pointer-generator improves the generator-only baseline in terms of BLEU4 score, and classical attentional pointer networks in terms of pointing accuracy.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Text normalization and sanitization are intrinsic components of Natural Language Inferences. In Information Retrieval or Dialogue Generation, normalization of user queries or utterances enhances linguistic understanding by translating non-canonical text to its canonical form, on which many state-of-the-art language models are trained. On the other hand, text sanitization removes sensitive information to guarantee user privacy and anonymity. Existing approaches to normalization and sanitization mainly rely on hand-crafted heuristics and syntactic features of individual tokens while disregarding the linguistic context. Moreover, such context-unaware solutions cannot dynamically determine whether out-of-vocab tokens are misspelt or are entity names. In this work, we formulate text normalization and sanitization as a multi-task text generation approach and propose a neural pointer-generator network based on multihead attention. Its generator effectively captures linguistic context during normalization and sanitization while its pointer dynamically preserves the entities that are generally missing in the vocabulary. Experiments show that our generation approach outperforms both token-based text normalization and sanitization, while the pointer-generator improves the generator-only baseline in terms of BLEU4 score, and classical attentional pointer networks in terms of pointing accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Early Natural Language Processing (NLP) faced the long-standing limitation of human language understanding, mainly due to linguistic morphology or the wide variance of word forms. Therefore, a crucial requirement to obtain outstanding performance for modern NLP systems is the availability of \"standardized\" textual data (Guyon et al., 1996; Rahm and Do, 2000) . Standardizing or normalizing textual data reduces the domain complexity, hence improves the generalization of the learned model. However, there are challenges to automatic text normalization. Natural language is by nature evolving, e.g. Urban Dictionary 1 is a crowdsourced online dictionary for slang words and phrases not typically found in a standard dictionary, but used in an informal setting such as text messages or social media posts. Moreover, abbreviations and emojis allow humans to express rich and informative content with few characters, but troubles machine understanding. Finally, humans are prone to spelling errors while writing or typing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 321, |
|
"end": 341, |
|
"text": "(Guyon et al., 1996;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 360, |
|
"text": "Rahm and Do, 2000)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Due to the reasons mentioned above, developers have designed pre-processing techniques to normalise textual data, including spell correction, tokenisation, stemming, lemmatization and partof-speech tagging. During the years, multiple libraries have been proposed to facilitate such preprocessing steps: e.g. NLTK (Bird, 2006) , spaCy 2 or Stanford Core NLP (Manning et al., 2014) . However, as textual domains vary greatly from medical records, legal documents to social media posts, there is no single solution or a fixed set of preprocessing steps for text normalization. Thus, up to date, defining a pre-processing pipeline remains an art form which requires a significant engineering effort. While researchers can define hard-policies to eliminate all noisy textual data, they also considerably reduce the amount of information available to the model, thus limit its performance. Such pruning approach appears problematic in the industry where engineers tackle domain-specific problems are given a relatively limited noisy textual dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 325, |
|
"text": "(Bird, 2006)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 379, |
|
"text": "(Manning et al., 2014)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
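{
"text": "To make the pre-processing discussion above concrete, the following is a minimal token-level pipeline sketch using NLTK; the choice of steps and the example sentence are ours, purely for illustration, and are not prescribed by the cited libraries.

import nltk
from nltk.stem import WordNetLemmatizer

# Resources needed by the tokenizer and the lemmatizer.
nltk.download('punkt', quiet=True)
nltk.download('wordnet', quiet=True)

def normalize_tokens(text):
    # Lowercase, tokenize, then lemmatize every token independently,
    # ignoring sentence-level context (the limitation discussed in this section).
    lemmatizer = WordNetLemmatizer()
    return [lemmatizer.lemmatize(token) for token in nltk.word_tokenize(text.lower())]

print(normalize_tokens('My cards were refused'))  # ['my', 'card', 'were', 'refused']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},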
|
{ |
|
"text": "Enterprises also have to comply with multiple policies concerning privacy. Thus, they are re- Table 1 : Example of well formatted text correctly masked with simple regex rules. Note that all the reported credit card number are artificially generated.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 101, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Masked Text i need to delete my credit card 5496-9579-4394-2954 i need to delete my credit card **** the refund will post to your credit card ending in (8077) in the next 3-5 business days the refund will post to your credit card ending in (****) in the next 3-5 business days quired to mask or remove sensitive information rather than cache them inside data centers. Such sensitive information includes credit card numbers, email addresses and Social Security Number (SSN). Note that sanitation issues not only arise during an offline storage/backup process of user-generated content, but they might also happen in real-time. For example, it is common for big enterprises to outsource customer services, like live-chat or chatbot systems, to third parties. Thus, there is the need to remove all the sensitive information before expose the input text to any third party to prevent information leakage. At the same time, the semantic meaning of a customer's request has to be preserved to deliver good customer support. Enterprises have traditionally addressed sanitization by defining heuristics. Such an approach is effective over well-defined text such as official documents and notes. As shown in Tab. 1, carefully designed regex rules are able to properly mask content following a specific pattern, e.g. credit card numbers, from a document 3 . Instead, in an informal setting regex rules can fail due to the presence of typos or sensitive information whose syntax is not accounted in the predefined patterns; for example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unmasked Text", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 \"my card ending -4810 has being refused.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unmasked Text", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 \"i want to cancel my last transaction 6 9 0 8 2 0 5 7 3 D 1 4 8 0 4 3 3.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unmasked Text", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "On the other hand, rules-based approaches, begin semantic-unaware, tend to mask most of the insensitive but crucial numerical information, troubling the downstream analysis. For instance, Tab. 2 demonstrates a case when a tracking number is confused with a transaction number. Similarly, in the second case, a transaction amount is confused with a credit card number.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unmasked Text", |
|
"sec_num": null |
|
}, |
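{
"text": "For reference, a minimal sketch of the kind of regex masking rule discussed above; the two patterns below are illustrative assumptions on our part, not the rules actually deployed.

import re

# Illustrative patterns: a 16-digit card number written in four groups of four,
# and a parenthesised 4-digit 'ending in' fragment as in Tab. 1.
CARD_PATTERN = re.compile(r'(?:[0-9]{4}[- ]?){3}[0-9]{4}')
LAST4_PATTERN = re.compile(r'[(][0-9]{4}[)]')

def mask(text):
    # Replace any matching span with a fixed mask token.
    text = CARD_PATTERN.sub('****', text)
    return LAST4_PATTERN.sub('(****)', text)

print(mask('i need to delete my credit card 5496-9579-4394-2954'))
# -> i need to delete my credit card ****

As the examples above show, such patterns break down as soon as a number is typed with unusual separators, extra spaces or typos, which motivates the context-aware approach proposed in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unmasked Text",
"sec_num": null
},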
|
{ |
|
"text": "As mentioned, we claim that it is not possible to define a general heuristics that correctly cover all the corner cases while ignoring semantics. Instead, we propose a novel approach for text normalization and sanitization based on the recent advancements made in NLP, specifically in Machine Translation (MT). That is, we formulate the joint text normalization and sanitization task as learning to translate from non-canonical English to a sequence of welldefined or masked tokens. For example, Tab. 3 demonstrates how malformed texts are translated into a semantically equivalent sequence of welldefined tokens with properly masked information. To our knowledge, this is the first attempt to formulate the joint text normalization and sanitization under MT framework. In so doing, we propose a novel network architecture for MT that can solve this multi-task learning problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unmasked Text", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Moreover, we address the thorny problem of generating unseen tokens during inference in sequenceto-sequence (seq2seq) learning by making use of pointer networks (Vinyals et al., 2015; See et al., 2017; Merity et al., 2016) . In addition to the generator, we integrate the pointer network, a module that learns to directly copy a specific segment within the input text to the output sequence. Compare to previous work, our design of the pointer is novel as it learns to predict the start and end positions of the correct text segment to be copied, and is built upon the concept of multi-head attention and positional encoding (Vaswani et al., 2017) . Experiments show that using a generating-pointing mechanism improves normalization performance compared to a pure generating mechanism. Our model can correctly identify and preserve most named entities contained in the input text, potentially benefits downstream analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 183, |
|
"text": "(Vinyals et al., 2015;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 201, |
|
"text": "See et al., 2017;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 222, |
|
"text": "Merity et al., 2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 625, |
|
"end": 647, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unmasked Text", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The introduction of word embeddings (Hinton et al., 1986; Mikolov et al., 2013; Goldberg and Levy, 2014) has produced a gigantic leap forward for most NLP-related task. Traditional problems such as vector sparsity and word interaction were solved by a simple, yet effective, methodology that exploits a large corpus rather than a sophisticated algorithm. However, such methods are limited by the challenge of inferring embeddings for words unobserved at training time, i.e. Out-Of-Vocabulary (OOV). Such scenarios are common in many socialmedia related applications where the input text is generated in real-time. Thus, the user's malformed language might affect downstream performance (Hutto and Gilbert, 2014). Another solution is to include all the misspelling words in the training dataset or to impose similar embeddings for all n-character variations of a canonical word. This, would not scale well due to the sheer amount of such non-canonical terms; thus researchers have studied the spelling correction problem since long time (Church and Gale, 1991; Brill and Moore, 2000) . However, traditional approaches are based on a word-per-word basis; which has shown acceptable results when applied to formal languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 57, |
|
"text": "(Hinton et al., 1986;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 58, |
|
"end": 79, |
|
"text": "Mikolov et al., 2013;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 80, |
|
"end": 104, |
|
"text": "Goldberg and Levy, 2014)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1036, |
|
"end": 1059, |
|
"text": "(Church and Gale, 1991;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1060, |
|
"end": 1082, |
|
"text": "Brill and Moore, 2000)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There have been many robust approaches to token-level spelling correction and lemmatization. The pioneering work done by Han and Baldwin demonstrated that micro-phonetic similarity could provide valuable insight to correct the spelling in an informal context, as many of these relaxed spellings are often based on the word's phonetic, e.g thr for there or d for the. Monoise (van der Goot, 2019a) generates feature-engineered n-character candidates for a misspelt word not found in the vocabulary and ranks them using a Random Forest Classifier. However, to accurately identify misspelt words, let alone normalizing them, optimal approaches need to consider the whole contextual semantics rather than the word-level morphology. For example, the utterance Can I speak to a reel person? is not misspelt at word-level as every word is a valid English word. However, if we consider sentence-level semantics, reel should be normal-ized into real. To factor in such contextual signals, recent advancements in NLP has considered these sequential nature of a written language as well as the long-term dependencies present in sentences. Thus, the research community has proposed different methodologies to perform micro-text normalisation based on deep learning (Min and Mott, 2015; Edizel et al., 2019; Gu et al., 2019; Satapathy et al., 2019) . While we address the problem of text normalisation in the NLP context, it has also been adopted as a key component for speech applications (Sproat and Jaitly, 2016; Zhang et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1253, |
|
"end": 1273, |
|
"text": "(Min and Mott, 2015;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1274, |
|
"end": 1294, |
|
"text": "Edizel et al., 2019;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1295, |
|
"end": 1311, |
|
"text": "Gu et al., 2019;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1312, |
|
"end": 1335, |
|
"text": "Satapathy et al., 2019)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1477, |
|
"end": 1502, |
|
"text": "(Sproat and Jaitly, 2016;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1503, |
|
"end": 1522, |
|
"text": "Zhang et al., 2019)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Pointer Network was first proposed to solve geometric problems where the size of the output classes is a variable not conforming to the fixed multi-label classification of traditional seq2seq learning (Vinyals et al., 2015) . Pointer Network becomes widely adopted in many NLP tasks including machine translation (Gulcehre et al., 2016) , abstractive summarization (See et al., 2017) and language modeling (Merity et al., 2016) as it aids accurate reproduction of factual details such as unseen proper nouns commonly treated as OOVs. However, existing works formulate the pointing operation as a single position classification task that returns one word (token) position in the encoding sequence to be copied to the decoding sequence. Such formulation is no longer suitable for our char-to-word strategy. Furthermore, with the recent state-of-the-art in seq2seq learning introduced by the Transformer architecture, there has not been a comprehensive comparison between different attention strategies, i.e. the classical attention mechanisms (Luong et al., 2015) and multihead attention (Vaswani et al., 2017) on this pointing objective.", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 223, |
|
"text": "(Vinyals et al., 2015)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 313, |
|
"end": 336, |
|
"text": "(Gulcehre et al., 2016)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 383, |
|
"text": "(See et al., 2017)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 427, |
|
"text": "(Merity et al., 2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1041, |
|
"end": 1061, |
|
"text": "(Luong et al., 2015)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1086, |
|
"end": 1108, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Finally, none of the previous research considered the joint privacy-preserving issue, which is common in commercial NLIs such as virtual agents for customer services. To the best of our knowledge, (S\u00e1nchez et al., 2012 ) is the first model that attempted to solve the sanitization problem at a semantic level, without using a rule-based approach (Sweeney, 1996) . However, the former approaches are based on manually defined policies that are application and context-specific or are limited to named entities; thus are not generalizable across domains and applications. why it is my transaction whith id <msk> on hold ? I can't enter the tracking number 781243692BSD0433 for a refund. i can not enter the tracking number <unk> for a refund", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 218, |
|
"text": "(S\u00e1nchez et al., 2012", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 361, |
|
"text": "(Sweeney, 1996)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Neural seq2seq models (Sutskever et al., 2014; Cho et al., 2014; Vaswani et al., 2017) became the de facto standard for machine translation systems. Such models are composed by an encoderdecoder architecture which takes an input sequence x = [x 1 , ..., x M ] and generate the desired output sequence y = [y 1 , ..., y N ] according to the conditional probability distribution P gen \u03b8 (y|x), where \u03b8 stands for the model parameters. Due to their well-designed factorisation of P gen \u03b8 (y|x) based on an autoregressive approach:", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 46, |
|
"text": "(Sutskever et al., 2014;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 47, |
|
"end": 64, |
|
"text": "Cho et al., 2014;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 65, |
|
"end": 86, |
|
"text": "Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "P gen \u03b8 (y|x) = N t=1 P \u03b8 (y t |y t\u22121 , ..., y 1 , x).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(1) seq2seq models have been proven capable of solving the translation task with outstanding results. However, in the traditional MT settings x and y are tokens' sequences of different languages, instead, in our context y represents the same input sentence, but rewritten in a formal and anonymised language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In addition to the next token generation objective, we formulate the pointing objective as outputting two sequences of start positions u s = [u s 1 , ..., u s N ] and end positions u e = [u e 1 , ..., u e N ] of the input encoding sequence where u s i , u e i \u2208 [1, ..., M \u2212 1]. Similar to y, u s and u e are chosen according to the conditional probability distributions P pt-start \u03b8 (u|x) and P pt-end \u03b8 (u|x) which can be factored as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P pt-start \u03b8 (u e |x) = N t=1 P pt-start \u03b8 (u s t |y t\u22121 , ..., y 1 , x),", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "P pt-end \u03b8 (u s |x) = N t=1 P pt-end \u03b8 (u e t |y t\u22121 , ..., y 1 , x).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(3) Note that the factorisation proposed in Eq. 2 (and 3), convert the intractable estimation of u s conditioned on x in a sequence of classification tasks over the sequence length (M ) predicting u s t based on the previous predictions y <t .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Finally, we learn the optimal \u03b8 by maximizing the joint likelihood of the distribution for generative normalisation and sanitisation, P gen \u03b8 (y|x), and the distribution for pointing to the start and end positions for normalisation, P", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 * = argmin \u03b8 \u2212 T t=1 \u0177 t log P gen \u03b8 (y t |y <t , x) +\u00fb s t log P pt-start \u03b8 (u s t |y <t , x) +\u00fb e t log P pt-end \u03b8 (u e t |y <t , x) .", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
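{
"text": "A minimal sketch of how the objective in Eq. 4 can be computed in practice, assuming per-step logits produced by one generator head and two pointer heads; the tensor names, shapes and the use of PyTorch are our own assumptions, not the authors' implementation.

import torch.nn.functional as F

def joint_loss(gen_logits, start_logits, end_logits, y, u_start, u_end):
    # gen_logits:   (T, vocab_size) scores over the output word vocabulary at each step
    # start_logits: (T, M) scores over input positions for the start pointer
    # end_logits:   (T, M) scores over input positions for the end pointer
    # y, u_start, u_end: (T,) gold token ids and gold start/end positions
    loss_gen = F.cross_entropy(gen_logits, y)
    loss_start = F.cross_entropy(start_logits, u_start)
    loss_end = F.cross_entropy(end_logits, u_end)
    # Sum of the three cross-entropy terms, mirroring Eq. 4.
    return loss_gen + loss_start + loss_end",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},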
|
{ |
|
"text": "4 Proposed Method", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "It is possible to formalise the text normalisation task as a seq2seq problem, where malformed English is translated in well-defined English. In literature seq2seq (Sutskever et al., 2014 ) models and the similar Memory Networks (Gulcehre et al., 2017; Weston et al., 2014; Graves et al., 2014) have been widely applied to multiple tasks such as machine translation (Vaswani et al., 2017; Cho et al., 2014) , language inference (Sukhbaatar et al., 2015; Devlin et al., 2018; , question answering (Devlin et al., 2018; and more. Still, in most cases, the model is expected to serve at a single granularity level: i.e. sequence of words to sequence of words (W2W), char-to-char (C2C) or subword-to-subword (Sw2Sw). While this guarantees consistency, these approaches are not suitable for our application. On the one hand, the limited vocabulary size is the main advantage of a C2C approach, but it is more computationally expensive and might generate misspelt words. On the other hand, a W2W setting is affected by the huge vocabulary size and by the OOV problem, but it guarantees grammatically correct words. Thus, we propose to use a char-to-word (C2V) strategy, where the input sequence is handled as a string of characters, but the output is generated as a distribution over well-formed words. Such a design enables us to handle any input string, solving the problems related to spelling errors while certifying well-formed output. However, it imposes also some challenges; e.g how to embed conceptually different objects in the same low dimensional space, or how to learn time dependencies inside a long sequence of characters are only the major problems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 186, |
|
"text": "(Sutskever et al., 2014", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 251, |
|
"text": "(Gulcehre et al., 2017;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 252, |
|
"end": 272, |
|
"text": "Weston et al., 2014;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 293, |
|
"text": "Graves et al., 2014)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 387, |
|
"text": "(Vaswani et al., 2017;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 405, |
|
"text": "Cho et al., 2014)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 452, |
|
"text": "(Sukhbaatar et al., 2015;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 473, |
|
"text": "Devlin et al., 2018;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 495, |
|
"end": 516, |
|
"text": "(Devlin et al., 2018;", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generator", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As shown in Fig. 1 , given the input embedding of a sequence of characters x = [x 1 , ..., x M ], we can formally define an encoder as:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 18, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generator", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "q, k, v = xW q , xW k , xW v ,", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Generator", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "z = f (q, k, v)", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Generator", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where f (\u2022) is a bidirectional Transformer as defined in (Devlin et al., 2018) . Similarly, given the output embedding of a sequence of words y = [y 1 , ..., y N ] the decoder is defined as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 78, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generator", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "q , k , v = yW q , yW k , yW v ,", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Generator", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h t = f (q , k , v , z)", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Generator", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where f (\u2022) is a traditional Transformer decoder applied in an auto-regressive settings as in (Vaswani et al., 2017) . Note that, we are differentiate from the original implementation as we adopt a C2W approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 116, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generator", |
|
"sec_num": "4.1" |
|
}, |
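{
"text": "A minimal sketch of a char-to-word encoder-decoder in the spirit of Eqs. 5-8, built on standard Transformer layers; the hyperparameters follow Sec. 5, but the module names are ours, positional encodings are omitted for brevity, and this is not the authors' exact implementation.

import torch.nn as nn

class C2WTransformer(nn.Module):
    # Characters in, words out: the encoder reads a character sequence,
    # the decoder produces a distribution over a word vocabulary.
    def __init__(self, n_chars, n_words, d_model=100, n_heads=4, n_layers=5):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_model)
        self.word_emb = nn.Embedding(n_words, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=n_heads,
            num_encoder_layers=n_layers, num_decoder_layers=n_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, n_words)

    def forward(self, chars, words):
        # chars: (B, M) character ids; words: (B, N) word ids used for teacher forcing.
        z = self.transformer.encoder(self.char_emb(chars))
        mask = self.transformer.generate_square_subsequent_mask(words.size(1))
        h = self.transformer.decoder(self.word_emb(words), z, tgt_mask=mask)
        return self.out(h), z, h  # word logits plus the encoder and decoder states",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "4.1"
},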
|
{ |
|
"text": "We address the limitation of generating unseen tokens in our design of the pointer network. As our generator module predicts a token from a fixed dictionary (vocabulary), it fails to normalise OOVs. We add a pointer module to our neural network that allows it to copy a segment of the input text if an unknown word is detected. Although previous works designed their pointer module to point to a single position, for our char-to-word learning problem where each position indicates a character, we propose to jointly point to a start and an end position, while coping all characters in-between. As the output token often consists of consecutive characters, this strategy effectively avoids copying a long continuous character sequence over multiple steps.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Formally, at the decoder timestep t, we learn to output the start position u s t and end position u e t by maximisation of Eq. 2 and Eq. 3 respectively. The pointer distribution for the start position is a function of the encoder representation z and the decoder representation h t at t, or P pt-start \u03b8 (u s t |y <t , x) = g s (h t , z). Given that, we can formally define the attention mechanism of the Transformer architecture as: Our pointer distribution can be formulated as the attention probability of the last decoder hidden state at timestep t towards each position of the encoder hidden state z. Specifically, we treat h t as the query vector q; while z is the key sequence k in Eq. 10. We derive the probability of the i-th position of the encoding sequence being the start position as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "attn i (q, k) = softmax( q i \u2022 k T i \u221a d K ),", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Pointer", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "z = [attn 0 (q, k), ..., attn N (q, k)]W O", |
|
"eq_num": "(" |
|
} |
|
], |
|
"section": "Pointer", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "g s i (h t , z) = [attn 0 (h t , z), ..., attn N (h t , z)] i W s .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Notice that unlike the original multihead attention, we did not concern about the value sequence v, but we directly use the attention output to detect the pointing position. Similarly, we define the probability of the j-th position being the end position to copy as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "g e j (h t , z) = [attn 0 (h t , z), ..., attn N (h t , z)] j W e .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer", |
|
"sec_num": "4.2" |
|
}, |
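{
"text": "A minimal sketch of the start/end pointer just described: per-position multi-head attention weights between the decoder state h_t and the encoder states z are combined by W_s and W_e into start and end scores. The module and variable names are ours and this is only one possible reading of the formulation, not the authors' exact code.

import math
import torch
import torch.nn as nn

class StartEndPointer(nn.Module):
    # Scores every encoder position as a candidate start or end of the span to copy.
    def __init__(self, d_model, n_heads=4):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_start = nn.Linear(n_heads, 1)  # W_s: combine per-head weights into a start score
        self.w_end = nn.Linear(n_heads, 1)    # W_e: combine per-head weights into an end score

    def forward(self, h_t, z):
        # h_t: (B, d) decoder state at step t; z: (B, M, d) encoder states.
        B, M, _ = z.shape
        q = self.w_q(h_t).view(B, self.n_heads, 1, self.d_head)
        k = self.w_k(z).view(B, M, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-1, -2) / math.sqrt(self.d_head), dim=-1)  # (B, H, 1, M)
        per_position = attn.squeeze(2).transpose(1, 2)  # (B, M, H): each position's weight under each head
        start_scores = self.w_start(per_position).squeeze(-1)  # (B, M)
        end_scores = self.w_end(per_position).squeeze(-1)      # (B, M)
        return start_scores, end_scores",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pointer",
"sec_num": "4.2"
},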
|
{ |
|
"text": "We conducted 3 experiments to verify the effectiveness of our proposed model. 1) For improved joint normalization and sanitization, we compare our context-aware model with: 1.1) a traditional tokenlevel lemmatizer and spelling corrector, and 1.2) a LSTM W2W encoder-decoder model. 2) For improved normalization of proper nouns, we compare our multi-head attentional pointer-generator with 2.1) a generator-only and a pointer-only baseline, and 2.2) the traditional attention encoder-decoder model. 3) Finally, to address the utility of text normalization we evaluate the performance's improvement obtained on a text classification task with or without text normalization. The seq2seq transformer architecture we used has 4 attention heads and 5 layers with 100 hidden units. The maximum number input characters and output words are 600 and 300 respectively. During evaluation we maintain a beam size of 3. We determine the correct positions for the pointer network by matching any output word to its character list if the characters appear consecutively in the input character sequence, and noting the start and end position of that character list. Words whose characters are not found consecutively are assign a start and end position of 0 (the beginning of the sequence). We fix the start and end position to the nearest left and right space respectively in the input character sequence to select a complete word. We use the pointer output instead of the generator output whenever the predicted probability for generation is less than 0.6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
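{
"text": "A minimal sketch of the position-labelling heuristic just described, with the fallback to position 0 and the snap-to-space adjustment; the function and variable names are ours.

def pointer_targets(input_chars, output_words):
    # input_chars: the input utterance as a plain string of characters.
    # output_words: the list of gold output tokens.
    targets = []
    for word in output_words:
        start = input_chars.find(word)
        if start == -1:
            # Characters not found consecutively: point to the beginning of the sequence.
            targets.append((0, 0))
            continue
        end = start + len(word) - 1
        # Snap to the nearest left and right spaces so that a complete word is selected.
        left = input_chars.rfind(' ', 0, start) + 1
        right = input_chars.find(' ', end)
        right = len(input_chars) - 1 if right == -1 else right - 1
        targets.append((left, right))
    return targets

print(pointer_targets('cancel transaction 781243692BSD0433 please', ['781243692BSD0433']))  # [(19, 34)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},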
|
{ |
|
"text": "We conducted the experiments on two datasets:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "\u2022 The former dataset contains conversations occurred between a customer and a live-chat agent. Human annotators provide the normalized and sanitized version as ground turth. We will refer to this as the Conversational dataset and use it for the evaluation of the first two experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "\u2022 The later one contains utterances collected from a task-oriented chatbot service where customers interact with an agent to solve 27 possible tasks. Each utterance has been manually inspected and assigned to one of the possible class. We will refer to this as the Classification datasets and we will adopt it for the last experiments in Sec. 6.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We report the dataset statistics in Tab. 4 and the detailed descriptions in Sec. 8.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We adopted two baselines to benchmark the ability of the proposed model in the task of normalizing and sanitizing a sentence. The first baseline, Monoise (van der Goot, 2019b) -a lexical normalization tool, is adopted to confirm our model's effectiveness over token-based approaches. Monoise performs normalization via two subtasks: candidate generation and candidate ranking. The first subtask uses heuristics to select potential normalized forms of each token, including nearest neighbors in word embedding space, edit distance and phonetic distance, and crafted lookup list derived from training, and more. The second subtask first engineers features for each candidate, including word embedding distance, n-gram probability, character order, and more. This baseline is used to demonstrate the improvement of our approaches over a heuristic token-based model not only in terms of effectiveness but also efficiency. The second baseline, LSTM implemented using Fairseq (Ott et al., 2019) , is used to highlight the effectiveness of our char-to-word Transformer-based proposal over traditional word-to-word RNN.", |
|
"cite_spans": [ |
|
{ |
|
"start": 970, |
|
"end": 988, |
|
"text": "(Ott et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Based on the previous research done in the MT filed, we report the test performances of normalization and sanitization in terms of BLEU4 and Word Error Rate (Klakow and Peters, 2002 ) (WER). The experiment results are described in Tab. 5.", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 181, |
|
"text": "(Klakow and Peters, 2002", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We also (2.1) compare the performance of our proposed pointer-generator model against generator-only model in text normalisation objective and, (2.2) compare multi-head attention against classical attention mechanisms described in a previous work (Luong et al., 2015) . The alternative attention formulation considered for benchmarking are:", |
|
"cite_spans": [ |
|
{ |
|
"start": 247, |
|
"end": 267, |
|
"text": "(Luong et al., 2015)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 General attention g s i (h t , z, \u03b8 s ) = h T t W s z i g e i (h t , z, \u03b8 e ) = h T t W e z i \u2022 Concat attention g s i (h t , z, \u03b8 s ) = v T s tanh(W s [h T t , z i ]) g e i (h t , z, \u03b8 s ) = v T e tanh(W e [h T t , z i ]).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.2" |
|
}, |
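{
"text": "For concreteness, a minimal sketch of the two alternative scoring functions listed above; the module names and tensor shapes are ours.

import torch
import torch.nn as nn

class GeneralScore(nn.Module):
    # General attention: score_i = h_t^T W z_i
    def __init__(self, d_model):
        super().__init__()
        self.w = nn.Linear(d_model, d_model, bias=False)

    def forward(self, h_t, z):
        # h_t: (B, d), z: (B, M, d) -> scores (B, M)
        return torch.einsum('bd,bmd->bm', self.w(h_t), z)

class ConcatScore(nn.Module):
    # Concat attention: score_i = v^T tanh(W [h_t, z_i])
    def __init__(self, d_model):
        super().__init__()
        self.w = nn.Linear(2 * d_model, d_model, bias=False)
        self.v = nn.Linear(d_model, 1, bias=False)

    def forward(self, h_t, z):
        h = h_t.unsqueeze(1).expand(-1, z.size(1), -1)  # broadcast h_t to every encoder position
        return self.v(torch.tanh(self.w(torch.cat([h, z], dim=-1)))).squeeze(-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},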
|
{ |
|
"text": "Note that, for an overall comparison of the different network architectures considered we used the BLEU4 score. Instead, to evaluate the pointing mechanisms, we compute the accuracy score of the start and end position w.r.t. the correct text's segment, as well as the improved F 1 score of the proposed model and baselines. The experiment results are described in Tab. 7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Finally, the classification is done using a linear classifier with a bag-of-word approach; which is a common settings in the industry. The performance are evaluated in terms of accuracy and F 1 score. The results are reported in Tab. 9.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "6 Results and Evaluation 6.1 Generator At the macro-scopic level, all translation models, i.e. LSTM and the proposed Transformer-based out-perform Monoise. Specifically, our model outperforms Monoise by 0.045 absolute margin or reduces the error by 33 times in terms of BLEU4 score. In terms of WER, the result is performance is consistent where our model reduces the error by 0.02 or by 29 times. Overall, this highlights the improvement of context-aware translation models from context-unaware token-based lemmatizers. We also highlight the superiority of our Transformer-based architecture over the RNN baseline. On normalization task, it is able to reduce LSTM's error by approximately 3 times in terms of BLEU4 and 2 times in terms of WER. On Sanitization task, the proposed model consistently reduces LSTM's error by approximately 1.5 times in terms of both BLEU4 and WER.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For a deeper understanding of the models behavior, we examine the results at micro-scopic level in Tab. 6. We observe that in the first example, Monoise, being unaware of the context, normalize shipping address to ship address. This can be confusing as the phrase shipping address specifically means the delivery address of a package, while ship address possibly means the docking location of a large watercraft. Instead, the proposed model is able to consider the contextual information such as my order, indicating a package to be delivered, and leave the word shipping as it is. In the second example, Monoise leaves the word real unlemmatized as reel is an existing English word. However, when we factor in the context of virtual agent and the followed word person, normalizing reel as real is more sensible. Overall, also the analysis of the last example demonstrate how the proposed model is able to consider the semantics of an utterance; which eventualy lead to a better results w.r.t. a token-based approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "As shown in Tab. 7, all the networks with pointing capabilities outperform the Generator-only baseline in terms of BLEU score. Multihead Pointer-Generator improves Generator-only model by the shipping address is incorrect on my order .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "ship address is incorrect on my order .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "can i speak with a reel person? can i speak with a real person ? can i speak with a reel person ? i had money that was refunded to me and i tried sending to my bank account but its was on hold i have money that was refund to me and i try send it to my bank account but it was on hold i have money that was refund to me and i try send it to my bank account but it is was on hold Table 7 : Performance of our proposed Multihead Pointer-Generator versus baselines in text normalisation. The Pointer Start Acc. and Pointer End Acc. denote the accuracy of each system in pointing to the correct start and end position. The Generating F 1 denotes the F 1 score of each system in generating the correct next token. largest absolute margin of .0074 or 15.8% error reduction, compared with .0061 absolute margin or 10.89% error reduction and 0.006 absolute margin or 10.68% error reduction from General and Concat Pointer-Generator respectively. These statics confirm our hypothesis that jointly using a pointing and generating mechanism improves the performance of neural models. Moreover, our Multihead Pointer-Generator being highly compatible with the end-to-end transformer-based architecture is the most effective amongst the proposed pointerbased models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 378, |
|
"end": 385, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pointer", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We further seek to understand the improvement brought about by our proposed Multihead Pointer-Generator by examining its accuracy in pointing to the correct start and end positions of the text segment to be copied. Experiment results from Pointer Start Acc. and Pointer End Acc. Table 8 suggest that there is no significant difference in pointing to the correct positions between the three pointer models. However, the Multihead Pointer-Generator shows a performance boost in terms of F 1 score, where our proposed model enhances the Generator baseline by 0.0091 absolute margin or 12.02% error reduction. This is significantly higher than the changes brought about by General (+1.84% error), and Concat (+4.89% error). This implies that our network design is capable of enhancing a traditional Generator-only module when applied to the text normalisation tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 286, |
|
"text": "Table 8", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pointer", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Finally, we want to evaluate if text normalization and sanitization is also beneficial for downstream tasks. That is, we hypothesized that a normalized and sanitized utterance would be easier to process by another model such as a text classifier. Re-cent advancement in NLP claims that BERT-like models can easily overcome limitations related to misspelling errors due to their tokenization and pre-training process. However, such models are computational expensive thus are not yet widely adopted in commercial applications that require high-throughput like chatbot services. It has to be noted that our Conversational dataset contains a broader set of topics and a more variegate lexicon than this dataset. Thus, for this experiment, we directly apply the best performing model of task 1 and 2 to obtain a normalized and sanitized version of the input utterances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Tab. 9 reports the impact of text normalization and sanitization on a downstream text classification task in our NLI that requires strong natural language understanding. Overall, our proposed model yields a relative improvement of +1.08% in terms of accuracy and +4.67% in terms of F 1 score. This indicates that text normalization is beneficial in detecting the classes characterized by a limited amount of training examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "We addressed the importance of context awareness in joint normalization and sanitization. We verified our C2W Transformer-based model's quality over context-unaware word-level lemmatizer and traditional W2W seq-to-seq model at both macroscopic and microscopic level. Moreover, we tackled the limitation of representing and producing OOVs during generation with a pointer-generator that learns to copy the relevant text segments from the source input to the translated output. Experiments at both macroscopic and microscopic level verified improved normalization and sanitization fluency previously limited by OOVs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Our formulation of text normalization as a learning-to-translate problem avoids the tedious engineering of domain specific preprocessing heuristics for textual data. The proposal of pointergenerator is highly generalizable to other NLP tasks such as summarization or machine translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "As described in Sec. 5.1, we adopted two distinct datasets in our evaluation. Here we are going to describe their characteristics and the annotation process used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Details", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "As over-mentioned, this dataset is formed by utterances collected from a chat service where clients interact with customers service agents. Note that such conversations happen in real-time. Thus they contain a huge variety of topics as well as a huge lexicon. Clients can access this chat service from any device, this translate in many syntactic errors present in the utterances as well as an informal language. All the above considerations suggest that many customers adopt mobile devices to interact with these services. The topics covered in such conversations can vary from issues related to financial services to trust problem, which involves third parties not directly participating in the conversations or general chitchatting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversational Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Human annotator has been used to reduce each word o its canonical form, i.e. lemmas. In contrast, misspelt words and sensitive/personal information are corrected or masked according to the contextual meaning of the conversation. Note that this labelling process contains little uncertainty; thus, we used a single annotator per utterance to maximise the dataset size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversational Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The second dataset is a traditional text classification dataset collected from a task-oriented chatbot system where customers can interact with a chatbot agent to solve 27 possibles task. Note that the user interface is equal for both dataset, but in this case, instead of a human agent, there is a chatbot agent. It has to be noted that we collect only the first utterance typed from the customer since it is the only part needed to classify the customer's need on the 27 classes correctly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3 skilled annotators have manually annotated each utterance, and we have discharged all the utterances that do not present 100% of agreement. The classes used in this dataset are a subset of the topics appearing in the previous dataset. For example, we have classes related to transactions status, transactions that are declined, dispute for item not received, scam emails or problems related to the ac-count of a customer. Note that, if the chatbot is not able to address the customer's need the conversation would be redirected to an human agent. Thus, a system able to normalize and sanitize utterances from the live-chat service (Conversational dataset), would be directly applicable also to this dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.urbandictionary.com/ 2 https://spacy.io/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that all the personal information have been anonimized.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "NLTK: The Natural Language Toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "69--72", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1225403.1225421" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Bird. 2006. NLTK: The Natural Language Toolkit. In Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions, pages 69-72, Syd- ney, Australia. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "An improved error model for noisy channel spelling correction", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Robert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "286--293", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Brill and Robert C Moore. 2000. An improved er- ror model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting on Associa- tion for Computational Linguistics, pages 286-293. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1406.1078" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Probability scoring for spelling correction", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Kenneth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William A", |
|
"middle": [], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Statistics and Computing", |
|
"volume": "1", |
|
"issue": "2", |
|
"pages": "93--103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth W Church and William A Gale. 1991. Proba- bility scoring for spelling correction. Statistics and Computing, 1(2):93-103.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Transformer-xl: Attentive language models beyond a fixed-length context", |
|
"authors": [ |
|
{ |
|
"first": "Zihang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "William", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Quoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1901.02860" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive lan- guage models beyond a fixed-length context. arXiv preprint arXiv:1901.02860.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Misspelling oblivious word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Bora", |
|
"middle": [], |
|
"last": "Edizel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aleksandra", |
|
"middle": [], |
|
"last": "Piktus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Ferreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabrizio", |
|
"middle": [], |
|
"last": "Silvestri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1905.09755" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bora Edizel, Aleksandra Piktus, Piotr Bojanowski, Rui Ferreira, Edouard Grave, and Fabrizio Silvestri. 2019. Misspelling oblivious word embeddings. arXiv preprint arXiv:1905.09755.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "word2vec explained: deriving mikolov et al.'s negativesampling word-embedding method", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1402.3722" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Goldberg and Omer Levy. 2014. word2vec explained: deriving mikolov et al.'s negative- sampling word-embedding method. arXiv preprint arXiv:1402.3722.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Monoise: A multi-lingual and easy-to-use lexical normalization tool", |
|
"authors": [ |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Van Der Goot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "201--206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rob van der Goot. 2019a. Monoise: A multi-lingual and easy-to-use lexical normalization tool. In Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics: System Demon- strations, pages 201-206.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "MoNoise: A multi-lingual and easy-to-use lexical normalization tool", |
|
"authors": [ |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Van Der Goot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "201--206", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-3032" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rob van der Goot. 2019b. MoNoise: A multi-lingual and easy-to-use lexical normalization tool. In Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics: System Demon- strations, pages 201-206, Florence, Italy. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Neural turing machines", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Wayne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivo", |
|
"middle": [], |
|
"last": "Danihelka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1410.5401" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Pointing the unknown words", |
|
"authors": [ |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungjin", |
|
"middle": [], |
|
"last": "Ahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramesh", |
|
"middle": [], |
|
"last": "Nallapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1603.08148" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallap- ati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. arXiv preprint arXiv:1603.08148.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Memory augmented neural networks with wormhole connections", |
|
"authors": [ |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarath", |
|
"middle": [], |
|
"last": "Chandar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1701.08718" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caglar Gulcehre, Sarath Chandar, and Yoshua Ben- gio. 2017. Memory augmented neural net- works with wormhole connections. arXiv preprint arXiv:1701.08718.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Discovering informative patterns and data cleaning", |
|
"authors": [ |
|
{ |
|
"first": "Isabelle", |
|
"middle": [], |
|
"last": "Guyon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nada", |
|
"middle": [], |
|
"last": "Matic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Vapnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Isabelle Guyon, Nada Matic, Vladimir Vapnik, et al. 1996. Discovering informative patterns and data cleaning.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Lexical normalisation of short text messages: Makn sens a# twitter", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "368--378", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Han and Timothy Baldwin. 2011. Lexical normal- isation of short text messages: Makn sens a# twit- ter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies-Volume 1, pages 368- 378. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Learning distributed representations of concepts", |
|
"authors": [ |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Proceedings of the eighth annual conference of the cognitive science society", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geoffrey E Hinton et al. 1986. Learning distributed representations of concepts. In Proceedings of the eighth annual conference of the cognitive science so- ciety, volume 1, page 12. Amherst, MA.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Vader: A parsimonious rule-based model for sentiment analysis of social media text", |
|
"authors": [ |
|
{ |
|
"first": "Clayton", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Hutto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Gilbert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Eighth international AAAI conference on weblogs and social media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clayton J Hutto and Eric Gilbert. 2014. Vader: A par- simonious rule-based model for sentiment analysis of social media text. In Eighth international AAAI conference on weblogs and social media.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Testing the correlation of word error rate and perplexity", |
|
"authors": [ |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jochen", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Speech Communication", |
|
"volume": "38", |
|
"issue": "1-2", |
|
"pages": "19--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dietrich Klakow and Jochen Peters. 2002. Testing the correlation of word error rate and perplexity. Speech Communication, 38(1-2):19-28.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Effective approaches to attention-based neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1412--1421", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neu- ral machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1412-1421.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mc-Closky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Association for Computational Linguistics (ACL) System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Pointer sentinel mixture models", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Merity", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bradbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.07843" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture mod- els. arXiv preprint arXiv:1609.07843.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Ncsu sas wookhee: a deep contextual long-short term memory model for text normalization", |
|
"authors": [ |
|
{ |
|
"first": "Wookhee", |
|
"middle": [], |
|
"last": "Min", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bradford", |
|
"middle": [], |
|
"last": "Mott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Workshop on Noisy User-generated Text", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "111--119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wookhee Min and Bradford Mott. 2015. Ncsu sas wookhee: a deep contextual long-short term memory model for text normalization. In Pro- ceedings of the Workshop on Noisy User-generated Text, pages 111-119.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "fairseq: A fast, extensible toolkit for sequence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Data cleaning: Problems and current approaches", |
|
"authors": [ |
|
{ |
|
"first": "Erhard", |
|
"middle": [], |
|
"last": "Rahm", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong", |
|
"middle": [ |
|
"Hai" |
|
], |
|
"last": "Do", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "IEEE Data Eng. Bull", |
|
"volume": "23", |
|
"issue": "4", |
|
"pages": "3--13", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erhard Rahm and Hong Hai Do. 2000. Data cleaning: Problems and current approaches. IEEE Data Eng. Bull., 23(4):3-13.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Detecting sensitive information from textual documents: an information-theoretic approach", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "S\u00e1nchez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Montserrat", |
|
"middle": [], |
|
"last": "Batet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Viejo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "International Conference on Modeling Decisions for Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "173--184", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David S\u00e1nchez, Montserrat Batet, and Alexandre Viejo. 2012. Detecting sensitive information from textual documents: an information-theoretic approach. In International Conference on Modeling Decisions for Artificial Intelligence, pages 173-184. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Seq2seq deep learning models for microtext normalization", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Satapathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Cavallari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Cambria", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "2019 International Joint Conference on Neural Networks (IJCNN)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Satapathy, Y. Li, S. Cavallari, and E. Cambria. 2019. Seq2seq deep learning models for microtext normal- ization. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Get to the point: Summarization with pointergenerator networks", |
|
"authors": [ |
|
{ |
|
"first": "Abigail", |
|
"middle": [], |
|
"last": "See", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1073--1083", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Rnn approaches to text normalization: A challenge", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Sproat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navdeep", |
|
"middle": [], |
|
"last": "Jaitly", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.00068" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Sproat and Navdeep Jaitly. 2016. Rnn ap- proaches to text normalization: A challenge. arXiv preprint arXiv:1611.00068.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "End-to-end memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Sainbayar", |
|
"middle": [], |
|
"last": "Sukhbaatar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Fergus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2440--2448", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440-2448.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104-3112.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Replacing personallyidentifying information in medical records, the scrub system", |
|
"authors": [ |
|
{ |
|
"first": "Latanya", |
|
"middle": [], |
|
"last": "Sweeney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the AMIA annual fall symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Latanya Sweeney. 1996. Replacing personally- identifying information in medical records, the scrub system. In Proceedings of the AMIA annual fall symposium, page 333. American Medical Informat- ics Association.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Pointer networks", |
|
"authors": [ |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Meire", |
|
"middle": [], |
|
"last": "Fortunato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navdeep", |
|
"middle": [], |
|
"last": "Jaitly", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2692--2700", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural In- formation Processing Systems, pages 2692-2700.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Xlnet: Generalized autoregressive pretraining for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zihang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.08237" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- ing for language understanding. arXiv preprint arXiv:1906.08237.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Neural models of text normalization for speech applications", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Sproat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Axel", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Stahlberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaochang", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Gorman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Computational Linguistics", |
|
"volume": "45", |
|
"issue": "2", |
|
"pages": "293--337", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Zhang, Richard Sproat, Axel H Ng, Felix Stahlberg, Xiaochang Peng, Kyle Gorman, and Brian Roark. 2019. Neural models of text normal- ization for speech applications. Computational Lin- guistics, 45(2):293-337.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Model architecture. The left part represent a bidirectional encode based on the Transformer architectuGre, while the right part represent an auto-regressive decoder with pointing capabilities also based on the Transformer architecture. Note that, for each decoder timestep, the probabilities of the ith position in the encoder being the start and end positions are calculated from the start and end pointer distribution. The pointer and vocabulary distribution are derived from the encoder hidden states of the input text and decoder hidden states of the partial output text." |
|
}, |
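The figure caption above describes how, at each decoder timestep, start and end pointer distributions over encoder positions are combined with a vocabulary distribution. The snippet below is only a minimal sketch of that mixing step, not the authors' implementation: the gate p_gen, the tensor names, and the choice to average the start and end distributions into a single copy distribution are all assumptions made for illustration.

```python
# Minimal sketch: combine a generator (vocabulary) distribution with a
# pointer (copy) distribution for one decoder timestep. All variable names
# are illustrative assumptions, not the paper's notation.
import torch
import torch.nn.functional as F


def mix_pointer_and_vocab(vocab_logits, start_logits, end_logits,
                          src_token_ids, p_gen):
    """vocab_logits:  (batch, vocab_size) generator scores
    start_logits:  (batch, src_len)    score of each source position as span start
    end_logits:    (batch, src_len)    score of each source position as span end
    src_token_ids: (batch, src_len)    vocabulary ids of the source tokens (int64)
    p_gen:         (batch, 1)          probability of generating vs. copying
    """
    vocab_dist = F.softmax(vocab_logits, dim=-1)
    # Assumption: collapse the start/end pointer distributions into one copy
    # distribution by averaging them over source positions.
    copy_dist = 0.5 * (F.softmax(start_logits, dim=-1) +
                       F.softmax(end_logits, dim=-1))
    # Scatter the copy probability mass onto the source tokens' vocabulary ids.
    copy_over_vocab = torch.zeros_like(vocab_dist)
    copy_over_vocab.scatter_add_(1, src_token_ids, copy_dist)
    # Final mixture: generate with probability p_gen, copy otherwise.
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_over_vocab
```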
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Examples of text over-masked due to regex application.", |
|
"num": null, |
|
"content": "<table><tr><td>Over-Masked Text</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Example of malformed input text is normalized in the output. Note that the tracking number is mapped to an unknown token while the transaction id is masked for security/privacy reasons. Why it is my transaction with id 781243692BSD0433 on hold ?", |
|
"num": null, |
|
"content": "<table><tr><td>Input Text</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "10) where d K stands for the output dimension of W k , [\u2022, ..., \u2022] is the concatenation of N different attention heads and W O is a linear transformation.", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
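The fragment above is the tail of the paper's multi-head attention definition (its Equation 10). As a reading aid only, a reconstruction of that step in standard notation, assuming it follows the usual scaled dot-product formulation of Vaswani et al. (2017), is:

```latex
% Assumed reconstruction of the multi-head attention step referenced above
% (the paper's Eq. (10)); exact sub/superscripts may differ in the source.
\[
\begin{aligned}
\mathrm{head}_i &= \operatorname{softmax}\!\left(\frac{Q W_i^{Q}\,(K W_i^{K})^{\top}}{\sqrt{d_K}}\right) V W_i^{V},\\
\operatorname{MultiHead}(Q, K, V) &= \big[\mathrm{head}_1, \ldots, \mathrm{head}_N\big]\, W^{O},
\end{aligned}
\]
```

Here d_K is the output dimension of W^K, the brackets denote concatenation of the N heads, and W^O is the final linear transformation, matching the caption fragment.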
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "The datasets' statistics used for evaluation.", |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">Conversational Classification</td></tr><tr><td>Total size</td><td>66151</td><td>17851</td></tr><tr><td>Training Set</td><td>54110</td><td>6851</td></tr><tr><td>Validation Set</td><td>6020</td><td>5500</td></tr><tr><td>Test Set</td><td>6021</td><td>5500</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Performance of our proposed Transformer versus baseline in text normalization and sanitization.", |
|
"num": null, |
|
"content": "<table><tr><td>Systems</td><td colspan=\"2\">Normalization BLEU4 WER</td><td colspan=\"2\">Sanitization BLEU4 WER</td></tr><tr><td>Monoise</td><td>0.9536</td><td>0.0206</td><td>-</td><td>-</td></tr><tr><td>LSTM</td><td>0.9955</td><td>0.0015</td><td>0.9827</td><td>0.0076</td></tr><tr><td>Transformer</td><td/><td/><td/><td/></tr><tr><td>(Our model)</td><td>0.9986</td><td>0.0007</td><td>0.9880</td><td>0.0052</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Test examples highlight the behaviours of different methods. Note that, misspelled phrases are highlighted in red and correctly normalised phrases are highlighted in blue.", |
|
"num": null, |
|
"content": "<table><tr><td>Input Text</td><td>Transformer (Our model)</td><td>Monoise</td></tr><tr><td>shipping address is incorrect on my or-</td><td/><td/></tr><tr><td>der.</td><td/><td/></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Test examples highlight the behaviours of different methods.Note that: misspelled phrases are highlighted in red, correctly normalised phrases are highlighted in blue, mishandled OOVs are highlighted in gray, and, correctly pointed OOVs are highlighted in green. connect my venmo account with hsbc and citibank account ? followw PayPal on twitter follow PayPal on UNK .followw PayPal on twitter . follow PayPal on twitter .", |
|
"num": null, |
|
"content": "<table><tr><td>Input Text</td><td>Generator-only Output</td><td>Pointer-only Ouput</td><td>Pointer-Generator Output</td></tr><tr><td>sennd mony from PayPal</td><td>send money from PayPal to</td><td>sennd mony from PayPal to</td><td>send money from PayPal to</td></tr><tr><td>to venmo account</td><td>UNK account .</td><td>venmo account .</td><td>venmo account .</td></tr><tr><td>how can I conect my</td><td>how can i connect my UNK</td><td>how can i conect my venmo</td><td>how can i</td></tr><tr><td>venmo account with hsbc</td><td>account with hub and UNK</td><td>account with hsbc and</td><td/></tr><tr><td>and citibank account?</td><td>account ?</td><td>citibank account ?</td><td/></tr></table>" |
|
}, |
|
"TABREF8": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Classification performances with and without text normalized and sanitized input.", |
|
"num": null, |
|
"content": "<table><tr><td>Systems</td><td colspan=\"2\">Accuracy F1 score</td></tr><tr><td>with text-norm</td><td>0.7696</td><td>0.7175</td></tr><tr><td>without text-norm.</td><td>0.7583</td><td>0.6855</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |