{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:47:12.449837Z" }, "title": "Neural Multi-Task Text Normalization and Sanitization with Pointer-Generator", "authors": [ { "first": "Van-Hoang", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "Innovation Lab", "institution": "", "location": { "country": "PayPal Singapore" } }, "email": "vanguyen@paypal.com" }, { "first": "Cavallari", "middle": [], "last": "Sandro", "suffix": "", "affiliation": {}, "email": "scavallari@paypal.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Text normalization and sanitization are intrinsic components of Natural Language Inferences. In Information Retrieval or Dialogue Generation, normalization of user queries or utterances enhances linguistic understanding by translating non-canonical text to its canonical form, on which many state-of-the-art language models are trained. On the other hand, text sanitization removes sensitive information to guarantee user privacy and anonymity. Existing approaches to normalization and sanitization mainly rely on hand-crafted heuristics and syntactic features of individual tokens while disregarding the linguistic context. Moreover, such context-unaware solutions cannot dynamically determine whether out-of-vocab tokens are misspelt or are entity names. In this work, we formulate text normalization and sanitization as a multi-task text generation approach and propose a neural pointer-generator network based on multihead attention. Its generator effectively captures linguistic context during normalization and sanitization while its pointer dynamically preserves the entities that are generally missing in the vocabulary. Experiments show that our generation approach outperforms both token-based text normalization and sanitization, while the pointer-generator improves the generator-only baseline in terms of BLEU4 score, and classical attentional pointer networks in terms of pointing accuracy.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Text normalization and sanitization are intrinsic components of Natural Language Inferences. In Information Retrieval or Dialogue Generation, normalization of user queries or utterances enhances linguistic understanding by translating non-canonical text to its canonical form, on which many state-of-the-art language models are trained. On the other hand, text sanitization removes sensitive information to guarantee user privacy and anonymity. Existing approaches to normalization and sanitization mainly rely on hand-crafted heuristics and syntactic features of individual tokens while disregarding the linguistic context. Moreover, such context-unaware solutions cannot dynamically determine whether out-of-vocab tokens are misspelt or are entity names. In this work, we formulate text normalization and sanitization as a multi-task text generation approach and propose a neural pointer-generator network based on multihead attention. Its generator effectively captures linguistic context during normalization and sanitization while its pointer dynamically preserves the entities that are generally missing in the vocabulary. 
Experiments show that our generation approach outperforms both token-based text normalization and sanitization, while the pointer-generator improves the generator-only baseline in terms of BLEU4 score, and classical attentional pointer networks in terms of pointing accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Early Natural Language Processing (NLP) faced the long-standing limitation of human language understanding, mainly due to linguistic morphology or the wide variance of word forms. Therefore, a crucial requirement to obtain outstanding performance for modern NLP systems is the availability of \"standardized\" textual data (Guyon et al., 1996; Rahm and Do, 2000) . Standardizing or normalizing textual data reduces the domain complexity and hence improves the generalization of the learned model. However, there are challenges to automatic text normalization. Natural language is by nature evolving: e.g., Urban Dictionary 1 is a crowdsourced online dictionary for slang words and phrases not typically found in a standard dictionary, but used in informal settings such as text messages or social media posts. Moreover, abbreviations and emojis allow humans to express rich and informative content with few characters, but trouble machine understanding. Finally, humans are prone to spelling errors while writing or typing.", "cite_spans": [ { "start": 321, "end": 341, "text": "(Guyon et al., 1996;", "ref_id": "BIBREF14" }, { "start": 342, "end": 360, "text": "Rahm and Do, 2000)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Due to the reasons mentioned above, developers have designed pre-processing techniques to normalise textual data, including spell correction, tokenisation, stemming, lemmatization and part-of-speech tagging. Over the years, multiple libraries have been proposed to facilitate such pre-processing steps, e.g. NLTK (Bird, 2006) , spaCy 2 or Stanford Core NLP (Manning et al., 2014) . However, as textual domains vary greatly, from medical records and legal documents to social media posts, there is no single solution or fixed set of pre-processing steps for text normalization. Thus, to date, defining a pre-processing pipeline remains an art form which requires significant engineering effort. While researchers can define hard policies to eliminate all noisy textual data, such policies also considerably reduce the amount of information available to the model, thus limiting its performance. Such a pruning approach is especially problematic in industry, where engineers tackling domain-specific problems are given relatively limited and noisy textual datasets.", "cite_spans": [ { "start": 313, "end": 325, "text": "(Bird, 2006)", "ref_id": "BIBREF0" }, { "start": 357, "end": 379, "text": "(Manning et al., 2014)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Enterprises also have to comply with multiple policies concerning privacy. Thus, they are required to mask or remove sensitive information rather than cache it inside data centers. Table 1 : Example of well-formatted text correctly masked with simple regex rules.
Note that all the reported credit card numbers are artificially generated.", "cite_spans": [], "ref_spans": [ { "start": 94, "end": 101, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Table 1 (Unmasked Text \u2192 Masked Text): \"i need to delete my credit card 5496-9579-4394-2954\" \u2192 \"i need to delete my credit card ****\"; \"the refund will post to your credit card ending in (8077) in the next 3-5 business days\" \u2192 \"the refund will post to your credit card ending in (****) in the next 3-5 business days\". Such sensitive information includes credit card numbers, email addresses and Social Security Numbers (SSN). Note that sanitization issues not only arise during an offline storage/backup process of user-generated content, but they might also happen in real time. For example, it is common for big enterprises to outsource customer services, like live-chat or chatbot systems, to third parties. Thus, all sensitive information needs to be removed before exposing the input text to any third party in order to prevent information leakage. At the same time, the semantic meaning of a customer's request has to be preserved to deliver good customer support. Enterprises have traditionally addressed sanitization by defining heuristics. Such an approach is effective on well-defined text such as official documents and notes. As shown in Tab. 1, carefully designed regex rules are able to properly mask content following a specific pattern, e.g. credit card numbers, from a document 3 . Instead, in an informal setting regex rules can fail due to the presence of typos or of sensitive information whose syntax is not accounted for in the predefined patterns; for example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unmasked Text", "sec_num": null }, { "text": "\u2022 \"my card ending -4810 has being refused.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unmasked Text", "sec_num": null }, { "text": "\u2022 \"i want to cancel my last transaction 6 9 0 8 2 0 5 7 3 D 1 4 8 0 4 3 3.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unmasked Text", "sec_num": null }, { "text": "On the other hand, rule-based approaches, being semantic-unaware, tend to mask most of the insensitive but crucial numerical information, hindering downstream analysis. For instance, Tab. 2 demonstrates a case in which a tracking number is confused with a transaction number. Similarly, in the second case, a transaction amount is confused with a credit card number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unmasked Text", "sec_num": null }, { "text": "As mentioned, we claim that it is not possible to define a general heuristic that correctly covers all the corner cases while ignoring semantics. Instead, we propose a novel approach to text normalization and sanitization based on recent advancements in NLP, specifically in Machine Translation (MT). That is, we formulate the joint text normalization and sanitization task as learning to translate from non-canonical English to a sequence of well-defined or masked tokens. For example, Tab. 3 demonstrates how malformed texts are translated into a semantically equivalent sequence of well-defined tokens with properly masked information. To our knowledge, this is the first attempt to formulate joint text normalization and sanitization under the MT framework. In so doing, we propose a novel network architecture for MT that can solve this multi-task learning problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unmasked Text", "sec_num": null },
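{ "text": "As a concrete illustration of the regex heuristics discussed above and of why they break on informal text, the following minimal Python sketch (our own example; the pattern and mask token are assumptions, not the rules used in the paper) masks well-formed card numbers such as the one in Table 1 but leaves the malformed mentions above untouched:\n\nimport re\n\n# Hypothetical card-number pattern in the spirit of the regex rules discussed above.\nCARD_PATTERN = re.compile(r'\\\\b(?:\\\\d{4}[- ]?){3}\\\\d{4}\\\\b')\n\ndef mask_cards(text):\n    # Replace any well-formed 16-digit card number with a mask token.\n    return CARD_PATTERN.sub('****', text)\n\nprint(mask_cards('i need to delete my credit card 5496-9579-4394-2954'))\n# -> i need to delete my credit card ****\nprint(mask_cards('my card ending -4810 has being refused.'))\n# -> unchanged: the four trailing digits do not match the expected pattern", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unmasked Text", "sec_num": null },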
{ "text": "Moreover, we address the thorny problem of generating unseen tokens during inference in sequence-to-sequence (seq2seq) learning by making use of pointer networks (Vinyals et al., 2015; See et al., 2017; Merity et al., 2016) . In addition to the generator, we integrate a pointer network, a module that learns to directly copy a specific segment of the input text to the output sequence. Compared to previous work, our design of the pointer is novel in that it learns to predict the start and end positions of the text segment to be copied, and is built upon the concepts of multi-head attention and positional encoding (Vaswani et al., 2017) . Experiments show that the generating-pointing mechanism improves normalization performance compared to a pure generating mechanism. Our model can correctly identify and preserve most named entities contained in the input text, potentially benefiting downstream analysis.", "cite_spans": [ { "start": 161, "end": 183, "text": "(Vinyals et al., 2015;", "ref_id": "BIBREF34" }, { "start": 184, "end": 201, "text": "See et al., 2017;", "ref_id": "BIBREF28" }, { "start": 202, "end": 222, "text": "Merity et al., 2016)", "ref_id": "BIBREF21" }, { "start": 625, "end": 647, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Unmasked Text", "sec_num": null }, { "text": "The introduction of word embeddings (Hinton et al., 1986; Mikolov et al., 2013; Goldberg and Levy, 2014) has produced a gigantic leap forward for most NLP-related tasks. Traditional problems such as vector sparsity and word interaction were solved by a simple, yet effective, methodology that exploits a large corpus rather than a sophisticated algorithm. However, such methods are limited by the challenge of inferring embeddings for words unobserved at training time, i.e. Out-Of-Vocabulary (OOV) words. Such scenarios are common in many social-media-related applications where the input text is generated in real time; thus, the user's malformed language might affect downstream performance (Hutto and Gilbert, 2014). Another solution is to include all misspelled words in the training dataset or to impose similar embeddings for all n-character variations of a canonical word. This would not scale well due to the sheer number of such non-canonical terms; thus researchers have studied the spelling correction problem for a long time (Church and Gale, 1991; Brill and Moore, 2000) . However, traditional approaches operate on a word-by-word basis, which has shown acceptable results when applied to formal language.", "cite_spans": [ { "start": 36, "end": 57, "text": "(Hinton et al., 1986;", "ref_id": "BIBREF16" }, { "start": 58, "end": 79, "text": "Mikolov et al., 2013;", "ref_id": "BIBREF22" }, { "start": 80, "end": 104, "text": "Goldberg and Levy, 2014)", "ref_id": "BIBREF7" }, { "start": 1036, "end": 1059, "text": "(Church and Gale, 1991;", "ref_id": "BIBREF3" }, { "start": 1060, "end": 1082, "text": "Brill and Moore, 2000)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" },
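{ "text": "As a minimal illustration (our own sketch, not taken from the paper) of why a fixed vocabulary struggles with user-generated text, a standard lookup collapses every unseen token, such as a tracking number or a creative misspelling, into a single <unk> id, discarding exactly the information a copy mechanism is meant to preserve:\n\n# Toy vocabulary; real systems hold tens of thousands of entries.\nvocab = {'<unk>': 0, 'i': 1, 'can': 2, 'not': 3, 'enter': 4, 'the': 5, 'tracking': 6, 'number': 7, 'for': 8, 'a': 9, 'refund': 10}\n\ndef encode(tokens):\n    # Every out-of-vocabulary token maps to the same <unk> id.\n    return [vocab.get(t, vocab['<unk>']) for t in tokens]\n\nprint(encode('i can not enter the tracking number 781243692BSD0433 for a refund'.split()))\n# -> [1, 2, 3, 4, 5, 6, 7, 0, 8, 9, 10]  (the tracking number is lost as id 0)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" },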
{ "text": "There have been many robust approaches to token-level spelling correction and lemmatization. The pioneering work done by Han and Baldwin demonstrated that micro-phonetic similarity could provide valuable insight for spelling correction in an informal context, as many of these relaxed spellings are based on the word's phonetics, e.g. thr for there or d for the. Monoise (van der Goot, 2019a) generates feature-engineered n-character candidates for a misspelt word not found in the vocabulary and ranks them using a Random Forest classifier. However, to accurately identify misspelt words, let alone normalize them, optimal approaches need to consider the whole contextual semantics rather than only word-level morphology. For example, the utterance Can I speak to a reel person? is not misspelt at the word level, as every word is a valid English word. However, if we consider sentence-level semantics, reel should be normalized into real. To factor in such contextual signals, recent advancements in NLP have considered the sequential nature of written language as well as the long-term dependencies present in sentences. Thus, the research community has proposed different methodologies to perform micro-text normalisation based on deep learning (Min and Mott, 2015; Edizel et al., 2019; Gu et al., 2019; Satapathy et al., 2019) . While we address the problem of text normalisation in the NLP context, it has also been adopted as a key component of speech applications (Sproat and Jaitly, 2016; Zhang et al., 2019) .", "cite_spans": [ { "start": 1253, "end": 1273, "text": "(Min and Mott, 2015;", "ref_id": "BIBREF23" }, { "start": 1274, "end": 1294, "text": "Edizel et al., 2019;", "ref_id": "BIBREF6" }, { "start": 1295, "end": 1311, "text": "Gu et al., 2019;", "ref_id": null }, { "start": 1312, "end": 1335, "text": "Satapathy et al., 2019)", "ref_id": "BIBREF27" }, { "start": 1477, "end": 1502, "text": "(Sproat and Jaitly, 2016;", "ref_id": "BIBREF29" }, { "start": 1503, "end": 1522, "text": "Zhang et al., 2019)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The Pointer Network was first proposed to solve geometric problems where the number of output classes is variable and does not conform to the fixed multi-label classification setting of traditional seq2seq learning (Vinyals et al., 2015) . Pointer Networks have become widely adopted in many NLP tasks, including machine translation (Gulcehre et al., 2016) , abstractive summarization (See et al., 2017) and language modeling (Merity et al., 2016) , as they aid accurate reproduction of factual details such as unseen proper nouns commonly treated as OOVs. However, existing works formulate the pointing operation as a single-position classification task that returns one word (token) position in the encoding sequence to be copied to the decoding sequence. Such a formulation is not suitable for our char-to-word strategy. Furthermore, with the recent state of the art in seq2seq learning introduced by the Transformer architecture, there has not been a comprehensive comparison between different attention strategies, i.e.
the classical attention mechanism (Luong et al., 2015) and multi-head attention (Vaswani et al., 2017) , on this pointing objective.", "cite_spans": [ { "start": 201, "end": 223, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF34" }, { "start": 313, "end": 336, "text": "(Gulcehre et al., 2016)", "ref_id": "BIBREF12" }, { "start": 365, "end": 383, "text": "(See et al., 2017)", "ref_id": "BIBREF28" }, { "start": 406, "end": 427, "text": "(Merity et al., 2016)", "ref_id": "BIBREF21" }, { "start": 1041, "end": 1061, "text": "(Luong et al., 2015)", "ref_id": "BIBREF19" }, { "start": 1086, "end": 1108, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Finally, none of the previous research considered the joint privacy-preserving issue, which is common in commercial NLIs such as virtual agents for customer service. To the best of our knowledge, (S\u00e1nchez et al., 2012 ) is the first work that attempted to solve the sanitization problem at a semantic level, without using a rule-based approach (Sweeney, 1996) . However, these approaches are based on manually defined policies that are application- and context-specific or are limited to named entities; thus they are not generalizable across domains and applications.", "cite_spans": [ { "start": 197, "end": 218, "text": "(S\u00e1nchez et al., 2012", "ref_id": "BIBREF26" }, { "start": 346, "end": 361, "text": "(Sweeney, 1996)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Neural seq2seq models (Sutskever et al., 2014; Cho et al., 2014; Vaswani et al., 2017) became the de facto standard for machine translation systems. Such models are composed of an encoder-decoder architecture which takes an input sequence x = [x_1, ..., x_M] and generates the desired output sequence y = [y_1, ..., y_N] according to the conditional probability distribution P^gen_\u03b8(y|x), where \u03b8 stands for the model parameters. Due to their well-designed factorisation of P^gen_\u03b8(y|x) based on an autoregressive approach:", "cite_spans": [ { "start": 22, "end": 46, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF31" }, { "start": 47, "end": 64, "text": "Cho et al., 2014;", "ref_id": "BIBREF2" }, { "start": 65, "end": 86, "text": "Vaswani et al., 2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "P^gen_\u03b8(y|x) = \u220f_{t=1}^{N} P_\u03b8(y_t | y_{t-1}, ..., y_1, x). (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" },
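{ "text": "As a concrete reading of Eq. (1), the following minimal Python sketch (our illustration; the uniform step_distribution is an assumed stand-in for the actual decoder) scores an output sequence by accumulating the per-step conditional probabilities:\n\nimport math\n\ndef step_distribution(prefix, x):\n    # Stand-in for P(y_t | y_{t-1}, ..., y_1, x) over a toy vocabulary.\n    vocab = ['i', 'can', 'not', '<mask>', '<eos>']\n    return {tok: 1.0 / len(vocab) for tok in vocab}\n\ndef sequence_log_prob(y, x):\n    # log P^gen(y|x) = sum_t log P(y_t | y_{t-1}, ..., y_1, x), as in Eq. (1).\n    return sum(math.log(step_distribution(y[:t], x)[tok]) for t, tok in enumerate(y))\n\nprint(sequence_log_prob(['i', 'can', 'not', '<eos>'], 'i cant'))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" },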
{ "text": "seq2seq models have been proven capable of solving the translation task with outstanding results. However, in the traditional MT setting, x and y are token sequences of different languages; in our context, instead, y represents the same input sentence, but rewritten in a formal and anonymised language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "In addition to the next-token generation objective, we formulate the pointing objective as outputting two sequences of start positions u^s = [u^s_1, ..., u^s_N] and end positions u^e = [u^e_1, ..., u^e_N] over the input encoding sequence, where u^s_i, u^e_i \u2208 [1, ..., M \u2212 1]. Similarly to y, u^s and u^e are chosen according to the conditional probability distributions P^pt-start_\u03b8(u^s|x) and P^pt-end_\u03b8(u^e|x), which can be factored as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P^pt-start_\u03b8(u^s|x) = \u220f_{t=1}^{N} P^pt-start_\u03b8(u^s_t | y_{t-1}, ..., y_1, x),", "eq_num": "(2)" } ], "section": "Problem Formulation", "sec_num": "3" }, { "text": "P^pt-end_\u03b8(u^e|x) = \u220f_{t=1}^{N} P^pt-end_\u03b8(u^e_t | y_{t-1}, ..., y_1, x). (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "Note that the factorisation proposed in Eq. 2 (and Eq. 3) converts the intractable estimation of u^s conditioned on x into a sequence of classification tasks over the sequence length (M), predicting u^s_t based on the previous predictions y