{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:29:53.056988Z" }, "title": "Don't Parse, Insert: Multilingual Semantic Parsing with Insertion Based Decoding", "authors": [ { "first": "Qile", "middle": [], "last": "Zhu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Florida", "location": {} }, "email": "" }, { "first": "Haidar", "middle": [], "last": "Khan", "suffix": "", "affiliation": { "laboratory": "Amazon Alexa AI", "institution": "", "location": {} }, "email": "" }, { "first": "Saleh", "middle": [], "last": "Soltan", "suffix": "", "affiliation": { "laboratory": "Amazon Alexa AI", "institution": "", "location": {} }, "email": "ssoltan@amazon.com" }, { "first": "Stephen", "middle": [], "last": "Rawls", "suffix": "", "affiliation": { "laboratory": "Amazon Alexa AI", "institution": "", "location": {} }, "email": "sterawls@amazon.com" }, { "first": "Wael", "middle": [], "last": "Hamza", "suffix": "", "affiliation": { "laboratory": "Amazon Alexa AI", "institution": "", "location": {} }, "email": "waelhamz@amazon.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Semantic parsing is one of the key components of natural language understanding systems. A successful parse transforms an input utterance to an action that is easily understood by the system. Many algorithms have been proposed to solve this problem, from conventional rulebased or statistical slot-filling systems to shiftreduce based neural parsers. For complex parsing tasks, the state-of-the-art method is based on autoregressive sequence to sequence models to generate the parse directly. This model is slow at inference time, generating parses in O(n) decoding steps (n is the length of the target sequence). In addition, we demonstrate that this method performs poorly in zero-shot cross-lingual transfer learning settings. In this paper, we propose a non-autoregressive parser which is based on the insertion transformer to overcome these two issues. Our approach 1) speeds up decoding by 3x while outperforming the autoregressive model and 2) significantly improves cross-lingual transfer in the low-resource setting by 37% compared to autoregressive baseline. We test our approach on three well-known monolingual datasets: ATIS, SNIPS and TOP. For cross lingual semantic parsing, we use the MultiATIS++ and the multilingual TOP datasets.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Semantic parsing is one of the key components of natural language understanding systems. A successful parse transforms an input utterance to an action that is easily understood by the system. Many algorithms have been proposed to solve this problem, from conventional rulebased or statistical slot-filling systems to shiftreduce based neural parsers. For complex parsing tasks, the state-of-the-art method is based on autoregressive sequence to sequence models to generate the parse directly. This model is slow at inference time, generating parses in O(n) decoding steps (n is the length of the target sequence). In addition, we demonstrate that this method performs poorly in zero-shot cross-lingual transfer learning settings. In this paper, we propose a non-autoregressive parser which is based on the insertion transformer to overcome these two issues. 
Our approach 1) speeds up decoding by 3x while outperforming the autoregressive model and 2) significantly improves cross-lingual transfer in the low-resource setting by 37% compared to autoregressive baseline. We test our approach on three well-known monolingual datasets: ATIS, SNIPS and TOP. For cross lingual semantic parsing, we use the MultiATIS++ and the multilingual TOP datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Given a query, a semantic parsing module identifies not only the intent (play music, book a flight) of the query but also extracts necessary slots (entities) that further refines the action to perform (which song to play? Where or when to go?). A traditional rulebased or slot-filling system classifies a query with one intent and tags each input token (Mesnil et al., 2013) . However, supporting more complex queries that are composed of multiple intents and nested slots is a challenging problem (Gupta et al., 2018) . * Work done while interning at Amazon Alexa Gupta et al. (2018) and Einolghozati et al. (2019) propose to use a Shift-Reduce parser based on Recurrent Neural Network for these complex queries. Recently, Rongali et al. (2020) propose directly generating the parse as a formatted sequence and design a unified model based on sequence to sequence generation and pointer networks. Their approach formulates the tagging problem into a generation task in which the target is constructed by combining all the necessary intents and slots in a flat sequence with no restriction on the semantic parse schema.", "cite_spans": [ { "start": 353, "end": 374, "text": "(Mesnil et al., 2013)", "ref_id": "BIBREF19" }, { "start": 498, "end": 518, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF10" }, { "start": 565, "end": 584, "text": "Gupta et al. (2018)", "ref_id": "BIBREF10" }, { "start": 589, "end": 615, "text": "Einolghozati et al. (2019)", "ref_id": "BIBREF6" }, { "start": 724, "end": 745, "text": "Rongali et al. (2020)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A relatively unexplored direction is the crosslingual transfer problem (Duong et al., 2017; Susanto and Lu, 2017) , where the parsing system is trained in a high-resource language and transfered directly to a low-resource language (zero-shot).", "cite_spans": [ { "start": 71, "end": 91, "text": "(Duong et al., 2017;", "ref_id": "BIBREF5" }, { "start": 92, "end": 113, "text": "Susanto and Lu, 2017)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The state-of-the-art model leverages the autoregressive decoder such as Transformer (Vaswani et al., 2017) and Long-Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to generate the target sequence (representing the parse) from left to right. The left to right autoregressive generation constraint has two drawbacks: 1) generating a parse takes O(n) decoding time, where n is the length of the target sequence. This is further exacerbated when paired with standard search algorithms such as beam search. 
2) In the cross-lingual setting, autoregressive parsers have difficulty transferring between languages.", "cite_spans": [ { "start": 84, "end": 106, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF32" }, { "start": 141, "end": 175, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A recent direction in machine translation and natural language generation to speed up sequence to sequence models is non-autoregressive decoding (Stern et al., 2019; Gu et al., 2018 Gu et al., , 2019 ). Since the parsing task in the sequence to sequence framework only requires inserting tags rather than generating the whole sequence, an insertion based parser is both faster and more natural for language transfer than an autoregressive parser.", "cite_spans": [ { "start": 145, "end": 165, "text": "(Stern et al., 2019;", "ref_id": "BIBREF26" }, { "start": 166, "end": 181, "text": "Gu et al., 2018", "ref_id": "BIBREF8" }, { "start": 182, "end": 199, "text": "Gu et al., , 2019", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we leverage insertion based se-quence to sequence models for the semantic parsing problem that require only O(log(n)) decoding time to generate a parse. We enhance the insertion transformer (Stern et al., 2019) with the pointer mechanism, since the entities in the source sequence are ensured to appear in the target sequence. Our non-autoregressive based model can also boost the performance on the zero-shot and few-shot crosslingual setting, in which the model is trained on a high-resource language and tested on low-resource languages. We also introduce a copy source mechanism for the decoder to further improve the cross lingual transfer performance. In this way, the pointer embedding will be replaced by the corresponding outputs from the encoder. We test our proposed model on several well known datasets, TOP (Gupta et al., 2018) , ATIS (Price, 1990), SNIPS (Coucke et al., 2018) , MultiATIS++ (Xu et al., 2020) and multilingual TOP (Xia and Monti, 2021) . In summary, the main contributions of our work include:", "cite_spans": [ { "start": 205, "end": 225, "text": "(Stern et al., 2019)", "ref_id": "BIBREF26" }, { "start": 835, "end": 855, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF10" }, { "start": 884, "end": 905, "text": "(Coucke et al., 2018)", "ref_id": "BIBREF2" }, { "start": 920, "end": 937, "text": "(Xu et al., 2020)", "ref_id": "BIBREF36" }, { "start": 959, "end": 980, "text": "(Xia and Monti, 2021)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 To our knowledge, we are the first to apply the non-autoregressive framework to the semantic parsing task. Experiments show that our approach can reduce the decoding steps by 66.7%. 
By starting generation with the whole source sequence, we can further reduce the number of decoding steps by 82.4%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We achieve new state-of-the-art Exact Match (EM) scores on ATIS (89.14), SNIPS (91.00) and TOP (86.74, single model) datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We introduce a copy encoder outputs mechanism and achieve a significant improvement compared to the autoregressive decoder and sequence labeling on the zero-shot and fewshot setting in cross lingual transfer semantic parsing. Our approach surpasses the autoregressive baseline by 9 EM points on average over both simple (MultiATIS++) and complex (multilingual TOP) queries and matches the performance of the sequence labeling baseline on MultiATIS++.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we introduce the sequence generation via insertion operations and the pretrained models we leverage in our work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "We begin by briefly describing sequence generation via insertion, for a more complete description see (Stern et al., 2019) . Let x 1 , x 2 , ..., x m be the source sequence with length m and y 1 , y 2 , ..., y n denotes the target sequence with length n. We define the generated sequence h t at decoding step t. In the autoregressive setting, h t = y 1,2,...,t\u22121 . In insertion based decoding, h t is a subsequence of the target sequence y that preserves order. For example, if the final se-", "cite_spans": [ { "start": 102, "end": 122, "text": "(Stern et al., 2019)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Sequence Generation Via Insertion", "sec_num": "2.1" }, { "text": "quence y = [A, B, C, D, E], then h t = [B, E] is a valid intermediate subsequence while h t = [C, A]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Generation Via Insertion", "sec_num": "2.1" }, { "text": "is an invalid intermediate subsequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Generation Via Insertion", "sec_num": "2.1" }, { "text": "During decoding step t + 1, we insert tokens into h t . In the previous example, there are three available insertion slots: before token B, between B and E and after E. We always add special tokens such as bos (begin of the sequence) and eos (end of the sequence) to the subsequences. The number of available insertion slots will be T \u2212 1 where T is the length of h t including bos and eos. If we insert one token in all available slots, multiple tokens can be generated in one time step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Generation Via Insertion", "sec_num": "2.1" }, { "text": "In order to predict the token to insert in a slot, we form the representation for each insertion slot by pooling the representations of adjacent tokens. We have T \u2212 1 slots for a sequence with length T . Let r \u2208 R T \u00d7d , where T is the sequence length and d denotes the hidden size of the transformer decoder layer. 
All slots s \u2208 R (T \u22121)\u00d7d can be computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Generation Via Insertion", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s = concat(r[1 :], r[: \u22121]) \u2022 W s ,", "eq_num": "(1)" } ], "section": "Sequence Generation Via Insertion", "sec_num": "2.1" }, { "text": "where r[1 :] is the entire sequence representation excluding the first token, r[: \u22121] is the entire sequence representation excluding the last token and W s \u2208 R 2d\u00d7d is a trainable projection matrix. We apply sof tmax to the slot representations to obtain the token probabilities to insert at each slot.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Generation Via Insertion", "sec_num": "2.1" }, { "text": "Pretrained language models (Devlin et al., 2019; Lan et al., 2020; Dong et al., 2019; Peters et al., 2018) have sparked significant progress in a wide variety of natural language processing tasks. The basic idea of these models is to leverage the knowledge from large-scale corpora by using a language modeling objective to learn a representation for tokens and sentences. For downstream tasks, the learned representations are Figure 1 : Example of a simple query (left) and complex query (right). The complex query contains multiple intents and nested slots and can be represented as a tree structure. The two queries are represented as formatted sequences that are treated as the target sequence in the parsing task. IN is the intent, SL is the slot. Source tokens that appear in the target sequence are replaced by pointers with the form @n where n denotes its location in the source sequence. For complex queries, we can build the parse from top to bottom and left to right.", "cite_spans": [ { "start": 27, "end": 48, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF3" }, { "start": 49, "end": 66, "text": "Lan et al., 2020;", "ref_id": "BIBREF16" }, { "start": 67, "end": 85, "text": "Dong et al., 2019;", "ref_id": "BIBREF4" }, { "start": 86, "end": 106, "text": "Peters et al., 2018)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 427, "end": 435, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Pretrained Models", "sec_num": "2.2" }, { "text": "fine-tuned for the task. This improvement is even more significant when the downstream task has few labeled examples. We also follow this trend, and use the Transformer (Vaswani et al., 2017) based pretrained language model. We use the RoBERTa base (Liu et al., 2019) (we refer to this model as RoBERTa) as our query encoder to fairly compare with the previous method. This model has the same architecture as BERT base (Devlin et al., 2019) with several modifications during pretraining. It uses a dynamic masking scheme and removes the next sentence prediction task. RoBERTa is also trained with longer sentences and larger batch sizes with more training samples. 
For the multilingual zeroshot and few-shot semantic parsing task, we use XLM-R (Conneau et al., 2020) and multilingual BERT (Devlin et al., 2019) which are trained on text for more than 100 languages.", "cite_spans": [ { "start": 169, "end": 191, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF32" }, { "start": 419, "end": 440, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 744, "end": 766, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF0" }, { "start": 789, "end": 810, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Pretrained Models", "sec_num": "2.2" }, { "text": "In this section, we introduce our non-autoregressive sequence to sequence model for the semantic parsing problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "To train a sequence to sequence model, we prepare a source sequence and a target sequence. For the task of semantic parsing, the source sequence is the query in natural language. We construct the target sequence following Rongali et al. (2020) and Einolghozati et al. (2019) . Tokens in the source sequence that are present in the target sequence are replaced with the special pointer token ptr-n, where n is the position of that token in the source sequence. By using pointers in the decoder, we can drastically reduce the vocabulary size. We follow previous work and use symmetrical tags for intents and slots. Fig. 1 shows two examples, a simple query and a complex query with the corresponding target sequences. This formulation is also able to express other tagging problems like named entity recognition (NER).", "cite_spans": [ { "start": 222, "end": 243, "text": "Rongali et al. (2020)", "ref_id": "BIBREF24" }, { "start": 248, "end": 274, "text": "Einolghozati et al. (2019)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 613, "end": 619, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Query Formulation", "sec_num": "3.1" }, { "text": "We use the insertion transformer (Stern et al., 2019) as the base framework for the decoder. The insertion transformer is a modification of the original transformer decoder architecture (Vaswani et al., 2017) . The original transformer decoder predicts the next token based on the previously generated sequence while the insertion transformer can predict tokens for all the available slots. In this setup, tokens in the decoder side can attend to the entire sequence instead of only their left side. This means we remove the causal self-attention mask in the original decoder.", "cite_spans": [ { "start": 33, "end": 53, "text": "(Stern et al., 2019)", "ref_id": "BIBREF26" }, { "start": 186, "end": 208, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Insertion Transformer", "sec_num": "3.2" }, { "text": "Pointer Network: In the normal sequence to sequence model, target tokens are generated by feeding the final representations (decoder hidden states) through a feed-forward layer and applying a softmax function over the whole target vocabulary. This is slow when the vocabulary size is large . In parsing, the entities in the source sequence will always appear in the target sequence. 
We can leverage the pointer mechanism (Vinyals et al., 2015) to reduce the target vocabulary size by dividing the vocabulary into two types: tokens that are the parsing symbols like intent and slot names, and pointers to words in the source sequence.", "cite_spans": [ { "start": 421, "end": 443, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Pointer Network with Copy", "sec_num": "3.2.1" }, { "text": "Since we have two kinds of target tokens, we use two slightly different ways to obtain unnormalized probabilities for each type. For the tokens in the tagging vocabulary, we feed the hidden states generated by the insertion transformer and slot pooling to a dense layer to produce the logits of size V (tagging vocabulary). The tagging vocabulary contains only the parse symbols like intents and slots together with several special tokens such as bos, eos, the padding and unknown token. For the pointers, we compute the scaled dot product attention scores between the slot representation and the encoder output. The attention scores will be computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pointer Network with Copy", "sec_num": "3.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a(Q, K) = QK T \u221a h ,", "eq_num": "(2)" } ], "section": "Pointer Network with Copy", "sec_num": "3.2.1" }, { "text": "where query (Q) is the slot representation, the encoder outputs would be the key (K) and h is the hidden size of the query. Since the hidden size of encoder and decoder may be different, we also do a projection of query and key to the same dimension with two dense layers. Notice that the length of attention scores follows the length of the source sequence. Concatenating the attention scores with size n and the logits for the tagging vocabulary (V), we get the unnormalized distribution over V + n tokens. We apply the sof tmax function to obtain the final distribution over these tokens. Copy Mechanism: Rongali et al. (2020) use a set of special embeddings to represent pointer tokens. This is a problem because the pointer embedding cannot encode semantic information since it points to different words across examples. Instead, we reuse the encoder output that the pointer token points to. Without copying, the special pointer embedding would learn a special position based representation for the source language that is hard to transfer to other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pointer Network with Copy", "sec_num": "3.2.1" }, { "text": "Training the insertion decoder requires sampling source and target sequences from the training data. We randomly sample valid subsequences from the target sequence to mimic intermediate insertion steps. We first sample a length k \u2208 [0, n] for the subsequence, where n is the length of the target sequence (here n excludes the bos and eos tokens). We select k tokens from the target sequence and maintain the original ordering. This sampling helps the model learn to insert tokens from the initial generation state as well as intermediate generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "The insertion transformer can do parallel decoding since we can insert tokens in all available insertion slots. 
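As a concrete illustration, the following minimal sketch shows one parallel insertion step in PyTorch-style code; the tensor names, the greedy argmax and the no_insert index are our own simplifications, and tag_head stands for a linear layer over the tagging vocabulary:

import torch

def insertion_step(dec_hidden, enc_out, W_s, W_q, W_k, tag_head, partial_seq, no_insert=0):
    # dec_hidden: [T, d] decoder states for the current partial sequence (incl. bos/eos)
    # enc_out: [n, d_enc] encoder outputs for the n source tokens
    slots = torch.cat([dec_hidden[1:], dec_hidden[:-1]], dim=-1) @ W_s    # Eq. (1): [T-1, d]
    tag_logits = tag_head(slots)                                          # parse symbols: [T-1, V]
    q, k = slots @ W_q, enc_out @ W_k                                     # project both to a shared size h
    ptr_logits = (q @ k.t()) / (q.size(-1) ** 0.5)                        # Eq. (2): [T-1, n]
    probs = torch.softmax(torch.cat([tag_logits, ptr_logits], -1), -1)    # joint distribution over V + n tokens
    picks = probs.argmax(-1).tolist()                                     # one prediction per slot
    out = [partial_seq[0]]
    for i, tok in enumerate(picks):      # insert into every slot in parallel;
        if tok != no_insert:             # indices >= V denote a pointer to source position tok - V
            out.append(tok)
        out.append(partial_seq[i + 1])
    return out

At inference time this step is repeated, and decoding stops once every slot predicts the no-insertion token (Section 3.4).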
However, for each insertion slot, there may be multiple candidate tokens that can be inserted. For example, given a target sequence [A, B, C, D, E] and a valid subsequence [A, E] , the candidates for the slot between token A and E are B, C, D. We use the two different weighting schemes proposed in Stern et al. (2019) : uniform weights and balanced binary tree weights. Binary Tree Weights: The motivation for applying binary tree weighting is to make the decoding time nearly O(log(n)). Consider the example of sequence A, B, C, D, E again, the desired order of generation would be [bos, eos] ", "cite_spans": [ { "start": 284, "end": 290, "text": "[A, E]", "ref_id": null }, { "start": 411, "end": 430, "text": "Stern et al. (2019)", "ref_id": "BIBREF26" }, { "start": 696, "end": 706, "text": "[bos, eos]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "\u2192 [bos, C, eos] \u2192 [bos, A, C, E, eos] \u2192 [bos, A, B, C, D, E, eos].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "To achieve this goal, we weight the candidates according to their positions. For the sequence above, candidates in the span of [bos, eos] are A, B, C, D, E. We assign token C the highest weight, then lower weights for B, D and the lowest weights for A, E.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "Given a sampled subsequence with length k + 1, we have k insertion slots at location l = (0, 1, ..., k \u2212 1). Let c l 0 , ...c l i be the candidates for one location l. We can define a distance function d j for each token j in the candidates of l:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d l (j) = |j \u2212 i 2 |,", "eq_num": "(3)" } ], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "where i is the number of candidates in the location l. We then use the negative distance to compute the softmax based weighting (Rusu et al., 2016; Norouzi et al., 2016) :", "cite_spans": [ { "start": 128, "end": 147, "text": "(Rusu et al., 2016;", "ref_id": "BIBREF25" }, { "start": 148, "end": 169, "text": "Norouzi et al., 2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w l (j) = exp(\u2212d l (j)/\u03c4 ) i m=0 exp(\u2212d l (m)/\u03c4 ) .", "eq_num": "(4)" } ], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "Where \u03c4 is the temperature hyperparameter which allows us to control the sharpness of the weight distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "Uniform Weights: Instead of encouraging the model to follow a tree structure generation order, we can also treat the candidates equally. This performs better than the binary tree weights when we input the whole source sequence to the decoder as the initial sequence. In this case, we only need to insert the tagging tokens; the number of candidates is not as large as from scratch ([bos, eos] ). 
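To make the weighting concrete, here is a small illustrative sketch of the per-slot candidate weights from Eqs. (3)-(4); the candidate count in the example is hypothetical:

import math

def slot_weights(num_candidates, tau=1.0):
    # Candidates for one slot are indexed j = 0..i with i = num_candidates - 1 (Eq. 3).
    i = num_candidates - 1
    dist = [abs(j - i / 2) for j in range(num_candidates)]   # distance to the central candidate
    expd = [math.exp(-d / tau) for d in dist]
    return [e / sum(expd) for e in expd]                     # softmax over -d/tau (Eq. 4)

# Five candidates A, B, C, D, E for the span between bos and eos, with tau = 1.0:
# the central token C gets the largest weight, B and D less, A and E the least.
# As tau grows, the weights flatten towards the uniform case described above.
print(slot_weights(5))   # approx. [0.07, 0.18, 0.50, 0.18, 0.07]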
This uniform weighting can be easily done by taking \u03c4 \u2192 \u221e. Loss Function:The autoregressive sequence to sequence model uses the negative log-likelihood loss since in each decoding step, there is only one ground-truth label. However, in our approach, we have multiple candidates for each insertion slot. Therefore, we use the KL-divergence between the predicted token distribution and the ground truth distribution. Then the loss for insertion slot l is:", "cite_spans": [ { "start": 381, "end": 392, "text": "([bos, eos]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L slot (x, h t , l) = D KL ((p l |(x, h t ))||g l ),", "eq_num": "(5)" } ], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "where p l is the distribution output by the decoder and g l is the target distribution where we set the probability to 0 for tokens that are not candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "Note that the ground truth distribution depends on the weighting scheme for generation. Finally, we have the complete loss averaged over all the insertion slots:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(x, h t ) = 1 k + 1 k l=0 L slot (x, h t , l)", "eq_num": "(6)" } ], "section": "Training and Loss", "sec_num": "3.3" }, { "text": "Terminating generation for insertion based decoding is not as straightforward as autoregressive decoding, which only needs the no-insertion token to be predicted. Insertion decoding requires a similar mechanism for every insertion slot. When computing the slot-loss above, if there are no candidates for the slot we set the ground truth label as the no-insertion token. At inference time, we can stop decoding when all available slots predict the no-insertion token. However, there is a problem when combining the sampling method and this termination strategy. The no-insertion token is more frequent compared with other tokens. The same situation is also encountered in (Stern et al., 2019) . This is solved by adding a penalty hyperparameter to control the sequence length generated by the decoder. The hyperparameter is simply a scalar subtracted from the log probability of the no-insertion token for each insertion slot during inference. By doing this, we set a threshold for the difference between the no-insertion token and the second-best choice.", "cite_spans": [ { "start": 671, "end": 691, "text": "(Stern et al., 2019)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Termination Strategy", "sec_num": "3.4" }, { "text": "In this section, we introduce the datasets and baseline models we experiment with. Then we report the results of monolingual experiments and cross lingual transfer learning experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The SNIPS dataset (Coucke et al., 2018 ) is a public dataset aimed to improve the semantic parsing models. It contains seven different intents: SearchCreativeWork, GetWeather, BookRestaurant, PlayMusic, AddToPlaylist, RateBook, and SearchScreeningEvent. 
For each intent, there are about 2,000 training samples and 100 test samples. The SNIPS dataset consists of only simple queries.", "cite_spans": [ { "start": 18, "end": 38, "text": "(Coucke et al., 2018", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "SNIPS", "sec_num": "4.1.1" }, { "text": "The Airline Travel Information System (ATIS) (Price, 1990) dataset was originally collected in the early 1990s. The utterances are transcribed from audio recordings of flight reservation calls. Similar to SNIPS, it consists of only simple queries. ATIS contains seventeen different intents; however, nearly 70% of the queries belong to the FLIGHT intent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ATIS", "sec_num": "4.1.2" }, { "text": "Recently, a multilingual version of ATIS called MultiATIS++ was introduced by Xu et al. (2020). It is an extension of Multilingual ATIS (Upadhyay et al., 2018). Besides the original three languages (English, Hindi and Turkish), MultiATIS++ adds six new languages (Spanish, German, Chinese, Japanese, Portuguese and French) annotated by human experts, and consists of a total of 37,084 training samples and 7,859 test samples. We exclude Turkish in our experiments as its test set is limited in size.", "cite_spans": [ { "start": 132, "end": 155, "text": "(Upadhyay et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ATIS", "sec_num": "4.1.2" }, { "text": "Since ATIS and SNIPS contain only simple queries, the Facebook Task Oriented Parsing (TOP) dataset (Gupta et al., 2018) was introduced for complex hierarchical and nested queries that are more challenging. The dataset contains around 45,000 annotated queries with 25 intents and 36 slots, split into training (31,000), validation (5,000) and test (9,000) sets. As shown in Fig. 1 , the nested slots make it harder to parse using a simple sequence tagging model. We also run experiments on multilingual TOP (Xia and Monti, 2021) with Italian and Japanese data. In this dataset, the training and validation sets are machine translated while the test set is annotated by human experts.", "cite_spans": [ { "start": 99, "end": 119, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF10" }, { "start": 518, "end": 539, "text": "(Xia and Monti, 2021)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 386, "end": 392, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "TOP", "sec_num": "4.1.3" }, { "text": "Monolingual Baselines: For monolingual experiments, we select the algorithms reported in Rongali et al. (2020).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.2" }, { "text": "Table 2 : Decoding statistics on the TOP dataset (average target sequence length: 17.7 tokens). Avg. steps, followed by the number of tokens generated at decoding steps 1-9. AR-S2S-PTR: 17.7 avg. steps; 1, 1, 1, 1, 1, 1, 1, 1, 1. IT-S2S-PTR: 5.9 avg. steps; 1.0, 2.0, 3.96, 6.66, 6.24, 3.17, 1.6, 1.4, 1.2. IT-S2S-PTR (input-src): 3.1 avg. steps; 4.99, 2.92, 1.37, 1.00, 0.54, 0.27, 0.25, 1.0, 1.0. We see that the insertion based parser can fully utilize binary tree decoding. "input-src" means we set the whole source sequence as the initial decoder state.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.2" }, { "text": "Two of them leverage the power of RNNs, one with attention (Liu and Lane, 2016) and one without attention (Hakkani-T\u00fcr et al., 2016). Another model works completely with attention (Goo et al., 2018).", "cite_spans": [ { "start": 55, "end": 75, "text": "(Liu and Lane, 2016)", "ref_id": "BIBREF17" }, { "start": 98, "end": 124, "text": "(Hakkani-T\u00fcr et al., 2016)", "ref_id": "BIBREF11" }, { "start": 173, "end": 191, "text": "(Goo et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.2" }, { "text": "A Capsule Networks based model is also included (Zhang et al., 2019). Finally, we compare with the most recent approach, the autoregressive sequence to sequence model with pointers (Rongali et al., 2020). Simple tagging based models cannot easily handle the complex queries in the TOP dataset, so for TOP we compare with two previous models: a shift reduce parsing model (Gupta et al., 2018) and the autoregressive sequence to sequence model (Rongali et al., 2020). For all monolingual experiments, we use RoBERTa as our pretrained encoder. Cross lingual Baselines: For multilingual experiments (zero-shot and few-shot), we use a sequence labeling model based on multilingual BERT and an autoregressive sequence to sequence model (Rongali et al., 2020) as our baselines. To make a fair comparison, we also use the copy source mechanism in the AR model. For sequence labeling, instead of the F1 score, we also use exact match (EM), which requires that all intents and slots be labeled correctly by the model.", "cite_spans": [ { "start": 48, "end": 68, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF39" }, { "start": 176, "end": 198, "text": "(Rongali et al., 2020)", "ref_id": "BIBREF24" }, { "start": 377, "end": 397, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF10" }, { "start": 448, "end": 470, "text": "(Rongali et al., 2020)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.2" }, { "text": "We use the pretrained RoBERTa and mBERT as the encoder for our model. For the decoder, we use a 4-layer transformer decoder with 12 attention heads. The hidden size of the decoder is the same as the embedding size of the pretrained encoder. For optimization, we use Adam (Kingma and Ba, 2015) with \u03b2 1 = 0.9 and \u03b2 2 = 0.98, paired with the Noam learning rate scheduler (Vaswani et al., 2017), initialized with 0.15 and using 500 warmup steps. 
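For reference, a minimal sketch of the Noam schedule (Vaswani et al., 2017) with these settings; treating the 0.15 value as the multiplicative scale factor is an interpretation on our part:

def noam_lr(step, d_model, warmup=500, factor=0.15):
    # lr = factor * d_model^-0.5 * min(step^-0.5, step * warmup^-1.5):
    # the rate rises linearly over the warmup steps, then decays as 1/sqrt(step).
    # factor=0.15 is assumed to correspond to the value reported above.
    step = max(step, 1)
    return factor * (d_model ** -0.5) * min(step ** -0.5, step * warmup ** -1.5)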
For cross-lingual experiments, we freeze the encoder's embedding layer.", "cite_spans": [ { "start": 387, "end": 409, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Model Configuration", "sec_num": "4.3.1" }, { "text": "We use the exact match (EM) accuracy as the main metric to measure the performance of different models. By using EM, the entire parse sequence predicted by the model has to match the reference sequence; since it is not easy to apply the F1 score or the semantic error rate (Thomson et al., 2012) to complex queries, EM is the better choice for both simple and complex queries. We also report the intent classification accuracy for our models.", "cite_spans": [ { "start": 270, "end": 292, "text": "(Thomson et al., 2012)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual Results", "sec_num": "4.3.2" }, { "text": "Table 3 : Zero-shot cross lingual EM scores by our approach (IT), autoregressive baseline (AR) and sequence labeling baseline (mBERT). Results are averaged over four random seeds. For our approach, we initialize the decoder with source sequences. * indicates that the data format for the language is not consistent with the S2S model tokenizer.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Monolingual Results", "sec_num": "4.3.2" }, { "text": "Zero-shot EM on multilingual TOP (en / it / ja): IT-S2S-PTR 84.61 / 50.07 / 3.64; AR-S2S-PTR 85.4 / 41.06 / 0.64.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Results", "sec_num": "4.3.2" }, { "text": "Main Result: Table 1 shows the results from monolingual experiments on three datasets: TOP, ATIS and SNIPS. Our insertion transformer with pointers achieves new state-of-the-art performance on ATIS and SNIPS under the EM metric. For the TOP dataset, our model matches the best performance reported for single models (AR-S2S-PTR) despite being 3x faster.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 20, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Monolingual Results", "sec_num": "4.3.2" }, { "text": "We also experiment with starting generation with the entire source sequence as the initial state of the decoder. The performance degrades slightly in this case, likely due to a training/inference mismatch: the model is trained to generate the entire target sequence but is only asked to generate tags during inference. Decoding Steps: Since our approach can do parallel decoding, the number of decoding steps is only O(log(n)). Table 2 shows the statistics for the average number of decoding steps on the TOP dataset and the number of generated tokens per step. The insertion transformer with pointers needs only 5.9 steps while the autoregressive model needs 17.7, resulting in a 3x speedup with insertion decoding. The number of decoding steps can be further reduced to 3.1 when we start decoding with the source sequence as the initial sequence for the decoder. Theoretically, a perfect binary tree based insertion model should generate 2^(n-1) tokens at the n-th decoding step. We can see that our approach makes full use of parallel decoding during the first three steps, since the average length of TOP's test samples is only 17.7 tokens. Weighting Strategy: We run experiments with both binary tree weighting and uniform weighting on the TOP dataset. We set \u03c4 \u2208 [0.5, 1.0, 1.5, 2.0] and find that 1.0 performs best. Binary tree weights are better than uniform weights in the setting of decoding from scratch. 
However, uniform weights perform better when we decode from the whole source sequence.", "cite_spans": [], "ref_spans": [ { "start": 486, "end": 493, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Monolingual Results", "sec_num": "4.3.2" }, { "text": "For MultiATIS++, we train on the English training data and test on all languages. Table 3 shows the results of our approach compared to the autoregressive and sequence labeling baselines. We find that:", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Cross Lingual Transfer Results", "sec_num": "4.3.3" }, { "text": "\u2022 Our approach outperforms the baseline on most of the languages except Hindi and Japanese. For Japanese, we found inconsistencies in the tokenizer that are the likely cause of the degradation 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Lingual Transfer Results", "sec_num": "4.3.3" }, { "text": "\u2022 The autoregressive baseline performs poorly in cross lingual experiments. For example, it achieves only 17.22 EM on the French test set while the other two systems achieve > 40 EM. This highlights a weakness of autoregressive parsers: they cannot produce parses directly from the encoded representations of the source sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Lingual Transfer Results", "sec_num": "4.3.3" }, { "text": "\u2022 The word order in Hindi and Japanese differs from that of the other languages, which may limit the performance of transfer learning for S2S parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Lingual Transfer Results", "sec_num": "4.3.3" }, { "text": "We also test on the multilingual TOP dataset (Xia and Monti, 2021), which extends the TOP dataset to other languages, providing human-annotated Italian and Japanese test sets. TOP contains a much larger test set than ATIS. Table 4 shows the zero-shot results and Table 5 shows the few-shot results.", "cite_spans": [ { "start": 45, "end": 66, "text": "(Xia and Monti, 2021)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 231, "end": 238, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 271, "end": 278, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Cross Lingual Transfer Results", "sec_num": "4.3.3" }, { "text": "In the zero-shot setting, our approach achieves a 50.07 EM score on Italian while AR achieves only 41.06. Neither model achieves good zero-shot performance on Japanese. We speculate on this behavior when discussing the few-shot results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Lingual Transfer Results", "sec_num": "4.3.3" }, { "text": "In the few-shot setting, we finetune the model in two stages: first on the entire English data and then on 10, 50, or 100 training samples from the other languages. Our approach outperforms the AR baseline in all few-shot settings. For Italian, increasing the number of training samples from 10 to 100 does not result in much gain, since knowledge from English can readily be transferred to Italian, probably due to the similarity of the languages. To further improve the performance on Italian, the model may need many more training samples. However, for Japanese, little knowledge (such as word order) can be transferred from English, so both models behave as if trained from scratch. There may be two reasons for this: 1) the word order differs from that of English. 
2) the annotated target is aligned with the original words in the multilingual TOP dataset, so the order of the pointers is mixed. Thus, we see the EM score improve drastically as the number of training samples increases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Lingual Transfer Results", "sec_num": "4.3.3" }, { "text": "For the ablation study, we separate the experiments into monolingual and multilingual settings, as above. For the multilingual experiments, we use the Italian portion of the multilingual TOP dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.4" }, { "text": "From Table 6 , we observe that the copy mechanism improves performance in the monolingual setting. For the hyperparameter \u03c4, recall that a higher value for \u03c4 results in flatter (more uniform) weights for the candidates. \u03c4 = 1.0 provides the best balance between equally weighting the candidates and heavily weighting the next token to be inserted. However, we find that when initializing the decoder with source sequences, uniform weights perform better than binary tree weights.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.4" }, { "text": "Table 6 : The ablation study for the \u03c4 parameter and copying the source embedding vector vs. no copy in the monolingual setting (EM on TOP): IT-S2S-PTR 86.74; \u03c4 = 0.1: 74.84; \u03c4 = 0.5: 85.47; \u03c4 = 1.0: 86.74; \u03c4 = 1.5: 86.33; no copy: 86.09. The results show the importance of copying source embeddings. We also observe that small values of \u03c4 (i.e. weighting the central token for insertion heavily) degrade performance.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.4" }, { "text": "Table 7 : The ablation study for source embedding copying and starting generation from source tokens in the cross-lingual setting. Results are zero-shot EM in Italian: IT-S2S-PTR-Best 50.07; -copy 47.00; -input-src 42.03; AR-S2S-PTR-BEST 41.06; -copy 30.87. For the IT-S2S model, both copying and starting generation with source tokens contribute to zero-shot performance.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.4" }, { "text": "For cross-lingual experiments, we introduce two components to improve the performance. Table 7 shows that both of them help in the zero-shot transfer setting. From the results, we can observe that initializing the decoder with the source sequence plays an important role in zero-shot transfer, which is impossible for autoregressive models. The copy mechanism is again beneficial for both sequence to sequence models, improving the performance of even the autoregressive model from 30.87 EM to 41.06 EM in the zero-shot Italian experiment.", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 94, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.4" }, { "text": "Monolingual Semantic Parsing: Task-oriented semantic parsing for intent classification and slot detection is usually achieved by sequence labeling. Normally, the system first classifies the query based on its sentence-level semantics and then labels each word in the query. Conditional Random Fields (CRFs) (Peters et al., 2018; Lan et al., 2020; Jiao et al., 2006) were among the most successful algorithms applied to this task before deep learning dominated the area. 
Deep learning algorithms boost the performance of semantic parsing, especially those using recurrent neural networks (Liu and Lane, 2016; Hakkani-T\u00fcr et al., 2016). Other architectures have also been explored, such as convolutional neural networks (Kim, 2014) and capsule networks (Zhang et al., 2019). Cross Lingual Transfer Semantic Parsing: Multilingual natural language understanding has been studied for a variety of tasks, including part-of-speech (POS) tagging (Plank and Agi\u0107, 2018; Yarowsky et al., 2001; T\u00e4ckstr\u00f6m et al., 2013), named entity recognition (Zirikly and Hagiwara, 2015; Tsai et al., 2016; Xie et al., 2018) and semantic parsing (Xu et al., 2020). Before the advent of pretrained cross-lingual language models, researchers leveraged the representations learned by multilingual neural machine translation (NMT). Another approach is to use NMT to translate between the source and the target language; however, this is challenging for sequence tagging tasks, since labels in the source language need to be projected onto the translated sentences (Xu et al., 2020). Pretrained cross-lingual language models (Devlin et al., 2019; Conneau and Lample, 2019) have achieved great success in various multilingual natural language tasks.", "cite_spans": [ { "start": 312, "end": 333, "text": "(Peters et al., 2018;", "ref_id": "BIBREF21" }, { "start": 334, "end": 351, "text": "Lan et al., 2020;", "ref_id": "BIBREF16" }, { "start": 352, "end": 370, "text": "Jiao et al., 2006)", "ref_id": "BIBREF13" }, { "start": 585, "end": 605, "text": "(Liu and Lane, 2016;", "ref_id": "BIBREF17" }, { "start": 606, "end": 631, "text": "Hakkani-T\u00fcr et al., 2016)", "ref_id": "BIBREF11" }, { "start": 711, "end": 722, "text": "(Kim, 2014)", "ref_id": "BIBREF14" }, { "start": 744, "end": 764, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF39" }, { "start": 929, "end": 951, "text": "(Plank and Agi\u0107, 2018;", "ref_id": "BIBREF22" }, { "start": 952, "end": 974, "text": "Yarowsky et al., 2001;", "ref_id": "BIBREF38" }, { "start": 975, "end": 998, "text": "T\u00e4ckstr\u00f6m et al., 2013)", "ref_id": "BIBREF28" }, { "start": 1026, "end": 1054, "text": "(Zirikly and Hagiwara, 2015;", "ref_id": "BIBREF40" }, { "start": 1055, "end": 1073, "text": "Tsai et al., 2016;", "ref_id": "BIBREF30" }, { "start": 1074, "end": 1091, "text": "Xie et al., 2018)", "ref_id": "BIBREF35" }, { "start": 1113, "end": 1130, "text": "(Xu et al., 2020)", "ref_id": "BIBREF36" }, { "start": 1531, "end": 1548, "text": "(Xu et al., 2020)", "ref_id": "BIBREF36" }, { "start": 1592, "end": 1612, "text": "(Devlin et al., 2019", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper, we tackle two shortcomings of autoregressive sequence to sequence semantic parsing models: 1) expensive decoding and 2) poor cross-lingual performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We propose 1) an insertion transformer with pointers and 2) a copy mechanism that replaces the pointer embedding with the corresponding encoder outputs, to mitigate these two problems. Our model can achieve O(log(n)) decoding time with parallel decoding. For the specific task of semantic parsing, we can further reduce the number of decoding steps by initializing the decoder sequence with the whole source sequence. Our model achieves new state-of-the-art performance on both simple queries (ATIS and SNIPS) and complex queries (TOP). 
In crosslingual transfer, our approach surpasses the baselines in the zero-shot setting by 9 EM points on average across 9 languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Chinese is tokenized at the character level in mBERT, while Katakana/Hiragana are tokenized with whitespace. Data in MultiATIS++ is mixed in these two fashions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the reviewers for their excellent feedback. Special thanks also to Emilio Monti and Menglin Xia for their help with the multilingual TOP dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Cross-lingual language model pretraining", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "7059--7069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis CONNEAU and Guillaume Lample. 2019. Cross-lingual language model pretraining. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Ad- vances in Neural Information Processing Systems 32, pages 7059-7069. 
Curran Associates, Inc.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Snips voice platform: an embedded spoken language understanding system for privateby-design voice interfaces", "authors": [ { "first": "Alice", "middle": [], "last": "Coucke", "suffix": "" }, { "first": "Alaa", "middle": [], "last": "Saade", "suffix": "" }, { "first": "Adrien", "middle": [], "last": "Ball", "suffix": "" }, { "first": "Th\u00e9odore", "middle": [], "last": "Bluche", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Caulier", "suffix": "" }, { "first": "David", "middle": [], "last": "Leroy", "suffix": "" }, { "first": "Cl\u00e9ment", "middle": [], "last": "Doumouro", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "Gisselbrecht", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Caltagirone", "suffix": "" }, { "first": "Thibaut", "middle": [], "last": "Lavril", "suffix": "" }, { "first": "Ma\u00ebl", "middle": [], "last": "Primet", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Dureau", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alice Coucke, Alaa Saade, Adrien Ball, Th\u00e9odore Bluche, Alexandre Caulier, David Leroy, Cl\u00e9ment Doumouro, Thibault Gisselbrecht, Francesco Calt- agirone, Thibaut Lavril, Ma\u00ebl Primet, and Joseph Dureau. 2018. Snips voice platform: an embedded spoken language understanding system for private- by-design voice interfaces. CoRR, abs/1805.10190.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/n19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unified language model pre-training for natural language understanding and generation", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wenhui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hsiao-Wuen", "middle": [], "last": "Hon", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "13042--13054", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xi- aodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understand- ing and generation. In Advances in Neural Informa- tion Processing Systems, pages 13042-13054.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Multilingual semantic parsing and code-switching", "authors": [ { "first": "Long", "middle": [], "last": "Duong", "suffix": "" }, { "first": "Hadi", "middle": [], "last": "Afshar", "suffix": "" }, { "first": "Dominique", "middle": [], "last": "Estival", "suffix": "" }, { "first": "Glen", "middle": [], "last": "Pink", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Philip R Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "379--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip R Cohen, and Mark Johnson. 2017. Multilingual semantic parsing and code-switching. In Proceedings of the 21st Conference on Compu- tational Natural Language Learning (CoNLL 2017), pages 379-389.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Improving semantic parsing for task oriented dialog", "authors": [ { "first": "Arash", "middle": [], "last": "Einolghozati", "suffix": "" }, { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Rushin", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Mrinal", "middle": [], "last": "Mohit", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, and Luke Zettlemoyer. 2019. Improving semantic parsing for task oriented dialog. 
CoRR, abs/1902.06000.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Slot-gated modeling for joint slot filling and intent prediction", "authors": [ { "first": "Guang", "middle": [], "last": "Chih-Wen Goo", "suffix": "" }, { "first": "Yun-Kai", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Chih-Li", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Tsung-Chieh", "middle": [], "last": "Huo", "suffix": "" }, { "first": "Keng-Wei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yun-Nung", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "753--757", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun- Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Pa- pers), pages 753-757.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Nonautoregressive neural machine translation", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "O", "middle": [ "K" ], "last": "Victor", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiatao Gu, James Bradbury, Caiming Xiong, Vic- tor O.K. Li, and Richard Socher. 2018. Non- autoregressive neural machine translation. In Inter- national Conference on Learning Representations.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Levenshtein transformer", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Changhan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "11181--11191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural In- formation Processing Systems, pages 11181-11191.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Semantic parsing for task oriented dialog using hierarchical representations", "authors": [ { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Rushin", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Mrinal", "middle": [], "last": "Mohit", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2787--2792", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Ku- mar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representa- tions. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2787-2792.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Multi-domain joint semantic frame parsing using bi-directional rnn-lstm", "authors": [ { "first": "D", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "G", "middle": [], "last": "T\u00fcr", "suffix": "" }, { "first": "A", "middle": [], "last": "Elikyilmaz", "suffix": "" }, { "first": "Yun-Nung", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "L", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Ye-Yi", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Hakkani-T\u00fcr, G. T\u00fcr, A. \u00c7 elikyilmaz, Yun-Nung Chen, Jianfeng Gao, L. Deng, and Ye-Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In INTERSPEECH.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semisupervised conditional random fields for improved sequence segmentation and labeling", "authors": [ { "first": "Feng", "middle": [], "last": "Jiao", "suffix": "" }, { "first": "Shaojun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chi-Hoon", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Russell", "middle": [], "last": "Greiner", "suffix": "" }, { "first": "Dale", "middle": [], "last": "Schuurmans", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "209--216", "other_ids": { "DOI": [ "10.3115/1220175.1220202" ] }, "num": null, "urls": [], "raw_text": "Feng Jiao, Shaojun Wang, Chi-Hoon Lee, Russell Greiner, and Dale Schuurmans. 2006. Semi- supervised conditional random fields for improved sequence segmentation and labeling. In Proceed- ings of the 21st International Conference on Compu- tational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 209-216, Sydney, Australia. Association for Com- putational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": { "DOI": [ "10.3115/v1/D14-1181" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "ALBERT: a lite BERT for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: a lite BERT for self-supervised learning of language representations. In Interna- tional Conference on Learning Representations.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Attention-based recurrent neural network models for joint intent detection and slot filling", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Lane", "suffix": "" } ], "year": 2016, "venue": "17th Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "685--689", "other_ids": { "DOI": [ "10.21437/Interspeech.2016-1352" ] }, "num": null, "urls": [], "raw_text": "Bing Liu and Ian Lane. 2016. Attention-based recur- rent neural network models for joint intent detection and slot filling. In Interspeech 2016, 17th Annual Conference of the International Speech Communica- tion Association, San Francisco, CA, USA, Septem- ber 8-12, 2016, pages 685-689. ISCA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Investigation of recurrent-neuralnetwork architectures and learning methods for spoken language understanding", "authors": [ { "first": "Gr\u00e9goire", "middle": [], "last": "Mesnil", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2013, "venue": "Interspeech", "volume": "", "issue": "", "pages": "3771--3775", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gr\u00e9goire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neural- network architectures and learning methods for spo- ken language understanding. 
In Interspeech, pages 3771-3775.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Reward augmented maximum likelihood for neural structured prediction", "authors": [ { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Dale", "middle": [], "last": "Schuurmans", "suffix": "" } ], "year": 2016, "venue": "Advances In Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1723--1731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Norouzi, Samy Bengio, Navdeep Jaitly, Mike Schuster, Yonghui Wu, Dale Schuurmans, et al. 2016. Reward augmented maximum likelihood for neural structured prediction. In Advances In Neural Information Processing Systems, pages 1723-1731.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distant supervision from disparate sources for low-resource partof-speech tagging", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "", "middle": [], "last": "Agi\u0107", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "614--620", "other_ids": { "DOI": [ "10.18653/v1/D18-1061" ] }, "num": null, "urls": [], "raw_text": "Barbara Plank and\u017deljko Agi\u0107. 2018. Distant super- vision from disparate sources for low-resource part- of-speech tagging. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 614-620, Brussels, Belgium. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Evaluation of spoken language systems: The ATIS domain", "authors": [ { "first": "Patti", "middle": [ "Price" ], "last": "", "suffix": "" } ], "year": 1990, "venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patti Price. 1990. Evaluation of spoken language sys- tems: The ATIS domain. In Speech and Natural Language: Proceedings of a Workshop Held at Hid- den Valley, Pennsylvania, June 24-27, 1990.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Don't parse, generate! a sequence to sequence architecture for task-oriented semantic parsing", "authors": [ { "first": "Subendhu", "middle": [], "last": "Rongali", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Soldaini", "suffix": "" }, { "first": "Emilio", "middle": [], "last": "Monti", "suffix": "" }, { "first": "Wael", "middle": [], "last": "Hamza", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The Web Conference 2020", "volume": "", "issue": "", "pages": "2962--2968", "other_ids": {}, "num": null, "urls": [], "raw_text": "Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza. 2020. Don't parse, generate! a se- quence to sequence architecture for task-oriented se- mantic parsing. In Proceedings of The Web Confer- ence 2020, pages 2962-2968.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Policy distillation", "authors": [ { "first": "Andrei", "middle": [ "A" ], "last": "Rusu", "suffix": "" }, { "first": "Sergio", "middle": [ "Gomez" ], "last": "Colmenarejo", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "G\u00fcl\u00e7ehre", "suffix": "" }, { "first": "James", "middle": [], "last": "Desjardins", "suffix": "" }, { "first": "Razvan", "middle": [], "last": "Kirkpatrick", "suffix": "" }, { "first": "Volodymyr", "middle": [], "last": "Pascanu", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Raia", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "", "middle": [], "last": "Hadsell", "suffix": "" } ], "year": 2016, "venue": "4th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrei A. Rusu, Sergio Gomez Colmenarejo, \u00c7 aglar G\u00fcl\u00e7ehre, Guillaume Desjardins, James Kirk- patrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. 2016. Policy distil- lation. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Insertion transformer: Flexible sequence generation via insertion operations", "authors": [ { "first": "Mitchell", "middle": [], "last": "Stern", "suffix": "" }, { "first": "William", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2019, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. 
In ICML.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Neural architectures for multilingual semantic parsing", "authors": [ { "first": "Raymond Hendy", "middle": [], "last": "Susanto", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "38--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raymond Hendy Susanto and Wei Lu. 2017. Neural architectures for multilingual semantic parsing. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 38-44.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Token and type constraints for cross-lingual part-of-speech tagging", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mc-Donald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "Transactions of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan Mc- Donald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, 1:1-12.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "N-best error simulation for training spoken dialogue systems", "authors": [ { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Pirros", "middle": [], "last": "Tsiakoulis", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2012, "venue": "2012 IEEE Spoken Language Technology Workshop (SLT)", "volume": "", "issue": "", "pages": "37--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blaise Thomson, Milica Gasic, Matthew Henderson, Pirros Tsiakoulis, and Steve Young. 2012. N-best er- ror simulation for training spoken dialogue systems. In 2012 IEEE Spoken Language Technology Work- shop (SLT), pages 37-42. IEEE.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Cross-lingual named entity recognition via wikification", "authors": [ { "first": "Chen-Tse", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "219--228", "other_ids": { "DOI": [ "10.18653/v1/K16-1022" ] }, "num": null, "urls": [], "raw_text": "Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. Cross-lingual named entity recognition via wikifica- tion. In Proceedings of The 20th SIGNLL Confer- ence on Computational Natural Language Learning, pages 219-228, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "2018. 
(almost) zero-shot cross-lingual spoken language understanding", "authors": [ { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Gokhan", "middle": [], "last": "T\u00fcr", "suffix": "" }, { "first": "Hakkani-T\u00fcr", "middle": [], "last": "Dilek", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Heck", "suffix": "" } ], "year": null, "venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "6034--6038", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shyam Upadhyay, Manaal Faruqui, Gokhan T\u00fcr, Hakkani-T\u00fcr Dilek, and Larry Heck. 2018. (almost) zero-shot cross-lingual spoken language understand- ing. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6034-6038. IEEE.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Pointer networks", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Meire", "middle": [], "last": "Fortunato", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2692--2700", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in neural in- formation processing systems, pages 2692-2700.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Multilingual neural semantic parsing with pretrained encoders", "authors": [ { "first": "Menglin", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Emilio", "middle": [], "last": "Monti", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Menglin Xia and Emilio Monti. 2021. Multilingual neural semantic parsing with pretrained encoders. In Proceedings of the 16th European Chapter of the As- sociation for Computational Linguistics. 
Submitted.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Neural crosslingual named entity recognition with minimal resources", "authors": [ { "first": "Jiateng", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "369--379", "other_ids": { "DOI": [ "10.18653/v1/D18-1034" ] }, "num": null, "urls": [], "raw_text": "Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural cross- lingual named entity recognition with minimal re- sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369-379, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "End-to-end slot alignment and recognition for crosslingual NLU. CoRR, abs", "authors": [ { "first": "Weijia", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Batool", "middle": [], "last": "Haider", "suffix": "" }, { "first": "Saab", "middle": [], "last": "Mansour", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weijia Xu, Batool Haider, and Saab Mansour. 2020. End-to-end slot alignment and recognition for cross- lingual NLU. CoRR, abs/2004.14353.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Breaking the softmax bottleneck: A high-rank RNN language model", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. 2018. Breaking the softmax bot- tleneck: A high-rank RNN language model. In Inter- national Conference on Learning Representations.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Inducing multilingual text analysis tools via robust projection across aligned corpora", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Wicentowski", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the First International Conference on Human Language Technology Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky, Grace Ngai, and Richard Wicen- towski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. 
In Proceedings of the First International Conference on Human Language Technology Research.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Joint slot filling and intent detection via capsule neural networks", "authors": [ { "first": "Chenwei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yaliang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Du", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5259--5267", "other_ids": { "DOI": [ "10.18653/v1/P19-1519" ] }, "num": null, "urls": [], "raw_text": "Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip Yu. 2019. Joint slot filling and intent detec- tion via capsule neural networks. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5259-5267, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Crosslingual transfer of named entity recognizers without parallel corpora", "authors": [ { "first": "Ayah", "middle": [], "last": "Zirikly", "suffix": "" }, { "first": "Masato", "middle": [], "last": "Hagiwara", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "390--396", "other_ids": { "DOI": [ "10.3115/v1/P15-2064" ] }, "num": null, "urls": [], "raw_text": "Ayah Zirikly and Masato Hagiwara. 2015. Cross- lingual transfer of named entity recognizers without parallel corpora. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 390-396, Beijing, China. Associa- tion for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "text": "as baselines for ATIS and SNIPS.", "html": null, "num": null, "type_str": "table", "content": "
Method | TOP EM | TOP IC | ATIS EM | ATIS IC | SNIPS EM | SNIPS IC
Joint BiRNN (Hakkani-T\u00fcr et al., 2016) | - | - | 80.70 | 92.60 | 73.20 | 96.90
Attention BiRNN (Liu and Lane, 2016) | - | - | 78.90 | 91.10 | 74.10 | 96.70
Slot Gated Full Attention (Goo et al., 2018) | - | - | 82.20 | 93.60 | 75.50 | 97.00
CapsuleNLU (Zhang et al., 2019) | - | - | 83.40 | 95.00 | 80.90 | 97.30
SR(S)+ELMO+SVMRank
" }, "TABREF2": { "text": "Exact Match and Intent Classification scores for on the test set. Input-src means the initial input of the decoder is the whole source sequence. For the shift reduce parsing models, E denotes the ensemble model and S is the single model.", "html": null, "num": null, "type_str": "table", "content": "" }, "TABREF3": { "text": "S2S-PTR 87.23 50.06 39.30 39.46 46.78 11.42 28.72 12.60 32.69 AR-S2S-PTR 86.83 40.72 33.38 34.00 17.22 7.45 23.74 10.04 23.77 mBERT 86.33 48.46 38.56 39.12 42.98 15.22 21.89 23.29 32.78", "html": null, "num": null, "type_str": "table", "content": "
enesptdefrhizhja *avg
IT-
to
" }, "TABREF4": { "text": "Zero-shot EM scores on multilingual TOP dataset. Model is trained on English only.", "html": null, "num": null, "type_str": "table", "content": "" }, "TABREF6": { "text": "Few-shot EM scores on multilingual TOP dataset with model pretrained on English. Training samples used in few-shot are sampled from the test set and excluded during testing.", "html": null, "num": null, "type_str": "table", "content": "
" } } } }