|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:55:15.018373Z" |
|
}, |
|
"title": "Multilingual Paraphrase Generation For Bootstrapping New Features in Task-Oriented Dialog Systems", |
|
"authors": [ |
|
{ |
|
"first": "Subhadarshi", |
|
"middle": [], |
|
"last": "Panda", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "City University of New York", |
|
"location": { |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "spanda@gc.cuny.edu" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Tirkaz", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Amazon Alexa AI", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Falke", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Amazon Alexa AI", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Lehnen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Amazon Alexa AI", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "plehnen@amazon.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The lack of labeled training data for new features is a common problem in rapidly changing real-world dialog systems. As a solution, we propose a multilingual paraphrase generation model that can be used to generate novel utterances for a target feature and target language. The generated utterances can be used to augment existing training data to improve intent classification and slot labeling models. We evaluate the quality of generated utterances using intrinsic evaluation metrics and by conducting downstream evaluation experiments with English as the source language and nine different target languages. Our method shows promise across languages, even in a zero-shot setting where no seed data is available.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The lack of labeled training data for new features is a common problem in rapidly changing real-world dialog systems. As a solution, we propose a multilingual paraphrase generation model that can be used to generate novel utterances for a target feature and target language. The generated utterances can be used to augment existing training data to improve intent classification and slot labeling models. We evaluate the quality of generated utterances using intrinsic evaluation metrics and by conducting downstream evaluation experiments with English as the source language and nine different target languages. Our method shows promise across languages, even in a zero-shot setting where no seed data is available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Spoken language understanding is a core problem in task oriented dialog systems with the goal of understanding and formalizing the intent expressed by an utterance (Tur and De Mori, 2011) . It is often modeled as intent classification (IC), an utterance-level multi-class classification problem, and slot labeling (SL), a sequence labeling problem over the utterance's tokens. In recent years, approaches that train joint models for both tasks and that leverage powerful pre-trained neural models greatly improved the state-of-the-art performance on available benchmarks for IC and SL (Louvan and Magnini, 2020; Weld et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 187, |
|
"text": "(Tur and De Mori, 2011)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 585, |
|
"end": 611, |
|
"text": "(Louvan and Magnini, 2020;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 612, |
|
"end": 630, |
|
"text": "Weld et al., 2021)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A common challenge in real-world systems is the problem of feature bootstrapping: If a new feature should be supported, the label space needs to be extended with new intent or slot labels, and the model needs to be retrained to learn to classify corresponding utterances. However, labeled examples for the new feature are typically limited to a small set of seed examples, as the collection of more annotations would make feature expansion costly and slow. As a possible solution, previous work explored the automatic generation of paraphrases to augment the seed data (Malandrakis et al., 2019; Cho et al., 2019; Jolly et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 569, |
|
"end": 595, |
|
"text": "(Malandrakis et al., 2019;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 613, |
|
"text": "Cho et al., 2019;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 614, |
|
"end": 633, |
|
"text": "Jolly et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we study feature bootstrapping in the case of a multilingual dialog system. Many large-scale real-world dialog systems, e.g. Apple's Siri, Amazon's Alexa and Google's Assistant, support interactions in multiple languages. In such systems, the coverage of languages and the range of features is continuously expanded. That can lead to differences in the supported intent and slot labels across languages, in particular if a new language is added later or if new features are not rolled out to all languages simultaneously. As a consequence, labeled data for a feature can be available in one language, but limited or completely absent in another. With multilingual paraphrase generation, we can benefit from this setup and improve data augmentation for data-scarce languages via cross-lingual transfer from data-rich languages. As a result, the data augmentation can not only be applied with seed data, i.e. in a few-shot setting, but even under zero-shot conditions with no seeds at all for the target language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To address this setup, we follow the recent work of Jolly et al. (2020) , which proposes to use an encoder-decoder model that maps from structured meaning representations to corresponding utterances. Because such an input is language-agnostic, it is particularly well-suited for the multilingual setup. We make the following extensions: First, we port their model to a transformer-based architecture and allow multilingual training by adding the desired target language as a new input to the conditional generation. Second, we let the model generate slot labels along with tokens to alleviate the need for additional slot projection techniques. And third, we introduce improved paraphrase decoding methods that leverage a model-based selec-tion strategy. With that, we are able to generate labeled data for a new feature even in the zero-shot setting where no seeds are available at all.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 71, |
|
"text": "Jolly et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We evaluate our approach by simulating a crosslingual feature bootstrapping setting, either fewshot or zero-shot, on MultiATIS, a common IC/SL benchmark spanning nine languages. The experiments compare against several alternative methods, including previous work for mono-lingual paraphrase generation and machine translation. We find that our method produces paraphrases of high novelty and diversity and using it for IC/SL training shows promising downstream classification performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Various studies have explored paraphrase generation for dialog systems. Bowman et al. (2016) showed that generating sentences from a continuous latent space is possible using a variational autoencoder model and provided guidelines on how to train such a generation model. However, our model uses an encoder-decoder approach which can handle the intent and language as categorical inputs in addition to the sequence input. Malandrakis et al. (2019) explored a variety of controlled paraphrase generation approaches for data augmentation and proposed to use conditional variational autoencoders which they showed obtained the best results. Our method is different as it uses a conditional seq2seq model that can generate text from any sequence of slots and does not require an utterance as an input. Xia et al. (2020) propose a transformer-based conditional variational autoencoder for few shot utterance generation where the latent space represents the intent as two independent parts (domain and action). Our approach is different since it models the language and intent of the generation that can be controlled explicitly. Also, our model is the first to enable zero-shot utterance generation. Cho et al. (2019) generate paraphrases for seed examples with a transformer seq2seq model and self-label them with a baseline intent and slot model. We follow a similar approach but our model generates utterances from a sequence of slots rather than an utterance, which enables an explicitly controlled generation. Also the number of seed utterances we use is merely 20 for the few shot setup unlike around 1M seed para-carrier phrase pairs in Cho et al. (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 92, |
|
"text": "Bowman et al. (2016)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 798, |
|
"end": 815, |
|
"text": "Xia et al. (2020)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1195, |
|
"end": 1212, |
|
"text": "Cho et al. (2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1639, |
|
"end": 1656, |
|
"text": "Cho et al. (2019)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Several other studies follow a text-to-text ap-proach and assume training data in the form of paraphrase pairs for training paraphrase generation models in a single language Li et al., 2018 Li et al., , 2019 . Our approach is focused towards generating utterances in the dialog domain that can generate utterances from a sequence of slots conditioned on both intent and language. Jolly et al. (2020) showed that an interpretationto-text model can be used with shuffling-based sampling techniques to generate diverse and novel paraphrases from small amounts of seed data, that improve accuracy when augmenting to the existing training data. Our approach is different as our model can generate the slot annotations along with the the utterance, which are necessary for the slot labeling task. Our model can be seen as an extension of the model by Jolly et al. (2020) to a transformer based model, with the added functionality of controlling the language in which the utterance generation is needed, which in turn enables zero shot generation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 189, |
|
"text": "Li et al., 2018", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 207, |
|
"text": "Li et al., , 2019", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 399, |
|
"text": "Jolly et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 845, |
|
"end": 864, |
|
"text": "Jolly et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Using large pre-trained models has also been shown to be effective for paraphrase generation. Chen et al. (2020) for instance show the effectiveness of using GPT-2 (Radford et al., 2019) for generating text from tabular data (a set of attributevalue pairs). Our model, however, does not rely on pre-trained weights from another model such as GPT-2, is scalable, and can be applied to training data from any domain, for instance, dialog domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 112, |
|
"text": "Chen et al. (2020)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 186, |
|
"text": "(Radford et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Beyond paraphrase generation, several other techniques have been proposed for feature bootstrapping. Machine translation can be used from data-rich to data-scarce languages (Gaspers et al., 2018; Xu et al., 2020) . Cross-lingual transfer learning can also leverage use existing data in other languages (Do and Gaspers, 2019) . If a feature is already being actively used, feedback signals from users, such as paraphrases or interruptions, can be used to obtain additional training data (Muralidharan et al., 2019; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 195, |
|
"text": "(Gaspers et al., 2018;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 212, |
|
"text": "Xu et al., 2020)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 324, |
|
"text": "(Do and Gaspers, 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 513, |
|
"text": "(Muralidharan et al., 2019;", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We want to augment existing labeled utterances by generating additional novel utterances in a desired target language. In our case, existing data consists of feature-unrelated data (intents and slots already supported) spanning all languages and featurerelated data, which is available in a source language but is small (few-shot) or not available (zero shot) in other languages. For generation, we first extract the intent and slot types from the available data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We then generate a new utterance by conditioning a multilingual language model on the intent, slot types and the target language. We refer to utterances that have the same intent and slot types as paraphrases of each other since they convey the same meaning in the context of the SLU system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In order to generate paraphrases, we train a multilingual paraphrase generation model that generates a paraphrase given a language, an intent and a set of slot types. The model architecture is outlined in Figure 1 . The model uses self-attention based encoder and decoder similar to the transformer (Vaswani et al., 2017) . The encoder of the model receives as input the language embedding and the intent embedding, which are added to the slot embedding. Unlike the transformer model (Vaswani et al., 2017) , we do not use the positional embedding in the encoder. This is because the order of the slot types in the input sequence does not matter and is thus made indistinguishable for the encoder. In order to generate paraphrases which can be used for data augmentation, we would need the slot annotations and the intents of the generations. Note that we already know the intent of the generated paraphrase since it is the same intent as specified while generating it. The slot annotations, however, are not readily obtained from the input slot types. We can make the slot annotations part of the output sequence by generating the slot label in BIO format in every alternate time step, which would be the slot label for the token generated in the previous time step. This enables the model to generate the slot annotations along with the paraphrase. An illustrative example is shown in Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 321, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 506, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 213, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1387, |
|
"end": 1395, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Paraphrase Generation Model", |
|
"sec_num": "3.1" |
|
}, |
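To make the conditional encoder input and the interleaved token/BIO-tag output concrete, the following is a minimal PyTorch sketch. The hyperparameter defaults mirror those reported in Section 4.2, but the class and argument names (e.g. ParaphraseGenerator, slot_ids) and the exact layer wiring are our illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ParaphraseGenerator(nn.Module):
    """Sketch: (language, intent, slot types) -> interleaved (token, BIO tag) sequence."""

    def __init__(self, n_slots, n_intents, n_langs, vocab_size,
                 d_model=128, n_heads=8, n_layers=3, d_ff=256,
                 dropout=0.1, max_len=128):
        super().__init__()
        self.slot_emb = nn.Embedding(n_slots, d_model)
        self.intent_emb = nn.Embedding(n_intents, d_model)
        self.lang_emb = nn.Embedding(n_langs, d_model)
        # The output vocabulary contains word tokens AND BIO slot tags, so the
        # decoder can emit a tag right after every generated token.
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)  # decoder side keeps positions
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, d_ff, dropout, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, d_ff, dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, slot_ids, intent_id, lang_id, tgt_ids):
        # Encoder input: slot embeddings plus broadcast intent and language embeddings.
        # No positional encoding here, since the slot types form a set, not a sequence.
        src = (self.slot_emb(slot_ids)
               + self.intent_emb(intent_id).unsqueeze(1)
               + self.lang_emb(lang_id).unsqueeze(1))
        memory = self.encoder(src)
        t = tgt_ids.size(1)
        positions = torch.arange(t, device=tgt_ids.device)
        tgt = self.tok_emb(tgt_ids) + self.pos_emb(positions)
        causal = torch.triu(torch.full((t, t), float("-inf"), device=tgt_ids.device), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(hidden)  # logits over word tokens and BIO tags
```

At inference time the decoder alternates between emitting a word token and the BIO tag for that token, so the paraphrase and its slot annotation can be read off from the even and odd output positions, respectively.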
|
{ |
|
"text": "Generating the output sequence token-by-token can be done by using greedy decoding where given learned model parameters \u03b8, the most likely token is picked at each decoding step as x t = argmax p \u03b8 (x t |x <t ). Such a generation process is deterministic. For our task of generating paraphrases, we are interested in generating diverse and novel utterances. Non-deterministic sampling methods such as top-k sampling has been used in related work (Fan et al., 2018; Welleck et al., 2020; Jolly et al., 2020) to achieve this. In top-k random sampling, we first scale the logits z w by using a temperature parameter \u03c4 before applying softmax.", |
|
"cite_spans": [ |
|
{ |
|
"start": 445, |
|
"end": 463, |
|
"text": "(Fan et al., 2018;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 485, |
|
"text": "Welleck et al., 2020;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 505, |
|
"text": "Jolly et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Techniques", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(x t = w|x <t ) = exp(z w /\u03c4 ) w \u2208V exp(z w /\u03c4 ) ,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Decoding Techniques", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where V is the decoder's vocabulary. Setting \u03c4 > 1 encourages the resulting probability distribution to be less spiky, thereby encouraging diverse choices during sampling. The top-k sampling restricts the size of the most likely candidate pool to k \u2264 |V | .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Techniques", |
|
"sec_num": "3.2" |
|
}, |
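A minimal sketch of the temperature-scaled top-k sampling in Eq. (1); the function name and the choice to renormalize only over the surviving top-k logits are our assumptions.

```python
import torch

def sample_next_token(logits: torch.Tensor, k: int = 10, tau: float = 1.0) -> int:
    """Pick the next output id by top-k random sampling with temperature tau.

    logits: 1-D tensor of unnormalized scores z_w over the decoder vocabulary V.
    """
    scaled = logits / tau                       # tau > 1 flattens the distribution
    top_vals, top_idx = torch.topk(scaled, k)   # restrict the candidate pool to k <= |V|
    probs = torch.softmax(top_vals, dim=-1)     # softmax over the surviving candidates
    choice = torch.multinomial(probs, num_samples=1)
    return int(top_idx[choice])

# Greedy decoding is the deterministic special case: int(torch.argmax(logits)).
```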
|
{ |
|
"text": "The generated paraphrases can be used to augment the existing training data. Since the training data we use is highly imbalanced, data augmentation might lead to disturbance in the original intent distribution. To ensure that the data augmentation process does not disturb the original intent distribution, we compute the number of samples to augment using the following constraint: the ratio of target intent to other intents for the target language should be the same as the ratio of target intent to other intents in the source language. Sometimes, using the above constraint results in a negligible number of samples for augmentation, in which cases we use a minimal number of samples (see experiments).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Balanced Augmentation", |
|
"sec_num": "3.3" |
|
}, |
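A sketch of the balanced-augmentation count under our reading of the constraint; the helper name and the fallback minimum of 20 (the minimum used in Section 4.2) are illustrative.

```python
def n_paraphrases_to_add(n_target_src: int, n_other_src: int,
                         n_target_tgt: int, n_other_tgt: int,
                         minimum: int = 20) -> int:
    """Number of generated paraphrases to add for the target intent in the target
    language so that its ratio to the other intents matches the source language."""
    source_ratio = n_target_src / n_other_src
    desired_target_count = source_ratio * n_other_tgt
    n_add = round(desired_target_count) - n_target_tgt
    # If the constraint yields a negligible (or negative) number, fall back to a minimum.
    return max(n_add, minimum)
```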
|
{ |
|
"text": "In addition to deciding how many paraphrases to augment, it is also crucial to decide which paraphrases to use. Preliminary experimental results showed that samping uniformly from all generated paraphrases does not lead to improvement over the baseline. Upon manual examination we found that not all the paraphrases belong to the desired target intent. To cope with that problem, we use the baseline downstream intent classification and slot labeling model, which is trained only on the existing data, to compute the likelihood of the generated paraphrases to belong to the target intent. We rank all the generated paraphrases based on these probabilities and select from the top of the pool for augmentation of the seed data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase Selection", |
|
"sec_num": "3.4" |
|
}, |
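The model-based selection step can be sketched as follows; intent_proba stands for the baseline downstream model's intent posterior and is a hypothetical callable, not an API of the paper's system.

```python
from typing import Callable, Dict, List

def select_paraphrases(candidates: List[str], target_intent: str,
                       intent_proba: Callable[[str], Dict[str, float]],
                       n_select: int) -> List[str]:
    """Rank generated paraphrases by the baseline model's probability of the
    target intent and keep the top of the pool for augmentation."""
    scored = [(intent_proba(utt).get(target_intent, 0.0), utt) for utt in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [utt for _, utt in scored[:n_select]]
```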
|
{ |
|
"text": "We evaluate our approach by simulating few-shot and zero-shot feature bootstrapping scenarios.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use the MultiATIS++ data (Xu et al., 2020) , a parallel IC/SL corpus that was created by translating the original English dataset. It covers a total of 9 languages: English, Hindi, Turkish, German, French, Portuguese, Spanish, Japanese and Chinese. The languages encompass a diverse set of language families: Indo-European, Sino-Tibetan, Japonic and Altaic.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 45, |
|
"text": "(Xu et al., 2020)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Choosing target intents To reduce the number of experiments, we only choose three different intents for simulating the feature bootstrapping scenario. The MultiATIS++ dataset is highly imbalanced in terms of intent frequencies. For instance, 74% of the English training data has the intent atis_flight and as many as 9 intents have less than 20 training samples. The trend is similar for the non-English languages. For choosing target intents for simulating the zero shot and few shot training data, we therefore consider the following three target intents: (a) atis_airfare, which is highly frequent, (b) atis_airline, which has medium frequency, and (c) atis_city which is scarce.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Preprocessing We remove the samples in the MultiATIS++ data for which the number of tokens and the number of slot values do not match. 1 We also only consider the first intent for the samples that have multiple intent annotations. We show the data sizes after preprocessing in Table 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 136, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 284, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Training setup To simulate the feature bootstrapping scenario, we consider only 20 samples (few shot setup) or no samples at all (zero shot setup) from the MultiATIS++ data for a specific target intent in a target language. 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Language setup We use English as the source language and consider 8 target languages (Hindi, Turkish, German, French, Portuguese, Spanish, Japanese, Chinese) simultaneously. This encourages the model parameters to be shared across all the 9 languages including the source language English. The purpose of this setup is to enable us to study the knowledge transfer across multiple target languages in addition to that from the source language. We train a single model for paraphrase generation on all the languages as well as a single multi-lingual downstream IC/SL model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Paraphrase generation training Since the training data is imbalanced, we balanced the training data by oversampling the intents to match the frequency of the most frequent intent. 3 For both the encoder and the decoder, the multi-head attention layers' hidden dimension was set to 128 and the position-wise feed forward layers' hidden dimension was set to 256. The number of encoder and decoder layers was set to 3 each. The number of heads was set to 8. Dropout of 0.1 was used in both the encoder and the decoder. The model parameters were initialized with Xavier initialization (Glorot and Bengio, 2010) . The model was trained using Adam optimizer (Kingma and Ba, 2014) with a learning rate of 5e-4 and a gradient clipping of 1. The training was stopped when the development loss did not improve for 5 epochs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 581, |
|
"end": 606, |
|
"text": "(Glorot and Bengio, 2010)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models and Training Details", |
|
"sec_num": "4.2" |
|
}, |
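The optimization settings above can be summarized in a short training-loop sketch; the data loaders and compute_loss are placeholders, and only the optimizer, clipping, and early-stopping values come from the text.

```python
import torch

def train(model, train_loader, dev_loader, compute_loss, patience=5):
    """Adam (lr 5e-4), gradient clipping at 1.0, early stopping on dev loss (patience 5)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
    best_dev, stale_epochs = float("inf"), 0
    while stale_epochs < patience:
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            loss = compute_loss(model, batch)
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
            optimizer.step()
        model.eval()
        with torch.no_grad():
            dev_loss = sum(compute_loss(model, b).item() for b in dev_loader)
        if dev_loss < best_dev:
            best_dev, stale_epochs = dev_loss, 0
        else:
            stale_epochs += 1
```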
|
{ |
|
"text": "Generating paraphrases For generating paraphrases in the target intent in the target language, we used the slots appearing in the existing training data in the target intent. We used greedy decoding and top-k sampling with k = 3, 5, 10 and \u03c4 = 1.0, 2.0. For a given input, we generated using the top-k random sampling three times with different random seeds. We finally combined all generations and ranked the candidates using the baseline downstream system's prediction probability. The number of paraphrases that are selected is determined as in 3.3, with 20 as the minimum.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models and Training Details", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Methods for comparison We compare our method against four alternatives: (a) Baseline: No data augmentation at all. The downstream model is trained using just the available seed examples for the target intent. (b) Oversampling: We oversample the samples per intent uniformly at random to match the size of the augmented training data using the proposed method. This is only applicable to the few shot setup since for the zero shot setup, there are no existing samples in the target intent in the target language to sample from. (c) CVAE seq2seq model: We generate paraphrases using the CVAE seq2seq model by Malandrakis et al. (2019). The original CVAE seq2seq model as proposed by Malandrakis et al. (2019) defines the set {domain, intent, slots} as the signature of an utterance and denotes the carrier phrases for a given signature to be paraphrases. These carrier phrases are then used to create input-output pairs for the CVAE seq2seq model training. Since the original formulation does not take into account the language of generation, we adapt the method for our case by defining the signature as the set {language, intent, slots}. We set the model's hidden dimension to 128, used the 100-dimensional GloVe embeddings (Pennington et al., 2014 ) pretrained on Wikipedia, and trained the model without freezing embeddings using early stopping with a patience of 5 epochs by monitoring the development loss.", |
|
"cite_spans": [ |
|
{ |
|
"start": 681, |
|
"end": 706, |
|
"text": "Malandrakis et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1224, |
|
"end": 1248, |
|
"text": "(Pennington et al., 2014", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models and Training Details", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Finally we generated 100 carrier phrases for each carrier phrase input in the target intent in the target language. Paraphrases were obtained by injecting the slot values to the generated carrier phrases. The pool of all paraphrases was sorted using the baseline downstream system's prediction probabilities. The CVAE seq2seq model was only applicable to the few shot setup since in the zero shot setup there are no existing carrier phrases in the target language in the target intent that can be used to sample from. (d) Machine translation: We augmented the translations generated from English using the MT+fastalign approach from the MultiATIS++ paper (Xu et al., 2020) . For the few shot setup, we added all the translated utterances except the ones that correspond to those utterances we already picked as the few shot samples. For the zero shot setup, we added all the translated utterances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 655, |
|
"end": 672, |
|
"text": "(Xu et al., 2020)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models and Training Details", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Unlike the paraphrase generation model training, we do not balance the simulated training data by oversampling based on intent. This choice was made to make sure that the original intent distribution was preserved for the downstream model training. We used the BERT base multilingual cased model (Devlin et al., 2019) 4 and added an intent head and a slot head on top for joint intent classification and slot labeling. Each head uses a hidden size of 256 and ReLU activation. The model was trained using Adam optimizer with a learning rate of 0.1. The training was stopped when the development semantic error rate (Su et al., 2018) did not improve for 3 epochs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 296, |
|
"end": 317, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 614, |
|
"end": 631, |
|
"text": "(Su et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Downstream training", |
|
"sec_num": null |
|
}, |
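A sketch of the downstream joint IC/SL model: multilingual BERT with an intent head over the pooled representation and a slot head over the token representations. The two-layer heads with hidden size 256 and ReLU follow the description above; the pooling choice and class name are our assumptions.

```python
import torch.nn as nn
from transformers import BertModel

class JointIntentSlotModel(nn.Module):
    """Multilingual BERT encoder with an intent head and a slot head on top."""

    def __init__(self, n_intents: int, n_slot_labels: int, head_dim: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-multilingual-cased")
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Sequential(
            nn.Linear(hidden, head_dim), nn.ReLU(), nn.Linear(head_dim, n_intents))
        self.slot_head = nn.Sequential(
            nn.Linear(hidden, head_dim), nn.ReLU(), nn.Linear(head_dim, n_slot_labels))

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)    # utterance-level intent
        slot_logits = self.slot_head(out.last_hidden_state)    # per-token slot labels
        return intent_logits, slot_logits
```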
|
{ |
|
"text": "We evaluate the quality of the generated paraphrases using the following metrics. Let S be the set of input slot types and G be the set of generated slot types. All retrieval score The all retrieval score r measures if all the input slots were retrieved in the generation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "r = 1 if |S \u2229 G| = |S| 0 otherwise (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Exact match The exact match score r measures if all the input slots and output slots exactly match (Malandrakis et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 125, |
|
"text": "(Malandrakis et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "r = 1 if S = G 0 otherwise (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Partial match The partial match score r measures if at least one output slot matches an input slot.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "r = 1 if |S \u2229 G| > 0 0 otherwise (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "F1 slot score The F1 slot score F 1 measures the set similarity between S and G using precision and recall which are defined for sets as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "precision = |S \u2229 G| |G| , recall = |S \u2229 G| |S|", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Jaccard index Jaccard index measures the set similarity between S and G as their intersection size divided by the union size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Novelty Let P be the set of paraphrases generated from a base utterance u.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "novelty = 1 |P | u \u2208P 1 \u2212 BLEU4(u, u )", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Diversity The diversity is computed using the generated paraphrases P .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "diversity = u \u2208P,u \u2208P,u =u 1 \u2212 BLEU4(u , u ) |P | \u00d7 (|P | \u2212 1)", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
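A sketch of the slot-overlap metrics (Eqs. 2-5 and the Jaccard index) and the BLEU-based novelty and diversity scores (Eqs. 6-7); whitespace tokenization and NLTK's smoothed sentence-level BLEU-4 are our choices, not necessarily the authors'.

```python
from itertools import permutations
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

_smooth = SmoothingFunction().method1

def bleu4(ref: str, hyp: str) -> float:
    return sentence_bleu([ref.split()], hyp.split(), smoothing_function=_smooth)

def slot_metrics(S: set, G: set) -> dict:
    """Overlap between input slot types S and generated slot types G."""
    inter = len(S & G)
    precision = inter / len(G) if G else 0.0
    recall = inter / len(S) if S else 0.0
    return {
        "all_retrieval": float(inter == len(S)),   # Eq. (2)
        "exact_match": float(S == G),              # Eq. (3)
        "partial_match": float(inter > 0),         # Eq. (4)
        "f1_slot": (2 * precision * recall / (precision + recall)
                    if precision + recall else 0.0),  # from Eq. (5)
        "jaccard": inter / len(S | G) if S | G else 0.0,
    }

def novelty(base: str, paraphrases: list) -> float:      # Eq. (6)
    return sum(1 - bleu4(base, p) for p in paraphrases) / len(paraphrases)

def diversity(paraphrases: list) -> float:               # Eq. (7)
    pairs = list(permutations(paraphrases, 2))            # |P| * (|P| - 1) ordered pairs
    return sum(1 - bleu4(a, b) for a, b in pairs) / len(pairs)
```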
|
{ |
|
"text": "Language detection score We are interested in quantifying if a generated paraphrase is in the target language. We use langdetect 5 to compute p(lang = target lang). Higher scores denote better language generation. Table 5 : Downstream slot labeling F1 scores (%). Each score shown is the average score of 10 runs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 221, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "5 Experimental results", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For both the few shot and zero shot setups, the paraphrases used for intrinsic evaluation are generated in the target intent and the target language only. For the top-k sampling based generation, we generate for each input three times with different random seeds and compute novelty and diversity scores. Table 2 shows intrinsic evaluation results for different generation methods. For the few shot setup, the all retrieval, exact match, partial match, F1 slot and Jaccard index scores decrease upon increasing top-k and temperature. The highest scores for the above metrics are obtained for the greedy generation, which indicates that the generated slot types are most similar to the input slot types in that case. However, it is the opposite for the novelty and diversity metrics where the scores are higher with larger top-k and temperatures. For the zero shot setup, the overall trend is similar to the few shot setup. The slot similarity based metrics are lower in general, which indicates that even as little as 20 samples in the few shot setup improve the generation of desired slots. The novelty scores for the zero shot setup are 1 as we would expect.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 312, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Intrinsic Evaluation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In Table 3 , we show that the intrinsic evaluation results using the proposed approach are consistently better than the CVAE seq2seq paraphrase generation model (Malandrakis et al., 2019) . The language detection score varies across languages, which may be due to the vocabulary overlap between languages, e.g., San Francisco appears in both English and German utterances. Interestingly we also observe code switching, i.e. mixedlanguage generations, while using our approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 187, |
|
"text": "(Malandrakis et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Intrinsic Evaluation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We evaluate the downstream intent classification using accuracy and the slot labeling using F1 score. Since we are interested in measuring the variation in scores for the target intents, we only report the scores for the test samples in the target intents in Tables 4 and 5. We run each downstream training experiment 10 times and report the mean scores for each language and also the average across languages in the AVG column in Tables 4 and 5. We are also interested in tracking the scores for the test samples having intents other than the target intents since we need to ensure that the scores on the other intents does not go down. We found that the effect on the scores (both intent classification and slot labeling) for the other intents is negligible using paraphrasing and other methods. 6 In Tables 4 and 5, our paraphrasing results outperform the baseline scores on average. In the few shot setup, our paraphrasing approach outperforms the CVAE seq2seq approach in 6 (DE, ES, FR, HI, JA, ZH) out of 8 languages in intent classification and overall obtains an improvement of 1.9% intent classification accuracy across all target languages. Both oversampling and MT approaches are competitive. Oversampling performs the best for JA whereas MT performs the best for ES and HI. Our paraphrasing approach results in the best intent classification scores overall (78%). In terms of slot F1 scores, we see mixed results with no clear best method (baseline, oversampling and CVAE all result in 87.6% F1 score). Notably, the MT approach results in the lowest overall slot F1 score of just 84.8% on average. In the zero shot setup, the MT approach outperforms our paraphrasing approach by a large margin in intent classification (62.5%). However we note that the paraphrasing approach requires no dependencies on other models or other data, unlike the MT approach which requires a parallel corpus to train the MT model. In terms of slot F1 scores, our paraphrasing approach and the baseline approach both result in almost similar overall scores (85.5% and 85.4%), both higher than the MT approach. The lower slot F1 scores using the MT approach in few and zero shot setups indicate that the fast align method to align slots in source and translation might result in noisy training data affecting the SL model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 798, |
|
"end": 799, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Downstream Evaluation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Paraphrases generated in different languages for a given input are shown in Table 6 . The intent is airline and the slots are fromloc.city_name for columbus and toloc.city_name for minneapolis. For this intent and the slots, the generated paraphrase in German (translated to English) is Show me all the airlines that fly from Toronto to Boston. The desired intent, that is airline is realized in the gener-ated paraphrase. Additionally, Toronto and Boston are the slot values respectively for the slot types fromloc.city_name and toloc.city_name. For Spanish, the generated paraphrase (translated to English) is Which Airlines Fly from Atlanta to Philadelphia. The airline intent is realized in the generated paraphrase and also Atlanta and Philadelphia are the slot values produced associated with the desired slot types. As illustrated by the examples, the model is free to pick a specific slot value during generation, leading to variations across languages, but all are consistent with the slot type.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 83, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Examples", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In this paper, we proposed a multilingual paraphrase generation model that can be used for feature bootstrapping with or without seed data in the target language. In addition to generating a paraphrase, the model also generates the associated slot labels, enabling the generation to be used directly for data augmentation to existing training data. Our method is language agnostic and scalable, with no dependencies on pre-trained models or additional data. We validate our method using experiments on the MultiATIS++ dataset containing utterances spanning 9 languages. Intrinsic evaluation shows that paraphrases generated using our approach have higher novelty and diversity in comparison to CVAE seq2seq based paraphrase generation. Additionally, downstream evaluation shows that using the generated paraphrases for data augmentation results in improvements over baseline and related techniques in a wide range of languages and setups. To the best of our knowledge, this is the first successful exploration of generating paraphrases for SLU in a cross-lingual setup.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In the future, we would like to explore strategies to exploit monolingual data in the target languages to further refine the paraphrase generation. We would also like to leverage pre-trained multilingual text-to-text models such as mT5 (Xue et al., 2020) for multilingual paraphrase generation in the dialog system domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 254, |
|
"text": "(Xue et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "This leads to removal of 0.6% of the total samples. 2 For cases that have less than 20 samples to pick from, we consider all the samples which are available.3 Experiments with the original imbalanced training data resulted in generating paraphrases which belongs to one of the frequent intents, even if the desired intent was one with a low frequency in the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/google-research/ bert/blob/master/multilingual.md", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/Mimino666/ langdetect", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The maximum drop in score was less than 1% absolute.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank our anonymous reviewers for their thoughtful comments and suggestions that improved the final version of this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Generating sentences from a continuous space", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vilnis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafal", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Jozefowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--21", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K16-1002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, An- drew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Con- ference on Computational Natural Language Learn- ing, pages 10-21, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Few-shot NLG with pre-trained language model", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harini", |
|
"middle": [], |
|
"last": "Eavani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenhu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinyin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"Yang" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "183--190", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.18" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020. Few-shot NLG with pre-trained language model. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 183-190, Online. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Paraphrase generation for semi-supervised learning in NLU", |
|
"authors": [ |
|
{ |
|
"first": "Eunah", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Campbell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--54", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-2306" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eunah Cho, He Xie, and William M. Campbell. 2019. Paraphrase generation for semi-supervised learning in NLU. In Proceedings of the Workshop on Meth- ods for Optimizing and Evaluating Neural Language Generation, pages 45-54, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Cross-lingual transfer learning with data selection for large-scale spoken language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Quynh", |
|
"middle": [], |
|
"last": "Do", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Judith", |
|
"middle": [], |
|
"last": "Gaspers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1455--1460", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1153" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quynh Do and Judith Gaspers. 2019. Cross-lingual transfer learning with data selection for large-scale spoken language understanding. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1455-1460, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Leveraging user paraphrasing behavior in dialog systems to automatically collect annotations for long-tail utterances", |
|
"authors": [ |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Falke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Boese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniil", |
|
"middle": [], |
|
"last": "Sorokin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Tirkaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Lehnen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics: Industry Track", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "21--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tobias Falke, Markus Boese, Daniil Sorokin, Caglar Tirkaz, and Patrick Lehnen. 2020. Leveraging user paraphrasing behavior in dialog systems to automat- ically collect annotations for long-tail utterances. In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track, pages 21-32, Online. International Committee on Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Hierarchical neural story generation", |
|
"authors": [ |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Dauphin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "889--898", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1082" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Selecting machine-translated data for quick bootstrapping of a natural language understanding system", |
|
"authors": [ |
|
{ |
|
"first": "Judith", |
|
"middle": [], |
|
"last": "Gaspers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Penny", |
|
"middle": [], |
|
"last": "Karanasou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajen", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "137--144", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-3017" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Judith Gaspers, Penny Karanasou, and Rajen Chatter- jee. 2018. Selecting machine-translated data for quick bootstrapping of a natural language under- standing system. In Proceedings of the 2018 Con- ference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 137-144, New Orleans -Louisiana. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Understanding the difficulty of training deep feedforward neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Glorot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "249--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neu- ral networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249-256, Chia Laguna Resort, Sardinia, Italy. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A deep generative framework for paraphrase generation", |
|
"authors": [ |
|
{ |
|
"first": "Ankush", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Agarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prawaan", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piyush", |
|
"middle": [], |
|
"last": "Rai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Data-efficient paraphrase generation to bootstrap intent classification and slot labeling for new features in task-oriented dialog systems", |
|
"authors": [ |
|
{ |
|
"first": "Shailza", |
|
"middle": [], |
|
"last": "Jolly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Falke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Tirkaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniil", |
|
"middle": [], |
|
"last": "Sorokin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics: Industry Track", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shailza Jolly, Tobias Falke, Caglar Tirkaz, and Daniil Sorokin. 2020. Data-efficient paraphrase generation to bootstrap intent classification and slot labeling for new features in task-oriented dialog systems. In Pro- ceedings of the 28th International Conference on Computational Linguistics: Industry Track, pages 10-20, Online. International Committee on Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "the 3rd International Conference for Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Cite arxiv:1412.6980Comment: Published as a confer- ence paper at the 3rd International Conference for Learning Representations, San Diego, 2015.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Paraphrase generation with deep reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Zichao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Shang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3865--3878", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1421" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase generation with deep reinforce- ment learning. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing, pages 3865-3878, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Decomposable neural paraphrase generation", |
|
"authors": [ |
|
{ |
|
"first": "Zichao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Shang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3403--3414", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1332" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zichao Li, Xin Jiang, Lifeng Shang, and Qun Liu. 2019. Decomposable neural paraphrase generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3403-3414, Florence, Italy. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Recent neural methods on slot filling and intent classification for task-oriented dialogue systems: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Louvan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernardo", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "480--496", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel Louvan and Bernardo Magnini. 2020. Re- cent neural methods on slot filling and intent clas- sification for task-oriented dialogue systems: A sur- vey. In Proceedings of the 28th International Con- ference on Computational Linguistics, pages 480- 496, Barcelona, Spain (Online). International Com- mittee on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Controlled text generation for data augmentation in intelligent artificial agents", |
|
"authors": [ |
|
{ |
|
"first": "Nikolaos", |
|
"middle": [], |
|
"last": "Malandrakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minmin", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anuj", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuyang", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhishek", |
|
"middle": [], |
|
"last": "Sethi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angeliki", |
|
"middle": [], |
|
"last": "Metallinou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "90--98", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-5609" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikolaos Malandrakis, Minmin Shen, Anuj Goyal, Shuyang Gao, Abhishek Sethi, and Angeliki Met- allinou. 2019. Controlled text generation for data augmentation in intelligent artificial agents. In Pro- ceedings of the 3rd Workshop on Neural Generation and Translation, pages 90-98, Hong Kong. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Leveraging User Engagement Signals For Entity Labeling in a Virtual Assistant", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Kothari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1909, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kothari, and Jason Williams. 2019. Leveraging User Engagement Signals For Entity Labeling in a Virtual Assistant. arXiv, 1909.09143.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "GloVe: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Language models are unsupervised multitask learners", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rewon", |
|
"middle": [], |
|
"last": "Child", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Amodei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A re-ranker scheme for integrating large scale nlu models", |
|
"authors": [ |
|
{ |
|
"first": "Chengwei", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shankar", |
|
"middle": [], |
|
"last": "Ananthakrishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Spyros", |
|
"middle": [], |
|
"last": "Matsoukas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "2018 IEEE Spoken Language Technology Workshop (SLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "670--676", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/SLT.2018.8639519" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chengwei Su, Rahul Gupta, Shankar Ananthakrish- nan, and Spyros Matsoukas. 2018. A re-ranker scheme for integrating large scale nlu models. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 670-676.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Spoken Language Understanding: Systems for Extracting Semantic Information from Speech", |
|
"authors": [ |
|
{ |
|
"first": "Gokhan", |
|
"middle": [], |
|
"last": "Tur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Renato", |
|
"middle": [ |
|
"De" |
|
], |
|
"last": "Mori", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gokhan Tur and Renato De Mori. 2011. Spoken Lan- guage Understanding: Systems for Extracting Se- mantic Information from Speech. John Wiley and Sons.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A survey of joint intent detection and slotfilling models in natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Weld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Poon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Weld, X. Huang, S. Long, J. Poon, and S. C. Han. 2021. A survey of joint intent detection and slot- filling models in natural language understanding. arXiv, 2101.08091.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Neural text generation with unlikelihood training", |
|
"authors": [ |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Welleck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilia", |
|
"middle": [], |
|
"last": "Kulikov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Roller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Dinan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Di- nan, Kyunghyun Cho, and Jason Weston. 2020. Neu- ral text generation with unlikelihood training. In International Conference on Learning Representa- tions.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Composed variational natural language generation for few-shot intents", |
|
"authors": [ |
|
{ |
|
"first": "Congying", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3379--3388", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.findings-emnlp.303" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Congying Xia, Caiming Xiong, Philip Yu, and Richard Socher. 2020. Composed variational natural lan- guage generation for few-shot intents. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3379-3388, Online. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "End-to-end slot alignment and recognition for crosslingual NLU", |
|
"authors": [ |
|
{ |
|
"first": "Weijia", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Batool", |
|
"middle": [], |
|
"last": "Haider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saab", |
|
"middle": [], |
|
"last": "Mansour", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5052--5063", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.410" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weijia Xu, Batool Haider, and Saab Mansour. 2020. End-to-end slot alignment and recognition for cross- lingual NLU. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 5052-5063, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td/><td>how O much</td><td/></tr><tr><td/><td/><td colspan=\"2\">Transformer Encoder</td><td/><td>Transformer Decoder</td></tr><tr><td>Slot embeddings</td><td><s></td><td>fromloc.city_name</td><td>toloc.city_name</td><td></s></td><td>Positional Encoding</td></tr><tr><td/><td>+</td><td>+</td><td>+</td><td>+</td><td/></tr><tr><td>Intent embeddings</td><td>airfare</td><td>airfare</td><td>airfare</td><td>airfare</td><td/></tr><tr><td/><td>+</td><td>+</td><td>+</td><td>+</td><td>Outputs (Shifted right)</td></tr><tr><td>Language embeddings</td><td>EN</td><td>EN</td><td>EN</td><td>EN</td><td/></tr></table>", |
|
"text": "O is O a O flight O from O washington B-fromloc.city_name to O montreal B-toloc.city_nameFigure 1: Overall architecture of the multilingual paraphrase generation model. The slot, intent and language embeddings are added at the slot level to obtain representations to input to the encoder. The <s> and </s> tags are necessary as they enable handling cases where we want to generate paraphrases having no associated slots. The decoder generates the slot labels along with the paraphrase tokens.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "MultiATIS++ data statistics.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td>Few shot</td><td/><td/><td/><td>Zero shot</td><td/><td/></tr><tr><td>Language</td><td colspan=\"2\">Lang. detection score</td><td colspan=\"2\">Novelty</td><td colspan=\"2\">Diversity</td><td colspan=\"3\">Lang. detection score Novelty Diversity</td></tr><tr><td/><td>CVAE</td><td>Ours</td><td colspan=\"4\">CVAE Ours CVAE Ours</td><td>Ours</td><td/><td/></tr><tr><td>DE</td><td>0.69</td><td>0.95</td><td>0.43</td><td>0.97</td><td>0.33</td><td>0.81</td><td>0.97</td><td>1</td><td>0.85</td></tr><tr><td>ES</td><td>0.71</td><td>0.91</td><td>0.48</td><td>0.98</td><td>0.44</td><td>0.82</td><td>0.93</td><td>1</td><td>0.83</td></tr><tr><td>FR</td><td>0.78</td><td>0.94</td><td>0.47</td><td>0.98</td><td>0.36</td><td>0.82</td><td>0.95</td><td>1</td><td>0.85</td></tr><tr><td>HI</td><td>0.69</td><td>0.97</td><td>0.5</td><td>0.97</td><td>0.28</td><td>0.81</td><td>0.97</td><td>1</td><td>0.81</td></tr><tr><td>JA</td><td>0.83</td><td>0.96</td><td>0.52</td><td>1</td><td>0.39</td><td>0.85</td><td>1</td><td>1</td><td>0.85</td></tr><tr><td>PT</td><td>0.5</td><td>0.75</td><td>0.55</td><td>0.97</td><td>0.38</td><td>0.81</td><td>0.86</td><td>1</td><td>0.85</td></tr><tr><td>TR</td><td>0.01</td><td>0.34</td><td>0.25</td><td>0.99</td><td>0.22</td><td>0.85</td><td>0.53</td><td>1</td><td>0.84</td></tr><tr><td>ZH</td><td>0.57</td><td>0.68</td><td>0.61</td><td>1</td><td>0.52</td><td>0.85</td><td>0.62</td><td>1</td><td>0.85</td></tr></table>", |
|
"text": "Intrinsic evaluation scores for different generation methods in few shot and zero shot scenarios.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Intrinsic evaluation scores for different target languages in few shot and zero shot scenarios.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Method</td><td>DE</td><td>ES</td><td>FR</td><td>Slot labeling HI JA</td><td>PT</td><td>TR</td><td>ZH</td><td>AVG.</td></tr><tr><td/><td>Baseline</td><td colspan=\"7\">98.0 85.0 91.3 74.6 89.6 92.2 79.6 90.6</td><td>87.6</td></tr><tr><td/><td colspan=\"8\">Oversampling 96.2 84.6 91.7 76.9 89.9 90.0 81.3 90.3</td><td>87.6</td></tr><tr><td>Few shot</td><td>CVAE</td><td colspan=\"7\">97.3 83.5 90.0 75.8 90.8 91.6 82.1 89.6</td><td>87.6</td></tr><tr><td/><td>MT</td><td colspan=\"7\">95.0 78.8 90.9 73.0 90.8 82.9 77.9 88.8</td><td>84.8</td></tr><tr><td/><td>Paraphrasing</td><td colspan=\"7\">97.2 80.8 89.7 76.2 90.2 91.3 78.6 91.6</td><td>86.9</td></tr><tr><td/><td>Baseline</td><td colspan=\"7\">93.9 84.5 89.3 72.5 89.1 88.7 77.3 87.8</td><td>85.4</td></tr><tr><td>Zero shot</td><td>MT</td><td colspan=\"7\">92.0 79.9 88.5 73.3 92.1 82.1 76.2 88.7</td><td>84.1</td></tr><tr><td/><td>Paraphrasing</td><td colspan=\"7\">93.1 84.1 90.5 70.3 89.5 91.5 77.6 87.2</td><td>85.5</td></tr></table>", |
|
"text": "Downstream intent classification accuracies (%). Each score shown is the average score of 10 runs.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF8": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>JA</td><td>\u30c7\u30f3\u30d0\u30fc \u304b\u3089 \u30d4\u30c3\u30c4\u30d0\u30fc\u30b0 \u307e\u3067\u98db\u3093\u3067\u3044\u308b\u822a\u7a7a\u4f1a\u793e\u3092\u6559\u3048\u3066</td></tr><tr><td>PT</td><td>Mostre todas companhias a\u00e9reas voam de Denver</td></tr><tr><td>TR</td><td>hangi havayolu boston pittsburgh ' a ucar</td></tr><tr><td>ZH</td><td>\u4ece \u4e39\u4f5b \u5230 \u65e7\u91d1\u5c71 \u822a\u73ed\u7684\u822a\u7a7a\u516c\u53f8</td></tr></table>", |
|
"text": "Input airline and flight number from columbus to minneapolis DE Zeige mir alle Fluglinien, die von Toronto nach Boston fliegen ES Qu\u00e9 aerol\u00edneas vuelan desde Atlanta hasta Filadelfia FR Quelles compagnies volent de Toronto \u00e0 San Francisco HI \u091c\u094b \u090f\u092f\u0930\u0932\u093e\u0907\u0928 \u0921\u0947 \u0930 \u0938\u0947 \u0905\u091f\u0932\u093e\u0902 \u091f\u093e \u0924\u0915 \u0909\u095c\u093e\u0928 \u092d\u0930\u0924\u0940 \u0939\u0948", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Examples of paraphrases generated using the multilingual paraphrase generation model for airline and slots fromloc and toloc. The paraphrases shown are cherry picked from a set of generations.", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |