{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:55:44.714597Z"
},
"title": "Few-Shot Intent Classification by Gauging Entailment Relationship Between Utterance and Semantic Label",
"authors": [
{
"first": "Jin",
"middle": [],
"last": "Qu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Salesforce Research",
"location": {
"settlement": "Palo Alto",
"region": "CA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Salesforce Research",
"location": {
"settlement": "Palo Alto",
"region": "CA",
"country": "USA"
}
},
"email": "k.hashimoto@salesforce.com"
},
{
"first": "Wenhao",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Salesforce Research",
"location": {
"settlement": "Palo Alto",
"region": "CA",
"country": "USA"
}
},
"email": "wenhao.liu@salesforce.com"
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Salesforce Research",
"location": {
"settlement": "Palo Alto",
"region": "CA",
"country": "USA"
}
},
"email": "cxiong@salesforce.com"
},
{
"first": "Yingbo",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Salesforce Research",
"location": {
"settlement": "Palo Alto",
"region": "CA",
"country": "USA"
}
},
"email": "yingbo.zhou@salesforce.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Zhang et al. (2020) proposed to formulate fewshot intent classification as natural language inference (NLI) between query utterances and examples in the training set. The method is known as discriminative nearest neighbor classification or DNNC. Inspired by this work, we propose to simplify the NLI-style classification pipeline to be the entailment prediction on the utterance-semantic-label-pair (USLP). The semantic information in the labels can thus been infused into the classification process. Compared with DNNC, our proposed method is more efficient in both training and serving since it is based upon the entailment between query utterance and labels instead of all the training examples. The DNNC method requires more than one example per intent while the USLP approach does not have such constraint. In the 1-shot experiments on the CLINC150 (Larson et al., 2019) dataset, the USLP method outperforms traditional classification approach by >20 points (in-domain accuracy). We also find that longer and semantically meaningful labels tend to benefit model performance, however, the benefit shrinks as more training data is available.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Zhang et al. (2020) proposed to formulate fewshot intent classification as natural language inference (NLI) between query utterances and examples in the training set. The method is known as discriminative nearest neighbor classification or DNNC. Inspired by this work, we propose to simplify the NLI-style classification pipeline to be the entailment prediction on the utterance-semantic-label-pair (USLP). The semantic information in the labels can thus been infused into the classification process. Compared with DNNC, our proposed method is more efficient in both training and serving since it is based upon the entailment between query utterance and labels instead of all the training examples. The DNNC method requires more than one example per intent while the USLP approach does not have such constraint. In the 1-shot experiments on the CLINC150 (Larson et al., 2019) dataset, the USLP method outperforms traditional classification approach by >20 points (in-domain accuracy). We also find that longer and semantically meaningful labels tend to benefit model performance, however, the benefit shrinks as more training data is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many methods have been considered for few-shot intent classification. A simple but often effective approach is to simply generate more data through data augmentation. Zou, 2019 and Kumar et al., 2019 explored data augmentation at token-and feature-level to boost model performance. Meta-learning has also been studied extensively for few-shot learning. For instance, Induction Network (Geng et al., 2019) tried to learn general class representations via episode-based meta training and predict utterance labels based on the relation score between the query utterance and classes. Furthermore, large-scale pre-trained language models are often employed to mitigate the lack of annotated data for the target task. Schick and Sch\u00fctze, 2021 leveraged pre-trained RoBERTa (Liu et al., 2019) and XLM-R (Conneau et al., 2020 ) to learn to generate task descriptions on small labeled datasets. They then use the trained models to produce descriptions and soft labels on large, task-specific unlabeled datasets and use them to train classifier. Although this approach has been proven to be effective, it requires extra unlabeled data and additional human supervision on description generation task. DNNC (Zhang et al., 2020) reformulates few-shot text classification as NLI-style pairwise comparison between training example and query utterance. However, DNNC requires at least two examples per intent for training and has to make M \u00d7N (M: number of intents; N: number of training examples per intent) pairwise comparisons for each classification. Along the line of NLI-based classification, Yin et al. (2019) explored to leverage short semantic labels. However, this work is limited to zero-short setting and doesn't provide extensive analysis on how semantic information in labels affects model performance.",
"cite_spans": [
{
"start": 167,
"end": 180,
"text": "Zou, 2019 and",
"ref_id": "BIBREF15"
},
{
"start": 181,
"end": 199,
"text": "Kumar et al., 2019",
"ref_id": "BIBREF6"
},
{
"start": 385,
"end": 404,
"text": "(Geng et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 767,
"end": 785,
"text": "(Liu et al., 2019)",
"ref_id": null
},
{
"start": 796,
"end": 817,
"text": "(Conneau et al., 2020",
"ref_id": "BIBREF1"
},
{
"start": 1195,
"end": 1215,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 1583,
"end": 1600,
"text": "Yin et al. (2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the DNNC work ignores one valuable and readily available supervision in the training data, the semantics in the labels. Our work is largely motivated by the hypothesis that semantic labels may carry valuable information about the intents and could benefit few-shot learning. There are prior works exploring how to leverage semantic labels in NLP tasks. For examples, Hou et al. (2020b) has proposed to improve the Prototypical Network (Snell et al., 2017) by directly embedding semantic labels; Hou et al. (2020a) has tried to use semantic information in labels for few-shot slot tagging. To our knowledge, however, there is no known work that has explored to leverage semantic labels for NLI-style intent classification. Neither has any work been done to study how model per-formance changes with regard to the interplay of data augmentation, different labeling, and number of training examples.",
"cite_spans": [
{
"start": 376,
"end": 394,
"text": "Hou et al. (2020b)",
"ref_id": null
},
{
"start": 444,
"end": 464,
"text": "(Snell et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 504,
"end": 522,
"text": "Hou et al. (2020a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based upon DNNC, our proposed method, utterance-semantic-label-pair (USLP), also leverages NLI-style classification. Instead of computing the entailment relationship between query sentence and the example sentences in the support set, we use the model to gauge the entailment relationship between query text and semantic labels. The semantic information in the labels can be perfectly infused into the classification process in this way. The pairwise entailment prediction is also reduced to M times per classification compared with the DNNC's M \u00d7N. Figure 1 provides a few examples to illustrate the difference between USLP and DNNC.",
"cite_spans": [],
"ref_spans": [
{
"start": 550,
"end": 558,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following 1-shot experiments on the CLINC150 (Larson et al., 2019 ) dataset, we show that the USLP method outperforms the standard classification approach over 20 points with respect to in-domain accuracy. It is noteworthy that the predecessor of USLP, DNNC requires more than one example per intent for training. Although DNNC could do self-entailment training in 1-shot setting, our preliminary results show that the in-domain accuracy of multiple runs is extremely low (best result is below 20 from multiple runs). We also show that data augmentation, longer and more descriptive labeling, and NLI pre-training could boost model performance in few shot setting. However, as more training data is available, the efficacy of these performance boosters tends to shrink or even becomes negative. Our contributions can be summarized in two fold: 1, we proposed a new intent classification pipeline, USLP, and showed its effectiveness especially in 1-shot setting; 2, we studied how data augmentations, different labeling methods, and NLI pre-training might impact model performance in different few shot scenarios.",
"cite_spans": [
{
"start": 52,
"end": 72,
"text": "(Larson et al., 2019",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Natural Language Inference, or NLI, is a fundamental NLP task that aims to identify the relationship between a premise and a hypothesis. The relationship can be binary, (entailment and nonentailment) or ternary (entailment, contradiction, and neutral). Pre-trained transformer models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have achieved promising results on NLI tasks. Here the NLI task is treated as a textual sequence classification problem, where the premise and hypothesis sentences are concatenated as [CLS] , premise, [SEP ], hypothesis, [SEP ] (depending on the tokenizer, the concatenated text might be slightly different) and fed into the model. The last hidden state of the [CLS] token is commonly used for classification.",
"cite_spans": [
{
"start": 297,
"end": 318,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 331,
"end": 349,
"text": "(Liu et al., 2019)",
"ref_id": null
},
{
"start": 534,
"end": 539,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method 2.1 Natural Language Inference",
"sec_num": "2"
},
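The pair-encoding scheme described above can be sketched with the Hugging Face transformers library. This is a minimal illustration, not the paper's training code: the checkpoint name, the binary head, and the assumption that logit index 1 is the entailment class are all placeholders.

```python
# Minimal sketch of NLI-style sequence-pair classification. The head of
# a freshly loaded roberta-base is randomly initialized; it only becomes
# meaningful after fine-tuning on entailment pairs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # binary: non-entailment vs. entailment
)

premise = "how do i move money into my savings account"   # query utterance
hypothesis = "transfer money"                              # semantic label

# The tokenizer joins the pair with the model's special tokens,
# e.g. <s> premise </s></s> hypothesis </s> for RoBERTa.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # head sits on the [CLS]/<s> state
entail_prob = torch.softmax(logits, dim=-1)[0, 1].item()  # index 1 assumed entailment
print(f"entailment probability: {entail_prob:.3f}")
```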
{
"text": "The utterance-semantic-label-pair (USLP) approach builds on top of NLI framework as aforementioned. In USLP, utterances in training data are treated as premise while semantic labels are considered as hypothesis. We use binary entailment relationship for USLP, namely entailment and nonentailment. During training, an utterance-label pair is treated as a positive or entailment example if the label is the assigned intent for the utterance. Similarly, if the label is not the right intent label for the utterance, the pair is considered as a negative or non-entailment example. Although the USLP method does not necessarily require intent labels to have semantic meaning, detailed and semantically meaningful labels can benefit in-domain classification, which will be demonstrated in the following experiments. The DNNC (Zhang et al., 2020) method is also based upon NLI-style classification, the major difference is, it predicts the entailment relationship between the query utterance and examples in the training set. We provide Figure 1 to compare the two methods with more details.",
"cite_spans": [
{
"start": 819,
"end": 839,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1030,
"end": 1038,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Utterance-Semantic-Label-Pair",
"sec_num": "2.2"
},
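A minimal sketch of the positive/negative pair construction described above; the function and field names are illustrative, not from the authors' code.

```python
# Build USLP training pairs: (utterance, gold label) -> positive/entailment;
# (utterance, any other label) -> negative/non-entailment.
def build_uslp_pairs(examples, all_labels):
    """examples: list of (utterance, gold_label) tuples."""
    positives, negatives = [], []
    for utterance, gold in examples:
        positives.append((utterance, gold, 1))           # entailment
        for label in all_labels:
            if label != gold:
                negatives.append((utterance, label, 0))  # non-entailment
    return positives, negatives

examples = [("i want to send money abroad", "transfer money")]
labels = ["transfer money", "check balance", "report lost card"]
pos, neg = build_uslp_pairs(examples, labels)
print(len(pos), len(neg))  # 1 positive, 2 negatives
```

Note that the pair count grows with the label set rather than with the support set, which is what keeps USLP at M comparisons per query instead of DNNC's M × N.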
{
"text": "For inference, we first generate all the possible query utterance and label pair and compute the entailment probability scores. The pair with the highest score has the predicted label for the utterance. To accommodate out-of-scope (OOS) prediction, we can either treat it same as an additional intent class like the other intent labels, or set up a threshold T , if the maximum entailment probability score is over T , we assign the corresponding label as the prediction, otherwise we assign OOS as the prediction for the query utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Semantic-Label-Pair",
"sec_num": "2.2"
},
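The threshold-based OOS logic reads naturally as a few lines of code. Below is a sketch under the assumption that `entailment_score` wraps the trained model's entailment probability for a single utterance-label pair; the names are hypothetical.

```python
# Score every (utterance, label) pair and fall back to OOS when even the
# best pair scores below the threshold T (0.01 in the paper's experiments).
def classify(utterance, labels, entailment_score, T=0.01):
    scores = {label: entailment_score(utterance, label) for label in labels}
    best_label = max(scores, key=scores.get)
    return best_label if scores[best_label] > T else "oos"

# Dummy scorer for illustration: everything below T -> predicts "oos".
pred = classify("gibberish input", ["transfer money", "check balance"],
                lambda u, l: 0.005)
print(pred)  # "oos"
```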
{
"text": "3.1 Datasets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "To unleash the full potential of transformer model on NLI task, we follow the data processing and training pipeline provided by Zhang et al. (2020) to combine three NLI corpus (SNLI (Bowman (Williams et al., 2018) , and WNLI (Levesque, 2011)) from the GLUE benchmark (Wang et al., 2018) and use them for NLI pre-training.",
"cite_spans": [
{
"start": 128,
"end": 147,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF19"
},
{
"start": 190,
"end": 213,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 267,
"end": 286,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General NLI corpus for pre-training",
"sec_num": "3.1.1"
},
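A rough sketch of pooling the three corpora into one binary entailment dataset with the Hugging Face datasets library; this approximates, rather than reproduces, the pipeline of Zhang et al. (2020), and the label mappings below are assumptions about the public corpus schemas.

```python
# Combine SNLI, MNLI, and WNLI into a single binary (entailment vs.
# non-entailment) corpus for NLI pre-training.
from datasets import Value, concatenate_datasets, load_dataset

snli = load_dataset("snli", split="train")
mnli = load_dataset("glue", "mnli", split="train").remove_columns("idx")
wnli = load_dataset("glue", "wnli", split="train").remove_columns("idx")
wnli = wnli.rename_columns({"sentence1": "premise", "sentence2": "hypothesis"})

# SNLI/MNLI use 0=entailment, 1=neutral, 2=contradiction (SNLI also has
# -1 for unlabeled pairs); WNLI already uses 1=entailment, 0=not.
def binarize(ex):
    return {"label": 1 if ex["label"] == 0 else 0}

snli = snli.filter(lambda ex: ex["label"] != -1).map(binarize)
mnli = mnli.map(binarize)

# Align label feature types before concatenating.
snli = snli.cast_column("label", Value("int64"))
mnli = mnli.cast_column("label", Value("int64"))
wnli = wnli.cast_column("label", Value("int64"))

nli_pretrain = concatenate_datasets([snli, mnli, wnli])
```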
{
"text": "CLINC150, introduced by Larson et al. 2019, is a multi-domain dataset for intent classification task. It has three dataset variants for in-domain and outof-scope (OOS). We use the small dataset, which contains 150 intents, 50 examples/intent and 100 OOS examples for training. The original labeling has hyphen between each token in the label, we replace hyphen with empty space to format the label as short phrase. To simulate 1-, 5-, and 10-shot experiment, we randomly draw examples from the small dataset. We run each experiment five times with different seeds to capture the variations in random samplings. We remove dev set to simulate real few-shot scenario and use the original testing set for final results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLINC150",
"sec_num": "3.1.2"
},
{
"text": "SGD (Rastogi et al., 2019) , or \"Schema-Guided Dialogue Dataset\", is a dataset about task-oriented dialogue. Its intent labels have detailed description, which is ideal for evaluating if detailed semantic labeling can help improve model performance. Since the original SGD dataset is not designed for fewshot intent classification, we went through a few data processing steps to customize the dataset for our use case. More details about the data processing steps can be found in Appendix B. We ended up with a subset of SGD dataset with 25 intents and 110 OOS utterances.",
"cite_spans": [
{
"start": 4,
"end": 26,
"text": "(Rastogi et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Schema-Guided Dialogue Dataset",
"sec_num": "3.1.3"
},
{
"text": "We use the nlpaug library (Ma, 2019) for tokenlevel data augmentation. In-domain utterances are augmented 4 times using random insertion, cBERTbased substitution, random swapping, and synonym replacement API. More details about the configurations can be found in Appendix A.",
"cite_spans": [
{
"start": 26,
"end": 36,
"text": "(Ma, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data augmentation",
"sec_num": "3.2"
},
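For illustration, the four operations could be instantiated with nlpaug roughly as follows; the paper's exact augmenter settings are in its Appendix A, so the configuration below is only an approximation.

```python
# Token-level augmentation with nlpaug: one augmented copy per augmenter
# gives 4x augmentation per in-domain utterance.
import nlpaug.augmenter.word as naw

augmenters = [
    naw.ContextualWordEmbsAug(model_path="bert-base-uncased",
                              action="insert"),      # contextual insertion
    naw.ContextualWordEmbsAug(model_path="bert-base-uncased",
                              action="substitute"),  # cBERT-style substitution
    naw.RandomWordAug(action="swap"),                # random swapping
    naw.SynonymAug(aug_src="wordnet"),               # synonym replacement
]

utterance = "how do i transfer money to my savings account"
augmented = [aug.augment(utterance) for aug in augmenters]
```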
{
"text": "We use the transformer library (4.5.1) (Wolf et al., 2020) by Huggingface for modeling. In NLI pre-training, we use the pre-trained roberta-base 1 model and follow the pre-training pipeline provided by Zhang et al. (2020) . For downstream few-shot training, we use AdamW optimizer and linear scheduler, learning rate as 5e-5, epochs as 100, and train batch size as 128. We learnt this hyper-parameter set to be effective from our previous experiments with our in-house dataset. To simulate a real few shot setting, where dev set is often unavailable for hyper-parameter tuning and to demonstrate that the proposed method can be easily generalized into different datasets, we disregard all the dev sets in the following experiments and simply use the same hyper-parameter set without any further hyper-parameter tuning.",
"cite_spans": [
{
"start": 39,
"end": 58,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 202,
"end": 221,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
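A sketch of this fine-tuning loop under the stated hyper-parameters; `train_loader` is a hypothetical DataLoader yielding tokenized utterance-label pairs with binary entailment targets, defined elsewhere.

```python
# Fine-tuning setup described above: AdamW, linear schedule, lr 5e-5,
# 100 epochs, batch size 128. Sketch only, not the authors' exact code.
import torch
from transformers import (AutoModelForSequenceClassification,
                          get_linear_schedule_with_warmup)

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)
epochs, lr = 100, 5e-5
num_steps = epochs * len(train_loader)  # train_loader: assumed DataLoader

optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=num_steps)

for _ in range(epochs):
    for batch in train_loader:
        loss = model(**batch).loss  # cross-entropy over entail/non-entail
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```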
{
"text": "Balanced sampling Since the NLI reformulation of text classification results in much more negative examples than positive ones, we sample equal number of positive and negative examples for every batch to keep the model been exposed the balanced training examples. Furthermore, to prevent overfitting, each epoch iterates through all the positive examples while the negative examples are randomly sampled to form batchs with positive examples. This data sampling strategy leads to better performance based upon previous empirical results on other in-house datasets. The previous DNNC (Zhang et al., 2020) work doesn't enforce balanced sampling, the positive and negative examples are mixed together and sampled randomly. We apply this sampling strategy to all the following experiments.",
"cite_spans": [
{
"start": 583,
"end": 603,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
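A minimal sketch of this balanced sampling strategy; it mirrors the description above rather than the authors' exact implementation.

```python
# Each epoch walks through all positives once; negatives are re-sampled
# randomly so every batch holds an equal number of each.
import random

def balanced_batches(positives, negatives, batch_size):
    half = batch_size // 2
    random.shuffle(positives)
    for i in range(0, len(positives), half):
        pos_chunk = positives[i:i + half]
        neg_chunk = random.sample(negatives,
                                  min(len(pos_chunk), len(negatives)))
        batch = pos_chunk + neg_chunk
        random.shuffle(batch)
        yield batch
```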
{
"text": "USLP outperforms other methods by a large margin in 1-shot setting. Results from Table 1 show that USLP-T-A outperforms traditional classification approach by 20, 10, and 15 points in terms of in-domain accuracy, OOS recall, and OOS precision. The DNNC approch requires more than 1 example per class to start with, so it is out of the comparison. Compared with the 100-shot BERT-large results reported in Larson et al. (2019) , the USLP-T-A achieves about 75% of the in-domain performance and has significantly higher OOS-recall score. Noticeably, within different USLP methods, the USLP-T has much better performance for indomain accuracy (~20 points) and OOS-precision (>30 points) than USLP-O, but the USLP-O outperforms USLP-T by around 30 points for OOS-recall. One potential reason is the extremely unbalanced data; there are only one example per in-domain class, and in total we have 150 in-domain examples, but 100 examples for OOS. The USLP-O treats OOS as an extra class, but the OOS class has overwhelmingly more examples than other classes do, which could make the model favor OOS prediction. USLP-T approach, however, uses threshold to control in-domain and OOS prediction. Our experiments use 0.01 as the threshold, which tend to favor in-domain predictions and alleviates the extreme unbalance issue. Data augmentation can help improve in-domain classification and OOS-precision, but its impacts on OOS-recall and OOS-precision are opposite for USLP-T-A and USLP-O-A.",
"cite_spans": [
{
"start": 405,
"end": 425,
"text": "Larson et al. (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Benchmark Results on CLINC150",
"sec_num": "4.1"
},
{
"text": "As we add more in-domain data into the experiments, in 5-shot and 10-shot experiments, we see the traditional classifier and DNNC in general perform better than USLP in terms of in-domain classification, but USLP has better and more balanced OOS-recall and OOS-precision scores. For example, in 10-shot experiments, CLS-T has the best in-domain accuracy, but it is unable to make OOS detection; DNNC has slightly better in-domain and OOS-precision result than USLP, but its OOSrecall is below that of USLP-T by around 30 points. Data augmentation seems to be more effective with USLP; it tends to hurt CLS and DNNC performance. Applying data augmentation on DNNC 10-shot training takes too much time (10+ hours on a single V-100 GPU), so we omit DNNC-A 10shot experiment. Although the data augmentation continues to boost USLP in-domain performance for 5-shot and 10-shot experiments, it hurts OOSrecall. We believe that this is because the data augmentation will cause the model to be trained for more iterations due to fixed number of epochs and we sample 1/4 of batch size from OOS examples for every batch during training. As a result, the model is likely to overfit to the 100 OOS training examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark Results on CLINC150",
"sec_num": "4.1"
},
{
"text": "Augmentation, and NLI pre-training",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Role of Labeling Technique, Data",
"sec_num": "4.2"
},
{
"text": "We use the SGD dataset to further study how relevant factors like labeling technique, data augmentation, and NLI pre-training on general corpus might impact USLP-T performance in different few-shot settings. Results are shown in Table 2 . Descriptive labeling can help improve USLP in-domain accuracy and OOS-precision. The SGD dataset provides intent labels as well as detailed descriptions for each label. To figure out the role of different labeling techniques in USLP-based intent classification, we set up three experiments with different labeling, 1) short labels, which are simply the original intent label. They are composed of either single words or short phrases and have limited semantic meaning; 2) long labels, which is the label description. Each description is usually a longer sentence than short labels and therefore can carry more semantic information; 3) symbolic labels. We convert labels into symbols like \"0\" and \"1\", which carry no semantic information. The results in Table 2 show that, long labels can effectively improve model performance. Especially at extreme low-resource scenario (1-shot), the long labels boost both in-domain accuracy and OOSprecision by 8+ points. Interestingly, long labels hurt model performance on OOS-recall. We hypothesize that long labels can boost model confidence on positive predictions resulting in producing higher prediction score favoring in-domain prediction. Table 1 : CLINC150 few-shot benchmark results. 1 \"CLS\": traditional classifier using [CLS] token embedding for classification; \"T\": threshold, for all the experiments we use 0.01 as the threshold; \"A\": data augmentation; 2 \"O\": treating OOS as an additional class; BL-100shot a and BL-100shot b are based on bert-large model, reported by Larson et al. (2019) . All other methods are based on roberta-base model. 3 \"In-Acc\", 4 \"OOS-R\", and 5 \"OOS-P\" stands for in-domain accuracy, OOS-recall, and OOS-precision respectively, numbers in the brackets represent standard deviation from multiple runs. DNNC requires >1 examples/intent for training and its 10-shot experiment with data augmentation takes >10 hours on a single V-100 GPU, so the corresponding experiments are skipped and results are shown as NA. Table 2 : USLP-T few-shot results on SGD dataset. 1 \"short\": original intent labels, which are either short phrases or single words; \"Aug\": data augmentation; 2 \"long\": detailed intent descriptions are used to replace short label to form utterance-label-pair; 3 \"Symb\": symbolic labels encoded as symbols like \"0\", \"1\", etc. They are converted from semantic labels; 4 \"Non-NLI\": the model is not fine-tuned on general NLI corpus.",
"cite_spans": [
{
"start": 1761,
"end": 1781,
"text": "Larson et al. (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 2",
"ref_id": null
},
{
"start": 992,
"end": 999,
"text": "Table 2",
"ref_id": null
},
{
"start": 1423,
"end": 1430,
"text": "Table 1",
"ref_id": null
},
{
"start": 2229,
"end": 2236,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Role of Labeling Technique, Data",
"sec_num": "4.2"
},
{
"text": "Data augmentation is not always helpful. Quite different from the CLINC150 results, data augmentation fails to improve performance. In fact, data augmentation play a negative role in most experiments here. We tend to think that the effect of data augmentation is task-dependent, it might work well on some datasets but fail on other datasets. When developing few-shot applications with USLP, developers should be careful about applying data augmentation if no dev set is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Role of Labeling Technique, Data",
"sec_num": "4.2"
},
{
"text": "NLI pre-training can boost performance in low-shot setting, but might have adverse effect when more training data is available. Our original hypothesis is that by exposing transformer model to NLI pre-training, the model can be more adapted into NLI related tasks and achieves better performance compared with the model without NLI pre-training. In 1-shot and 5-shot setting, we do observe that NLI pre-trained model can improve in-domain accuracy and OOS recall. But in 10-shot experiments, the NLI pre-trained model has weaker performance in terms of in-domain accuracy and OOS-precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Role of Labeling Technique, Data",
"sec_num": "4.2"
},
{
"text": "We have created a new few-shot intent classification method, USLP, based upon NLI-style prediction. The USLP approach significantly outperforms traditional classification method by a large margin on 1-shot CLINC150 dataset and achieves about 75% of the 100-shot traditional classifier on indomain classification with better OOS performance. This outstanding result indicates that the USLP approach can be an effective solution for developers who want to quickly build an intent classifier with extremely limited amount of training data. We have also found that detailed description can further boost USLP performance, but detailed labeling also requires labelers to have deeper understanding of each intent class and thus prolongs labeling process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://huggingface.co/roberta-base",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank all the reviewers for their helpful comments. We would also like to thank Shashank Harinath, Shilpa Bhagavath, and Mridul Gupta for their insightful discussions on data augmentation methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "We first extract utterances, intents, and detailed intent descriptions from the training set. The original labels formatted as tokens been concatenated together with the first letter capitalized, we introduce an empty space between each token. In the original dataset, the label set of the testing set does not fully overlap with the training set, so we keep the utterances with overlapped intents (25 intents) for in-domain and use the utterances with nonoverlapped intents for OOS training (11 intents). Since our goal of using the SGD dataset is to explore how different labeling techniques might impact final results, we want to use the same training set to exclude the confounding factor of random training data sampling, so we sample 1-, 5-, 10shot in-domain and 110 OOS (10 utterances/nonoverlapped intent) utterances from the processed training set for all the SGD experiments. The original testing set has 11,105 utterances, which is expensive to run through for evaluation. So we sample 50 utterances per overlapped intents for in-domain testing set and 50 utterances per nonoverlapped intents (9 non-overlapped intents) for OOS testing set, resulting in a testing set with 1,250 in-domain and 450 OOS utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B SGD Data Processing",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1075"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming",
"middle": [
"Wei"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL HLT 2019 -2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies -Proceedings of the Conference",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. NAACL HLT 2019 -2019 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Lan- guage Technologies -Proceedings of the Conference, 1(Mlm):4171-4186.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Induction networks for few-shot text classification",
"authors": [
{
"first": "Ruiying",
"middle": [],
"last": "Geng",
"suffix": ""
},
{
"first": "Binhua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yongbin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Jian",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3904--3913",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1403"
]
},
"num": null,
"urls": [],
"raw_text": "Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, and Jian Sun. 2019. Induction networks for few-shot text classification. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3904-3913, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network",
"authors": [
{
"first": "Yutai",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Yongkui",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Zhihan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1381--1393",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.128"
]
},
"num": null,
"urls": [],
"raw_text": "Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, and Ting Liu. 2020a. Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1381- 1393, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Wanxiang Che, and Ting Liu. 2020b. Few-shot Learning for Multilabel Intent Detection",
"authors": [
{
"first": "Yutai",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Yongkui",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Yushan",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yutai Hou, Yongkui Lai, Yushan Wu, Wanxiang Che, and Ting Liu. 2020b. Few-shot Learning for Multi- label Intent Detection.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Closer Look At Feature Space Data Augmentation For Few-Shot Intent Classification",
"authors": [
{
"first": "Varun",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Hadrien",
"middle": [],
"last": "Glaude",
"suffix": ""
},
{
"first": "Cyprien",
"middle": [],
"last": "De Lichy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Campbell",
"suffix": ""
}
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/d19-6101"
]
},
"num": null,
"urls": [],
"raw_text": "Varun Kumar, Hadrien Glaude, Cyprien de Lichy, and Wlliam Campbell. 2019. A Closer Look At Feature Space Data Augmentation For Few-Shot Intent Clas- sification. pages 1-10.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An evaluation dataset for intent classification and out-of-scope prediction",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Larson",
"suffix": ""
},
{
"first": "Anish",
"middle": [],
"last": "Mahendran",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"J"
],
"last": "Peper",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Parker",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Leach",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"A"
],
"last": "Laurenzano",
"suffix": ""
},
{
"first": "Lingjia",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Mars",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Levesque -The Winograd Schema Challenge",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hector",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levesque",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hector J Levesque. 2011. Levesque -The Winograd Schema Challenge. (1989).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Nlp augmentation",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Ma. 2019. Nlp augmentation.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset",
"authors": [
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Xiaoxue",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Sunkara",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Khaitan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Exploiting cloze-questions for few-shot text classification and natural language inference",
"authors": [
{
"first": "Timo",
"middle": [],
"last": "Schick",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "255--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timo Schick and Hinrich Sch\u00fctze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the As- sociation for Computational Linguistics: Main Vol- ume, pages 255-269, Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Prototypical networks for few-shot learning",
"authors": [
{
"first": "Jake",
"middle": [],
"last": "Snell",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Swersky",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4078--4088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. Ad- vances in Neural Information Processing Systems, 2017-Decem:4078-4088.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6382--6388",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1670"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Wei and Kai Zou. 2019. EDA: Easy data aug- mentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Jamaal",
"middle": [],
"last": "Hay",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3914--3923",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1404"
]
},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empiri- cal Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914-3923, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Discriminative nearest neighbor few-shot intent detection by transferring natural language inference",
"authors": [
{
"first": "Jianguo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Wenhao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5064--5082",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.411"
]
},
"num": null,
"urls": [],
"raw_text": "Jianguo Zhang, Kazuma Hashimoto, Wenhao Liu, Chien-Sheng Wu, Yao Wan, Philip Yu, Richard Socher, and Caiming Xiong. 2020. Discrimina- tive nearest neighbor few-shot intent detection by transferring natural language inference. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5064-5082, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "An illustration of training data in USLP and DNNC et al., 2015), MNLI",
"type_str": "figure"
}
}
}
}