{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:12:37.605408Z" }, "title": "Meta-Learning for Few-Shot Named Entity Recognition", "authors": [ { "first": "Hadrien", "middle": [], "last": "Glaude", "suffix": "", "affiliation": {}, "email": "hglaude@amazon.com" }, { "first": "William", "middle": [], "last": "Campbell", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Meta-learning has recently been proposed to learn models and algorithms that can generalize from a handful of examples. However, applications to structured prediction and textual tasks pose challenges for meta-learning algorithms. In this paper, we apply two metalearning algorithms, Prototypical Networks and Reptile, to few-shot Named Entity Recognition (NER), including a method for incorporating language model pre-training and Conditional Random Fields (CRF). We propose a task generation scheme for converting classical NER datasets into the few-shot setting, for both training and evaluation. Using three public datasets, we show these meta-learning algorithms outperform a reasonable fine-tuned BERT baseline. In addition, we propose a novel combination of Prototypical Networks and Reptile.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Meta-learning has recently been proposed to learn models and algorithms that can generalize from a handful of examples. However, applications to structured prediction and textual tasks pose challenges for meta-learning algorithms. In this paper, we apply two metalearning algorithms, Prototypical Networks and Reptile, to few-shot Named Entity Recognition (NER), including a method for incorporating language model pre-training and Conditional Random Fields (CRF). We propose a task generation scheme for converting classical NER datasets into the few-shot setting, for both training and evaluation. Using three public datasets, we show these meta-learning algorithms outperform a reasonable fine-tuned BERT baseline. In addition, we propose a novel combination of Prototypical Networks and Reptile.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The usage of Natural Language Understanding (NLU) technologies has spread widely in the last decade thanks to the recent jump in accuracy due to Deep Neural Networks (DNN). In addition, DNN libraries have made easier than ever the productization of NLU technologies. Applications have spread in quality and quantity with the broadened usage of chat bots by customer services, the development of virtual assistants (e.g. Amazon Alexa, Google Home, Apple's Siri or Microsoft Cortana) and the need of document parsing (e.g. medical reports, receipts, tweets, news articles) for data extraction. These applications often rely on NER to locate and classify named entities in text. NER aims at extracting named entities (e.g. \"artist\", \"city\" or \"restaurant type\") from a sequence of words. 
This problem is often approached (Mc-Callum and Li, 2003 ) as a sequence labeling task that assigns to each word one of the different entity types or the \"other\" label for words that do not belong to any named entity.", "cite_spans": [ { "start": 818, "end": 841, "text": "(Mc-Callum and Li, 2003", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The wide variety of applications has made the need for domain specific data the main bottleneck to train or fine-tune statistical models. This data is often acquired by running the application itself and collecting user inputs. Then, the annotation effort can be significantly reduced using active learning (Peshterliev et al., 2019) or semi-supervised learning (Cho et al., 2019b) . However, to reach this bootstrapping stage, statistical models have to perform reasonably before being exposed to users. Indeed, low performing models can turn away users or shift the input distribution as users lose engagement with features that do not work.", "cite_spans": [ { "start": 307, "end": 333, "text": "(Peshterliev et al., 2019)", "ref_id": "BIBREF38" }, { "start": 362, "end": 381, "text": "(Cho et al., 2019b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Transfer learning (Do and Gaspers, 2019) is an efficient way to cope with the data shortage by extracting task-agnostic high-level features. In particular, for NER, fine-tuning language models (Peters et al., 2018; Devlin et al., 2018; Conneau and Lample, 2019) allows achieving state-of-the-art performances (Wang et al., 2018a) . However, fine tuning to specific tasks still requires a reasonable amount of data, especially for a task like NER with large structured label spaces. In certain cases, for example to learn personalized models or for products with restricted budgets, only a handful \"reference\" examples are available. As we will show, in such scenarios where very few training examples are available, transfer learning has its limitations.", "cite_spans": [ { "start": 18, "end": 40, "text": "(Do and Gaspers, 2019)", "ref_id": "BIBREF10" }, { "start": 193, "end": 214, "text": "(Peters et al., 2018;", "ref_id": "BIBREF39" }, { "start": 215, "end": 235, "text": "Devlin et al., 2018;", "ref_id": "BIBREF9" }, { "start": 236, "end": 261, "text": "Conneau and Lample, 2019)", "ref_id": "BIBREF7" }, { "start": 309, "end": 329, "text": "(Wang et al., 2018a)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Few-Shot Learning (FSL) is a rapidly growing field of research, reviewed in Section 2, that aims at building models that can generalize from very few examples as detailed in (Miller et al., 2000; Koch et al., 2015) . This area of research is motivated by the ability of humans and animals to learn object categories from few examples, and at a rapid pace. In particular, inductive bias (Mitchell, 1980) has been identified for a long time as a key component to fast generalization to new inputs. 
Previous work has suggested that meta-learning (Schmidhuber, 1987) can help quickly acquire knowledge from few examples by learning an inductive bias from a distribution of similar tasks but with different categories.", "cite_spans": [ { "start": 174, "end": 195, "text": "(Miller et al., 2000;", "ref_id": "BIBREF34" }, { "start": 196, "end": 214, "text": "Koch et al., 2015)", "ref_id": "BIBREF27" }, { "start": 386, "end": 402, "text": "(Mitchell, 1980)", "ref_id": "BIBREF35" }, { "start": 543, "end": 562, "text": "(Schmidhuber, 1987)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we leverage recent progress made in transfer learning and meta-learning to address few-shot NER. First, we provide a novel definition of few-shot NER in Section 3.1 where few-shot NER aims at building models to solve NER tasks given only a handful of labeled utterances per entity type. Then, in Section 3.2, we define a transfer learning baseline consisting in fine-tuning a pretrained language model (BERT Devlin et al., 2018) using only few examples. In addition, we introduce an extension of Prototypical Networks (Snell et al., 2017) , a metric-based model, capable of handling structured prediction. In particular, we detail how it can be combined with Conditional Random Fields (CRF) (Lafferty et al., 2001) . In Section 3.3, we explain how such models can be trained using meta-learning. In addition, we introduce the application of an optimization-based algorithm to NER, Reptile (Nichol et al., 2018) , capable of metalearning a better initialization model. We also propose a novel combination of Prototypical Networks and Reptile that brings the best of both worlds, performance and the ability to handle a different number of classes between training and testing. Finally, in Section 3.4, we show how to generate diverse and realistic FSL tasks, corresponding to the bootstrapping phase of NER systems, from classical NER datasets either for meta-training or metatesting.", "cite_spans": [ { "start": 417, "end": 443, "text": "(BERT Devlin et al., 2018)", "ref_id": null }, { "start": 533, "end": 553, "text": "(Snell et al., 2017)", "ref_id": "BIBREF46" }, { "start": 706, "end": 729, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF30" }, { "start": 904, "end": 925, "text": "(Nichol et al., 2018)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Section 4, we conduct an extensive evaluation on three public datasets: SNIPS (Coucke et al., 2018) , Task Oriented Parsing (TOP Gupta et al., 2018) and Google Schema-Guided Dialogue State Tracking (DSTC8 Rastogi et al., 2019) where we compare our three meta-learning approaches to the transfer learning baseline. Source code and datasets will be made available online.", "cite_spans": [ { "start": 81, "end": 102, "text": "(Coucke et al., 2018)", "ref_id": "BIBREF8" }, { "start": 132, "end": 151, "text": "Gupta et al., 2018)", "ref_id": "BIBREF17" }, { "start": 208, "end": 229, "text": "Rastogi et al., 2019)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Few-shot learning has been addressed using metric-learning, data augmentation and metalearning. Metric-learning relies on learning how to compare pairs (Koch et al., 2015) or triplets (Ye and Guo, 2018) of examples and use that distance function to classify new examples. 
Data augmentation through deformation has been known to be effective in image recognition tasks. More advanced approaches rely on generative models (Gupta, 2019; Hou et al., 2018; Zhao et al., 2019; Guu et al., 2018; Yoo et al., 2018) , paraphrasing (Cho et al., 2019a) or machine translation (Johnson et al., 2019) . All the methods above rely somewhat on transfer learning with the hope that representations learned in one domain can be applied to another one.", "cite_spans": [ { "start": 152, "end": 171, "text": "(Koch et al., 2015)", "ref_id": "BIBREF27" }, { "start": 184, "end": 202, "text": "(Ye and Guo, 2018)", "ref_id": "BIBREF59" }, { "start": 420, "end": 433, "text": "(Gupta, 2019;", "ref_id": "BIBREF16" }, { "start": 434, "end": 451, "text": "Hou et al., 2018;", "ref_id": "BIBREF23" }, { "start": 452, "end": 470, "text": "Zhao et al., 2019;", "ref_id": "BIBREF64" }, { "start": 471, "end": 488, "text": "Guu et al., 2018;", "ref_id": "BIBREF18" }, { "start": 489, "end": 506, "text": "Yoo et al., 2018)", "ref_id": "BIBREF61" }, { "start": 522, "end": 541, "text": "(Cho et al., 2019a)", "ref_id": "BIBREF4" }, { "start": 565, "end": 587, "text": "(Johnson et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Meta-learning takes a different approach by trying to learn an inductive bias on a distribution of similar tasks that can be utilized to build models from very few examples. There are four common approaches. Model-based meta-learning relies on a meta-model to update or predict the weights of a task specific model (Munkhdalai and Yu, 2017) . Generation-based meta-learning Schwartz et al., 2018 ) produces generative models able to quickly learn how to generate task specific examples, often in the feature space . The other two approaches are explained in detail below.", "cite_spans": [ { "start": 315, "end": 340, "text": "(Munkhdalai and Yu, 2017)", "ref_id": "BIBREF36" }, { "start": 374, "end": 395, "text": "Schwartz et al., 2018", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Metric-based meta-learning is similar to nearest neighbors algorithms. In particular, several metricbased meta-learning methods (Vinyals et al., 2016; Snell et al., 2017; Rippel et al., 2015) have been proposed for few-shot classification where an embedding space or a metric is meta-learned and used at test time to embed the few support examples of new categories and the queries. Prediction is performed by comparing embedded queries and support examples. In many cases, the loss function is based on a distance between the supports and the queries. More advanced losses have been proposed in (Triantafillou et al., 2017; Wang et al., 2018b; Sung et al., 2018) for example based on triplet, ranking and max-margin losses. One of the issues with approaches listed above is that the distance is the same for all categories. Thus, Fort (2017); Hilliard et al. 
(2018) have explored scaling the distance for new categories.", "cite_spans": [ { "start": 128, "end": 150, "text": "(Vinyals et al., 2016;", "ref_id": "BIBREF54" }, { "start": 151, "end": 170, "text": "Snell et al., 2017;", "ref_id": "BIBREF46" }, { "start": 171, "end": 191, "text": "Rippel et al., 2015)", "ref_id": "BIBREF43" }, { "start": 596, "end": 624, "text": "(Triantafillou et al., 2017;", "ref_id": "BIBREF52" }, { "start": 625, "end": 644, "text": "Wang et al., 2018b;", "ref_id": "BIBREF56" }, { "start": 645, "end": 663, "text": "Sung et al., 2018)", "ref_id": "BIBREF49" }, { "start": 844, "end": 866, "text": "Hilliard et al. (2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Optimization-based meta-learning explicitly meta-learns an update rule or weight initialization that enables fast learning during meta-testing. In Ravi and Larochelle (2017) , they use an LSTM meta-learner trained to be an optimization algorithm. However, this approach incurs a high complexity. In Finn et al. (2017) , the authors explored with success using ordinary gradient descent in the learner and meta-learning the initialization weights. However, this algorithm named MAML, requires to back propagate through gradient updates and so rely on second order derivatives which are expensive to compute. They also proposed an algorithm, FOMAML, relying only on first order deriva-tives. This idea has been extended by Nichol et al. (2018) to propose an algorithm, Reptile, that does not need a training-test split for each task as explained in Section 3.3. Note that, Triantafillou et al. (2019) gives an overview of many meta-learning algorithms and propose a set of benchmarks to evaluate them. Finally, instead of just learning a model initialization, Li et al. (2017) propose to learn a full-stack Stochastic Gradient Descent (SGD), including update direction, and learning rate.", "cite_spans": [ { "start": 147, "end": 173, "text": "Ravi and Larochelle (2017)", "ref_id": "BIBREF42" }, { "start": 299, "end": 317, "text": "Finn et al. (2017)", "ref_id": "BIBREF12" }, { "start": 721, "end": 741, "text": "Nichol et al. (2018)", "ref_id": "BIBREF37" }, { "start": 871, "end": 898, "text": "Triantafillou et al. (2019)", "ref_id": "BIBREF53" }, { "start": 1058, "end": 1074, "text": "Li et al. (2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Few-Shot Learning on textual data has been explored recently, mostly for text classification tasks. propose to meta-learn a set of distances and learn a task-specific weighted combination of those. Jiang et al. (2018) build on top of MAML and attention mechanisms to propose an algorithm for text classification. Geng et al. (2019) focuses on sentiment and intent classification. propose to use metricbased meta-learning to learn task-specific metrics that can handle imbalanced datasets. Recently, Bansal et al. (2019) proposed a new optimizationbased meta-learning algorithm, LEOPARD, that outperforms strong baselines on several text classification problems (entity typing, natural language inference, sentiment analysis). Few-shot relation classification has also attracted some attention in the past two years, thanks to Han et al. (2018) who proposed a new dataset and using Prototypical Networks. Several works built on top of this to combine Prototypical Networks with attention models Ye and Ling, 2019) . NER has been addressed in several works. 
In (Fritzler et al., 2019; Yang and Katiyar, 2020) the task of interest consists of recognizing one class of named entities, for tag set extension or domain transfer. In our work, we extend the N-way K-shot setting to structured prediction. (Hou et al., 2020) propose a CRF with coarse-grained transitions between abstract classes. In (Krone et al., 2020 ) the authors propose a task sampling algorithm based on intents which can result in leakage between metatraining and meta-testing sets. In (Hofer et al., 2018) the authors don't use pre-trained language models. As we will show subsequently our work differs significantly from those. First, our task sampling method, that can generate a very large amount of tasks, is key to learn efficiently an inductive bias. Second, we utilize pre-trained language models. Third, using a fine-grained CRF, amenable to meta-learning, our model can learn sequential de-pendencies between labels. Fourth, we fine-tune our meta-learned Prototypical Network per task and even utilize optimization-based meta-learning to improve the fine-tuning. Those contributions are central in achieving the best performance on few-shot NER as shown in Section 4.", "cite_spans": [ { "start": 313, "end": 331, "text": "Geng et al. (2019)", "ref_id": "BIBREF15" }, { "start": 499, "end": 519, "text": "Bansal et al. (2019)", "ref_id": "BIBREF1" }, { "start": 826, "end": 843, "text": "Han et al. (2018)", "ref_id": "BIBREF19" }, { "start": 994, "end": 1012, "text": "Ye and Ling, 2019)", "ref_id": "BIBREF60" }, { "start": 1059, "end": 1082, "text": "(Fritzler et al., 2019;", "ref_id": "BIBREF14" }, { "start": 1083, "end": 1106, "text": "Yang and Katiyar, 2020)", "ref_id": "BIBREF58" }, { "start": 1297, "end": 1315, "text": "(Hou et al., 2020)", "ref_id": "BIBREF22" }, { "start": 1391, "end": 1410, "text": "(Krone et al., 2020", "ref_id": "BIBREF28" }, { "start": 1551, "end": 1571, "text": "(Hofer et al., 2018)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Few-Shot Named Entity Recognition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We define the few-shot NER problem by describing what is a task. A task is defined by a set of N target entity types (examples of entity types could be \"song\", \"city\" or \"date\"), a small training set of N \u00d7 K utterances (with their labels) called support set and another disjoint set of labeled utterances called query set. Similarly to Triantafillou et al. 2019, we refer to this setting as N -way-K-shot with the difference that we have a total of N \u00d7 K support utterances rather than K examples for each of the N entity types, which is not feasible as one utterance might contain several entities. Thus, the number of mentions per entity type can be imbalanced. In addition, the support set follows the same distribution as the query set. Evaluation is performed by sampling a set of tasks from the metatesting set. For each task, an NER model is learned from the support set. This model is evaluated on the query set. The performance is finally averaged across tasks. During meta-training, an additional set of meta-training tasks is available with disjoint entity types from the meta-testing set. Queries are used to train the meta-model. 
At meta-testing, this meta-model is tailored to the task using the support examples as mentioned above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "3.1" }, { "text": "This paper builds on top of Prototypical Networks, introduced by Snell et al. (2017) . Their model embeds support and query examples into a vector space. Then, one prototype per category is computed by taking the mean of its supports. Finally, queries are compared to prototypes using the euclidean distance. The distances are converted to probabilities using a Gibbs distribution. The model is meta-trained to predict the query labels using only few examples. This Section details the architecture of Prototypical Networks for sequence labeling. The next Section explains how the embedding function is meta-learned. Without metalearning the architecture of Prototypical Networks does not bring any advantage over classical ones. For a sequence labeling task, like NER, the difference is that to each word is assigned one label. Let S = {(x 1 , y 1 ), . . . , (x n , y n )} be a small support set of n labeled sequences where", "cite_spans": [ { "start": 65, "end": 84, "text": "Snell et al. (2017)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "x i = (x i 1 , . . . , x i L )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "is an utterance of length L and y i = (y i 1 , . . . , y i L ) a sequence of entity labels. For each entity type k, we compute a prototype c k by embedding all words tagged as k using an embedding function f \u03b8 where \u03b8 represents the metalearned parameters. The fundamental difference with the common implementation of Prototypical Networks is that the embedding function f \u03b8 utilizes the context of the current word to compute its representation in a vector space. Although, we should formally note f \u03b8 (x i j ; x i ) the representation of x i j in the embedding space, we will just write f \u03b8 (x i j ) in the sequel to not overload equations. Thus, prototypes are defined by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c k = 1 |S k | x\u2208S k f \u03b8 (x),", "eq_num": "(1)" } ], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "S k = {x i j | y i j = k, (x i , y i ) \u2208 S}, i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "e. the set of all tokens with a particular label k. Note that we compute one prototype per entity type and also one for \"other\". As mentioned in Section 5, we leave better handling of \"other\" for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "In this paper, we use BERT to generate embeddings for each word. More specifically, we used the pre-trained English BERT Base uncased model from (Wolf et al., 2019 ). This BERT model has 12 layers, 768 hidden states, and 12 heads. Then, we followed recommendation from Souza et al. 
(2019) to fine-tune BERT. Since BERT uses Word-Piece sub-word units and NER labels are aligned to words, we elected to pick the last sub-word representation of a word as the final word representation. Then, we sum the outputs of the last 4 layers to get a word-level representation and then add dropout and a linear layer. 1 For our baseline model, the linear layer output size is the number of entity types plus \"other\". When using Prototypical Networks, the linear layer output size is 64. Then, distances to prototypes are computed for every word, giving the same output size than for the baseline model. Finally, in our experiments, we tried two different decoders. For the first one, we simply feed the distances into a SoftMax layer and use the negative log-likelihood (NLL) summed over all positions for the loss function, as follow,", "cite_spans": [ { "start": 145, "end": 163, "text": "(Wolf et al., 2019", "ref_id": "BIBREF57" }, { "start": 269, "end": 288, "text": "Souza et al. (2019)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y t = k | x) = e \u2212 f \u03b8 (xt)\u2212c k 2 k e \u2212 f \u03b8 (xt)\u2212c k 2 , (2) p(y | x) = t p(y t | x, {c k }).", "eq_num": "(3)" } ], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "For our second decoder, we use a CRF, as Lample et al. 2016have shown they are effective for NER when combined with neural networks. Using a CRF instead of making independent tagging decisions allows to model the dependencies between labels by considering a transition score between labels in addition to the standard emission scores to obtain a probability distribution,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y | x) = exp t U (x t , y t ) + T (y t , y t+1 ) Z(x) ,", "eq_num": "(4)" } ], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "Z(x) = y exp \uf8eb \uf8ed t U (x t , y t ) + T (y t , y t+1 ) \uf8f6 \uf8f8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "(5) where, T is a transition matrix, U the emission network and Z the partition function -a normalization factor used so that the probabilities sum to 1, equal to the sum of the scores over all label sequences. The loss function is the standard NLL. The emission network is the same as the SoftMax decoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "For our baseline, the transition matrix is just a parameter of our network. However, estimating transitions between labels in the FSL setting is very prone to over-fitting as many transition pairs are likely to be absent from the limited training data. This intuition will be confirmed empirically in Section 4. Hence, we make use of prototypes and transfer learning to estimate the transition matrix. 
More specifically,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "U (x t , y t ) = \u2212 f \u03b8 (x t ) \u2212 c yt 2 and (6) T (y t , y t+1 ) = g \u03c8 (c yt , c y t+1 ),", "eq_num": "(7)" } ], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "where the weights \u03c8 of our neural network g are learned across tasks during meta-training and eventually fine-tuned during meta-testing. In our experiments, g is implemented as a feed-forward neural network on stacked prototype representation with one hidden layer of size 64 and ELU activation function. Looking only at the learning of the transition matrix during meta-training, this setting is equivalent to a standard training procedure that uses classes, represented by prototypes, as training examples and tries to predict transitions between them. Hence, we rely on the generalization capability of our transition DNN during meta-testing to handle new classes. We will see in Section 4, that using our Prototypical CRF decoder is very beneficial compared to a standard CRF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prototypical Networks for NER", "sec_num": "3.2" }, { "text": "In this Section, we introduce meta-learning and how it can be used to meta-learn initialization weights for the baseline architecture using Reptile, the embedding function in Prototypical Networks or both. In most cases, meta-learning algorithms, i.e. algorithms that learn how to learn, are typically comprised of two processes. The inner process is a traditional learning process capable of learning quickly using only a small number of task-specific examples. The outer loop, or meta-learning loop, slowly learns the inductive bias across a set of tasks. Thus, the objective of the outer loop is to improve generalization during the inner learning process. This is often achieved thanks to a meta-model. For Prototypical Networks the meta-model is the embedding function that defines the prototypes and the distance. For Reptile, the meta-model are the initialization weights that will be fine-tuned during meta-testing. During meta-testing, task specific models are derived from the meta-model and the support examples, for example by building prototypes or by gradient descent. Then, all queries are used to evaluate the task-specific model. Meta-training runs in episodes. For each episode, a task or a batch of tasks is sampled. In our setting, we are only considering one task at a time. Then, from the current meta-model, a task specific model is built using the inner process and the support examples. The loss is computed using the queries and back-propagated through the inner process to update the meta-model. Good performance is often achieved when the inner process at meta-training and meta-testing are alike.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning", "sec_num": "3.3" }, { "text": "In the case of Prototypical Networks for sequence labeling, the meta-learner learns a representation amenable to generalization where queries can be compared to prototypes built from few sup-port examples. 
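As an illustration, the prototype construction of eq. (1) and the prototype-based CRF scores of eqs. (6) and (7) can be sketched as follows. This is a simplified PyTorch sketch, not the released implementation: it assumes the contextual embeddings f_theta(x) have already been produced by the BERT-based encoder, and the helper names (build_prototypes, g_psi, emission_score, transition_score) are illustrative only.

```python
import torch
import torch.nn as nn

embed_dim = 64  # output size of the linear layer on top of BERT
# Transition network g_psi: one hidden layer of size 64 with ELU activation,
# applied to the two stacked prototypes (illustrative re-implementation).
g_psi = nn.Sequential(nn.Linear(2 * embed_dim, 64), nn.ELU(), nn.Linear(64, 1))

def build_prototypes(embeddings, labels):
    # Eq. (1): one prototype per entity type (including "other"), the mean of
    # the embeddings of all support tokens carrying that label.
    return {k: torch.stack([e for e, y in zip(embeddings, labels) if y == k]).mean(dim=0)
            for k in set(labels)}

def emission_score(query_embedding, prototype):
    # Eq. (6): negative squared Euclidean distance to the prototype.
    return -torch.sum((query_embedding - prototype) ** 2)

def transition_score(proto_from, proto_to):
    # Eq. (7): a feed-forward network over the stacked prototypes, so that
    # transition scores can generalize to entity types unseen at meta-training.
    return g_psi(torch.cat([proto_from, proto_to])).squeeze()
```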
Hence, the inner process just builds one prototype per entity type k \u2208 E, where E is the set of entity types for this task (including \"other\") as described in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning", "sec_num": "3.3" }, { "text": "Algorithm 1 ProtoNet INITIALIZE \u03b8 while has not converged do E, S, Q \u2190 SAMPLETASK(T , K, N ) for all entity type k in E do c k \u2190 1 |S k | x\u2208S k f \u03b8 (x) as in eq. (1) end for L \u2190 NLL(p, BATCH(Q)) where p is defined in eq. (3) or eq. (4) \u03b8 \u2190 UPDATE(\u03b8, \u2202L \u2202\u03b8 ) end while", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning", "sec_num": "3.3" }, { "text": "During meta-testing, we can simply compute the prototypes from the support examples as in eq. 1, in that case training is done without any backpropagation. However, in our experiments, see Section 4, we found that fine-tuning the metamodel using the task-specific supports was improving the performance. To fine-tune the model we further split the supports into two subsets using 80% to build the prototypes and the remaining to compute the loss and backpropagating it to update the model. By introducing this additional fine-tuning step at test time, the inner process now differs between meta-training and metatesting. Similarly, for our baseline, we fine-tune our BERT-based model using the support utterances at meta-test time. In both cases, to better align meta-training and meta-testing, we turned to optimization-based meta-learning. Optimizationbased meta-learning encompasses methods where the inner process consists in fine-tuning the metamodel. Back-propagating through the inner optimization loop allows computing a meta-gradient to update the meta-model as done in MAML. However doing so requires to compute second order derivatives. Instead, Reptile builds a first order approximation as shown in Algorithm 2, where T is the number of steps used to compute the first order approximation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning", "sec_num": "3.3" }, { "text": "In addition, for MAML, the inner-loop optimization uses support examples, whereas the loss is computed using the queries. This way MAML optimizes for generalization. However, Reptile does not require a query-support split to compute the meta-gradient, which makes it a better candidate to be combined with Prototypical Networks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning", "sec_num": "3.3" }, { "text": "INITIALIZE \u03b8 0 while has not converged do E, S, Q \u2190 SAMPLETASK(T , K, N ) for t \u2208 1..T do L \u2190 NLL(p, BATCH(S \u222a Q)) \u03b8 t \u2190 UPDATE(\u03b8 t\u22121 , \u2202L \u2202\u03b8 t\u22121 ) end for \u03b8 0 \u2190 UPDATE(\u03b8 0 , \u03b8 T \u2212 \u03b8 0 ) end while", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning", "sec_num": "3.3" }, { "text": "To combine MAML and Prototypical Networks, Triantafillou et al. (2019) use the same support examples to compute prototypes and to compute the loss for backpropagation in the MAML inner loop. However, having two disjoints support sets is preferable so as not to compare examples to prototypes computed from the same examples. With Reptile, this issue is alleviated altogether as shown in Algorithm 3.", "cite_spans": [ { "start": 43, "end": 70, "text": "Triantafillou et al. 
(2019)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning", "sec_num": "3.3" }, { "text": "Algorithm 3 Proto-Reptile INITIALIZE \u03b8 0 while has not converged do E, S, Q \u2190 SAMPLETASK(T , K, N ) for all entity type k in E do c k \u2190 1 |S k | x\u2208S k f \u03b8 (x) as in eq. (1) end for for t \u2208 1..T do L \u2190 NLL(p, BATCH(Q)) \u03b8 t \u2190 UPDATE(\u03b8 t\u22121 , \u2202L \u2202\u03b8 t\u22121 ) end for \u03b8 0 \u2190 UPDATE(\u03b8 0 , \u03b8 T \u2212 \u03b8 0 ) end while", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning", "sec_num": "3.3" }, { "text": "In Algorithms 1 to 3, NLL stands for the negative log-likelihood function, BATCH for a function that samples a batch. T is the training set, K the number of shots, N the number of ways, S the support set and Q the query set, T is the number of steps in Reptile. In addition, UPDATE can be any optimizer, such that SGD or Adam (Kingma and Ba, 2015). In our experiments, we use Adam in Algorithm 1, and in the inner loop of Algorithm 3. For the outer loop of Algorithm 3, we use the classical SGD update rule without any momentum. Note that, each loop has its own learning rate. In addition, we used different learning rates for the BERT encoder and the rest of the network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta-Learning", "sec_num": "3.3" }, { "text": "To generate training and testing data from classical NER datasets, we first randomly partition entity types and utterances to either the train, the validation or the test split. Utterances are assigned based on the majority split of its entity types, counted per word. In other words, for a given utterance we count the number of words for entity types that are in each split and utterances are assigned to the partition that was the most represented in that utterance. In case of tie, priority is given to the test split, then the valid split and finally to the train split. Any entity contained in an utterance that is not in the corresponding partition is replaced with \"other\" to ensure, e.g., no test entities are seen during training. Finally, utterances with no entities are dropped. This task sampling procedure can both simulate a realistic few-shot NER testing setting and generate a large number of training tasks. During metatraining, having a diverse enough distribution of training tasks is crucial to learn an inductive bias effectively, similarly to having many examples helps generalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Tasks for Training or Testing", "sec_num": "3.4" }, { "text": "Experiments were conducted on the SNIPS (Coucke et al., 2018) , Task Oriented Parsing (TOP Gupta et al., 2018) and Google Schema-Guided Dialogue State Tracking (DSTC8 Rastogi et al., 2019) datasets. For evaluation, we sampled 50 tasks from the meta-test set to average the Micro F1 across tasks. We use the Micro F1 metric introduced in (Tjong Kim Sang, 2002) that does not give any credit to partial matches. For SNIPS, we combine B and I labels from the BIO (Ramshaw and Marcus, 1995) encoding into a single label. For DSTC8, we used utterances from both the system and user, we discarded utterances containing more than 1 frame. For the TOP dataset, which contains hierarchical labels for slot labels and intents, we used the finest-grained entity types (the leaf nodes) as labels and discarded intents. 
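As an illustration of the task-based procedure of Section 3.4, the utterance-to-split assignment can be sketched as follows. This is a hypothetical sketch, not the exact implementation: it assumes entity types have already been partitioned into train/valid/test, that labels are given per word with "other" for non-entity words, and the helper names are illustrative only.

```python
from collections import Counter

def assign_split(utterance_labels, split_of_entity):
    # Count, per split, the number of words carrying an entity type of that split.
    counts = Counter()
    for label in utterance_labels:
        if label != "other":
            counts[split_of_entity[label]] += 1
    if not counts:
        return None  # utterances with no entities are dropped
    best = max(counts.values())
    # In case of tie, priority goes to test, then valid, then train.
    for split in ("test", "valid", "train"):
        if counts.get(split, 0) == best:
            return split

def mask_foreign_entities(utterance_labels, split, split_of_entity):
    # Entities not belonging to the utterance's split are relabelled as "other",
    # so that, e.g., no test entities are seen during training.
    return [label if label != "other" and split_of_entity[label] == split else "other"
            for label in utterance_labels]
```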
We did not adhere to any pre-defined train, valid and test partitions, but followed our own task-based procedure defined in Section 3.4. Additional details about data preparation and datasets statistics are given in the appendix.", "cite_spans": [ { "start": 40, "end": 61, "text": "(Coucke et al., 2018)", "ref_id": "BIBREF8" }, { "start": 86, "end": 110, "text": "(TOP Gupta et al., 2018)", "ref_id": null }, { "start": 167, "end": 188, "text": "Rastogi et al., 2019)", "ref_id": "BIBREF41" }, { "start": 348, "end": 359, "text": "Sang, 2002)", "ref_id": "BIBREF51" }, { "start": 460, "end": 486, "text": "(Ramshaw and Marcus, 1995)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Pre-Processing", "sec_num": "4.1" }, { "text": "During meta-testing, only a few support examples are available to fine-tune the task specific model derived from the meta-model. As such, it is impractical to set aside some as a validation set for early stopping. However, early stopping is really important in the few-shot setting as the model can easily overfit. Hence, we find the best number of fine-tuning epochs on the validation split and then use it during meta-testing. For the baseline, this is the only purpose of meta-training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hyper-Parameter Tuning", "sec_num": "4.2" }, { "text": "For each algorithm (Baseline, ProtoNet, Reptile, Proto-Reptile) and decoder (SoftMax or CRF), we conducted an extensive hyper-parameter optimization (HPO) procedure using the built-in Bayesian optimization of AWS SageMaker (Amazon Web Services, 2017) on the SNIPS metavalidation dataset. The search space, the best hyperparameters, the best performance and the training times are given in the appendix. We used the same hyper-parameters in all our experiments. However, after HPO, we retrained all our models with a number of meta updates and updates manually tuned per algorithm on each meta-validation dataset to avoid (meta-)stopping too early. All results on the meta-validation set and training times can be found in the appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hyper-Parameter Tuning", "sec_num": "4.2" }, { "text": "We conducted four types of experiments. First, we compared all approaches on the three datasets using N = 4 and K = 10 in Table 1 . Fine-tuning produces the largest gains, especially on SNIPS and TOP (less on DSTC8). Indeed, starting with the baseline, fine-tuning a pre-trained BERT model with aggressive dropout (0.9) is quite effective. Chen et al. (2019) ; Tian et al. (2020) also observed that transfer learning baselines are often competitive and neglected in FSL works. We also evaluated Prototypical Networks without fine-tuning at metatest time using the supports. We refer to those algorithms by ProtoNet* and Proto-Reptile*. Compared to previous work on image recognition (Chen et al., 2019) , fine-tuning the Prototypical Network seems to be extremely beneficial for textual application that builds on top of pre-trained language models instead of solely building the prototypes. Hence, combining optimization-based and metricbased meta-learning sounds a natural idea.", "cite_spans": [ { "start": 340, "end": 358, "text": "Chen et al. (2019)", "ref_id": "BIBREF2" }, { "start": 361, "end": 379, "text": "Tian et al. 
(2020)", "ref_id": "BIBREF50" }, { "start": 683, "end": 702, "text": "(Chen et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "Comparing ProtoNet and Reptile, we can see that the Prototypical Network architecture helps generalization in the low data regime thanks to being instance-based. In addition, gains are even larger when combined with a CRF, with or without fine-tuning, in particular on DSTC8. Indeed, the CRF can only be slightly beneficial compared to using a simple SoftMax decoder for the Baseline and for Reptile. On the other hand, using our Prototypical CRF achieves a significant jump in Micro F1, especially on DSTC8, demonstrating that the transition network can generalize to new classes unseen at meta-training. We believe that, Reptile's meta-learning approach is inefficient because the initialization weights of the transition matrix do not have enough capacity to encode an inductive bias. Maybe other optimization-based meta-learning methods relying on external neural networks with larger capacity, e.g. a network that predicts the update direction as proposed by Li et al. (2017) , could be more efficient than relying solely on the initialization weights to learn the inductive bias.", "cite_spans": [ { "start": 964, "end": 980, "text": "Li et al. (2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "Comparing Reptile to Baseline and Proto-Reptile to ProtoNet, we see that optimizationbased meta-learning can help significantly with fine-tuning. Although the gap is less impressive between Proto-Reptile to ProtoNet, Proto-Reptile obtains the best result in most cases. Comparing results between datasets, DSTC8 high diversity seems to be a real game changer for meta-learning. Indeed, all meta-learning approaches achieve twice or more the Baseline Micro F1. We argue that, the richer the task distribution, the better the learned inductive bias.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "In our second experiment, we evaluated crossdomain transfer learning of the inductive bias by meta-training on TOP or DTSC8 and meta-testing on SNIPS. Note that early stopping was calibrated on the source meta-validation set, which gives an unfair advantage to the baseline to avoid overfitting. On inductive bias transfer, Proto and Proto-Reptile outperform the baseline by a small but statistically significant margin. As already observed, DTCS8 diversity is better to learn an inductive bias that can transfer across domain. Showing that task diversity is key to meta-learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "In the third experiment, we varied N and K on the DSTC8 dataset to observe the performance gap between Proto-Reptile and the baseline. Results are plotted in the first row of Figure 1 . As expected, Micro F1 increases when there are fewer entity types to discriminate (smaller N ) or more examples Figure 1: Micro F1 averaged over 50 tasks on N -way-K-shot DTSC8 for different value of (K, N ). Error bars represent Gaussian 95% confidence intervals. In the first row of plots, (K, N ) match between training and testing. 
In the second row, models trained on different N -way-K-shot settings are tested on 4-way-10-shot.", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 183, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "for each entity type (larger K). Indeed, either the problem becomes easier -fewer entity types to discriminate -or we get more data per entity type. Nevertheless, the Micro F1 increases faster with K for the baseline. We expect that, in the high data regime (very large K), the baseline would catch up to our approach. However, comparing those approaches in the high data regime would not be very relevant and the meta-learning would not scale.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "Finally, we looked at meta-training on N -way-K-shot datasets but meta-testing on the 4-way-10shot dataset in the second row of Figure 1 . Training with more shots or more ways does not seem to improve or decrease performances significantly for Proto-Reptile. This demonstrate our approach is robust to variations in the meta-testing scheme, compared to what is usually observed in the fewshot literature. This is probably because we sample imbalanced support sets. All results in Figure 1 are reported numerically in the appendix.", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 136, "text": "Figure 1", "ref_id": null }, { "start": 481, "end": 489, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "In this paper, we have proposed a new definition of few-shot learning for NER, not relying a coarsegrain approach, like in (Fritzler et al., 2019) , based on the intent to generate tasks. We have shown that, combining fine-tuning language models, CRF, diverse task generation, optimization-based and metric-based meta-learning, can significantly and consistently outperform transfer learning on three datasets. Also, our combination of Prototypical Network and Reptile is quite robust to mismatches in the number of shots or ways between metatraining and meta-testing. Thus, our approaches are effective to bootstrap NLU systems.", "cite_spans": [ { "start": 123, "end": 146, "text": "(Fritzler et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "For future works, one specificity of few-shot NER has not been properly addressed yet. Although different in every tasks, the definition of the background class (\"other\") is partially shared between tasks. This assumption could be better leveraged in our approaches to transfer some of that knowledge across tasks instead of treating the background class as a different entity type in every tasks. Another interesting direction to explore is few-shot integration, when we have to build a model that performs well on tasks made of entity types seen and unseen during meta-training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "This Section details how data was prepared. First, utterances without any named entities and the ones that are longer than 40 sub-word units (given by the BERT tokenizer) were removed. For each dataset, less than 1% of utterances were longer than 40 subwords. Removing long utterances allowed us to increase the computation efficiency significantly without impacting the results too much. datasets statistics are given in Table 2 . 
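As an illustration, the filtering step described above can be sketched as follows, assuming the Hugging Face BERT tokenizer and the "other" label for non-entity words; the function name is illustrative only, not the released code.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def keep_utterance(words, labels):
    # Drop utterances that contain no named entity at all ...
    if all(label == "other" for label in labels):
        return False
    # ... and utterances longer than 40 WordPiece sub-word units.
    return len(tokenizer.tokenize(" ".join(words))) <= 40
```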
For SNIPS, we used the data preprocessed in https://github.com/ MiuLab/SlotGated-SLU/.", "cite_spans": [], "ref_spans": [ { "start": 422, "end": 429, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Dataset preparation and statistics", "sec_num": "6.1" }, { "text": "This section describes the search space for hyperparameters of each algorithm. The dropout parameter is the dropout of the additional layers on trop of BERT. In all settings, we used 0.1 for the BERT dropout and 64 for the batch size. During validation, we fine-tuned the current meta-model for 10 epochs, each epoch consisting of 64 batches, for each tasks. Validation Micro F1 was averaged over 5 sampled tasks with 128 queries each, using the same tasks in-between epochs to reduce the randomness. In the outer loop, we used early stopping with a patience of 4 and a maximum of 12 meta-epochs. At every meta-epoch, we reported the best epoch during the validation fine-tuning, to be used for meta-testing. The number of task per meta-epoch varies per algorithm and so is given in Tables 3 to 6 along with all the other parameters optimized. Bayesian optimization ran with 4 workers in parallel and a total of 30 training jobs, optimizing for the validation Micro F1. For Reptilebased algorithm, the number of steps stands for the number of steps used to compute the first order approximation (T in algorithms 2 and 3 of the main paper). Note that, Reptile was quite sensitive to hyper-parameter tuning and less stable than other approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hyper-parameters Tuning", "sec_num": "6.2" }, { "text": "Training times are reported in Table 8 . We used p2.xlarge AWS instances to train our models. Most of the training time actually is spent in validation that requires fine-tuning the meta-model.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 38, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Hyper-parameters Tuning", "sec_num": "6.2" }, { "text": "In Figure 2 , we reported how the performance of the best model increased overtime during hyperparameters tuning. Because, we used Bayesian optimization instead of random search, it would have been very computationally intensive to compute the expected validation performance as suggested by (Dodge et al., 2019) . Indeed, because random search produces i.i.d. trials, they can build an estimator of the validation performance and its variance at no cost. In our case, trials are dependant from the previous ones. We believe, Figure 2 provides a decent estimation of the budget needed for hyper-parameters tuning and how it affects the performance.", "cite_spans": [ { "start": 292, "end": 312, "text": "(Dodge et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 526, "end": 534, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Hyper-parameters Tuning", "sec_num": "6.2" }, { "text": "The best hyper-parameters per algorithm and per decoder is reported in Table 7 and the best validation Micro F1 is reported in Table 8 .", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 78, "text": "Table 7", "ref_id": null }, { "start": 127, "end": 134, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Hyper-parameters Tuning", "sec_num": "6.2" }, { "text": "All our models used almost the same number of parameters. 
The differences introduced by the CRFs are negligible compared to BERT (110 millions parameters). Putting aside BERT, without Prototypical Networks, the linear layer on top of BERT adds 768\u00d74\u00d7N parameters and the CRF transition matrix adds N \u00d7 N parameters. With Prototypical Networks, the linear layer on top of BERT adds 768 \u00d7 4 \u00d7 64 parameters and the CRF transition network adds 64 \u00d7 64 parameters. Table 9 list the validation Micro F1, the training time, the best number of meta-epochs and the best number of epochs that is reused to stop the training during meta-testing. Note that most of the training time of meta-training is spend during validation. 14:53:58 Table 8 : Best validation run found using Bayesian optimization. Micro F1 is averaged over 5 tasks. Results are reported with Gaussian 95% confidence interval. However, note that the same 5 validations tasks are used for every algorithms and models, which introduces a beneficial dependency. ", "cite_spans": [], "ref_spans": [ { "start": 461, "end": 468, "text": "Table 9", "ref_id": "TABREF4" }, { "start": 726, "end": 733, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Number of parameters", "sec_num": "6.3" }, { "text": "In our experiments, we also tried an alternative architecture consisting of a frozen BERT model topped with three ELU-activation linear layers with dropout(Clevert et al., 2016), motivated by the fact that fine-tuning a large capacity model with very few examples might degrade the performances. As the first architecture worked better by a significant margin for the baseline, we did not pursue further this alternative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Train Valid Test Train Valid Test Train Valid Test Utterances 9166 3832 1486 12868 13316 11547 107763 26562 26851 Entity types 27 5 7 20 6 8 84 18 20 ", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 157, "text": "Valid Test Train Valid Test Train Valid Test Utterances 9166 3832 1486 12868 13316 11547 107763 26562 26851 Entity types 27 5 7 20 6 8 84", "ref_id": null } ], "eq_spans": [], "section": "SNIPS TOP DSTC8", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Amazon Web Services", "authors": [], "year": 2017, "venue": "AWS SageMaker", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amazon Web Services. 2017. AWS SageMaker.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning to few-shot learn across diverse natural language classification tasks", "authors": [ { "first": "Trapit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Rishikesh", "middle": [], "last": "Jha", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03863" ] }, "num": null, "urls": [], "raw_text": "Trapit Bansal, Rishikesh Jha, and Andrew McCallum. 2019. Learning to few-shot learn across diverse nat- ural language classification tasks. 
arXiv preprint arXiv:1911.03863.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A closer look at few-shot classification", "authors": [ { "first": "Wei-Yu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yen-Cheng", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zsolt", "middle": [], "last": "Kira", "suffix": "" }, { "first": "Yu-Chiang Frank", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jia-Bin", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations, ICLR 2019", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu- Chiang Frank Wang, and Jia-Bin Huang. 2019. A closer look at few-shot classification. In 7th Inter- national Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Few-shot learning with meta metric learners", "authors": [ { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaoxiao", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Meta-Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Cheng, Mo Yu, Xiaoxiao Guo, and Bowen Zhou. 2019. Few-shot learning with meta metric learn- ers. In Proceedings of the 3rd Workshop on Meta- Learning (MetaLearn 2019).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Paraphrase generation for semi-supervised learning in nlu", "authors": [ { "first": "Eunah", "middle": [], "last": "Cho", "suffix": "" }, { "first": "He", "middle": [], "last": "Xie", "suffix": "" }, { "first": "William M", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eunah Cho, He Xie, and William M Campbell. 2019a. Paraphrase generation for semi-supervised learning in nlu. In Proceedings of the Workshop on Meth- ods for Optimizing and Evaluating Neural Language Generation.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Efficient semisupervised learning for natural language understanding by optimizing diversity", "authors": [ { "first": "Eunah", "middle": [], "last": "Cho", "suffix": "" }, { "first": "He", "middle": [], "last": "Xie", "suffix": "" }, { "first": "P", "middle": [], "last": "John", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Lalor", "suffix": "" }, { "first": "William M", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 IEEE Automatic Speech Recognition and Understanding Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eunah Cho, He Xie, John P Lalor, Varun Kumar, and William M Campbell. 2019b. Efficient semi- supervised learning for natural language understand- ing by optimizing diversity. In Proceedings of the 2019 IEEE Automatic Speech Recognition and Un- derstanding Workshop, ASRU 2019, Singapore, De- cember 14-18, 2019. 
IEEE.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Fast and accurate deep network learning by exponential linear units (elus)", "authors": [ { "first": "Djork-Arn\u00e9", "middle": [], "last": "Clevert", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Unterthiner", "suffix": "" }, { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" } ], "year": 2016, "venue": "4th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Djork-Arn\u00e9 Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2016. Fast and accurate deep network learning by exponential linear units (elus). In 4th International Conference on Learning Representa- tions, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "7059--7069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In H. Wal- lach, H. Larochelle, A. Beygelzimer, F. d\u00c1lch\u00e9 Buc, E. Fox, and R. Garnett, editors, Advances in Neu- ral Information Processing Systems 32, pages 7059- 7069. Curran Associates, Inc.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces", "authors": [ { "first": "Alice", "middle": [], "last": "Coucke", "suffix": "" }, { "first": "Alaa", "middle": [], "last": "Saade", "suffix": "" }, { "first": "Adrien", "middle": [], "last": "Ball", "suffix": "" }, { "first": "Th\u00e9odore", "middle": [], "last": "Bluche", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Caulier", "suffix": "" }, { "first": "David", "middle": [], "last": "Leroy", "suffix": "" }, { "first": "Cl\u00e9ment", "middle": [], "last": "Doumouro", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "Gisselbrecht", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Caltagirone", "suffix": "" }, { "first": "Thibaut", "middle": [], "last": "Lavril", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.10190" ] }, "num": null, "urls": [], "raw_text": "Alice Coucke, Alaa Saade, Adrien Ball, Th\u00e9odore Bluche, Alexandre Caulier, David Leroy, Cl\u00e9ment Doumouro, Thibault Gisselbrecht, Francesco Calta- girone, Thibaut Lavril, et al. 2018. Snips voice plat- form: an embedded spoken language understanding system for private-by-design voice interfaces. 
arXiv preprint arXiv:1805.10190.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Cross-lingual transfer learning with data selection for large-scale spoken language understanding", "authors": [ { "first": "Quynh", "middle": [], "last": "Do", "suffix": "" }, { "first": "Judith", "middle": [], "last": "Gaspers", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1455--1460", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quynh Do and Judith Gaspers. 2019. Cross-lingual transfer learning with data selection for large-scale spoken language understanding. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1455-1460.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Show your work: Improved reporting of experimental results", "authors": [ { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "Suchin", "middle": [], "last": "Gururangan", "suffix": "" }, { "first": "Dallas", "middle": [], "last": "Card", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2185--2194", "other_ids": { "DOI": [ "10.18653/v1/D19-1224" ] }, "num": null, "urls": [], "raw_text": "Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A. Smith. 2019. Show your work: Improved reporting of experimental results. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 2185- 2194, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "authors": [ { "first": "Chelsea", "middle": [], "last": "Finn", "suffix": "" }, { "first": "Pieter", "middle": [], "last": "Abbeel", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Levine", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning. 
JMLR.org", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th Interna- tional Conference on Machine Learning. JMLR.org.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Gaussian prototypical networks for few-shot learning on omniglot", "authors": [ { "first": "Stanislav", "middle": [], "last": "Fort", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1708.02735" ] }, "num": null, "urls": [], "raw_text": "Stanislav Fort. 2017. Gaussian prototypical networks for few-shot learning on omniglot. arXiv preprint arXiv:1708.02735.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Few-shot classification in named entity recognition task", "authors": [ { "first": "Alexander", "middle": [], "last": "Fritzler", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Maksim", "middle": [], "last": "Kretov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing", "volume": "", "issue": "", "pages": "993--1000", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Fritzler, Varvara Logacheva, and Maksim Kretov. 2019. Few-shot classification in named en- tity recognition task. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pages 993-1000.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Induction networks for few-shot text classification", "authors": [ { "first": "Ruiying", "middle": [], "last": "Geng", "suffix": "" }, { "first": "Binhua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yongbin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Jian", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3895--3904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, and Jian Sun. 2019. Induction networks for few-shot text classification. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3895-3904.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Data augmentation for low resource sentiment analysis using generative adversarial networks", "authors": [ { "first": "Rahul", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2019, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rahul Gupta. 2019. Data augmentation for low re- source sentiment analysis using generative adversar- ial networks. 
In IEEE International Conference on Acoustics, Speech and Signal Processing.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Anuj Kumar, and Mike Lewis", "authors": [ { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Rushin", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Mrinal", "middle": [], "last": "Mohit", "suffix": "" } ], "year": 2018, "venue": "Semantic parsing for task oriented dialog using hierarchical representations", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.07942" ] }, "num": null, "urls": [], "raw_text": "Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Ku- mar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representa- tions. arXiv preprint arXiv:1810.07942.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Generating sentences by editing prototypes", "authors": [ { "first": "Kelvin", "middle": [], "last": "Guu", "suffix": "" }, { "first": "B", "middle": [], "last": "Tatsunori", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Oren", "suffix": "" }, { "first": "", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association of Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation", "authors": [ { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Ziyun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4803--4809", "other_ids": { "DOI": [ "10.18653/v1/D18-1514" ] }, "num": null, "urls": [], "raw_text": "Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A large-scale supervised few-shot relation classifica- tion dataset with state-of-the-art evaluation. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 4803- 4809, Brussels, Belgium. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Few-shot learning with metricagnostic conditional embeddings", "authors": [ { "first": "Nathan", "middle": [], "last": "Hilliard", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Phillips", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Howland", "suffix": "" }, { "first": "Art\u00ebm", "middle": [], "last": "Yankov", "suffix": "" }, { "first": "D", "middle": [], "last": "Courtney", "suffix": "" }, { "first": "Nathan", "middle": [ "O" ], "last": "Corley", "suffix": "" }, { "first": "", "middle": [], "last": "Hodas", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.04376" ] }, "num": null, "urls": [], "raw_text": "Nathan Hilliard, Lawrence Phillips, Scott Howland, Art\u00ebm Yankov, Courtney D Corley, and Nathan O Hodas. 2018. Few-shot learning with metric- agnostic conditional embeddings. arXiv preprint arXiv:1802.04376.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Few-shot learning for named entity recognition in medical text", "authors": [ { "first": "Maximilian", "middle": [], "last": "Hofer", "suffix": "" }, { "first": "A", "middle": [], "last": "Kormilitzin", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "A", "middle": [], "last": "Nevado-Holgado", "suffix": "" } ], "year": 2018, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Hofer, A. Kormilitzin, Paul Goldberg, and A. Nevado-Holgado. 2018. Few-shot learning for named entity recognition in medical text. ArXiv, abs/1811.05468.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network", "authors": [ { "first": "Yutai", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Yongkui", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Zhihan", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Han", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, and Ting Liu. 2020. Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Sequence-to-sequence data augmentation for dialogue language understanding", "authors": [ { "first": "Yutai", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1807.01554" ] }, "num": null, "urls": [], "raw_text": "Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu. 2018. 
Sequence-to-sequence data augmentation for dialogue language understanding. arXiv preprint arXiv:1807.01554.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Attentive task-agnostic meta-learning for few-shot text classification", "authors": [ { "first": "Xiang", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Havaei", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Chartrand", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Chouaib", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Jesson", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Chapados", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Matwin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Workshop on Meta-Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Jiang, Mohammad Havaei, Gabriel Chartrand, Hassan Chouaib, Thomas Vincent, Andrew Jesson, Nicolas Chapados, and Stan Matwin. 2018. Atten- tive task-agnostic meta-learning for few-shot text classification. In Proceedings of the 2nd Workshop on Meta-Learning (MetaLearn 2018).", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Cross-lingual transfer learning for japanese named entity recognition", "authors": [ { "first": "Andrew", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Penny", "middle": [], "last": "Karanasou", "suffix": "" }, { "first": "Judith", "middle": [], "last": "Gaspers", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "182--189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Johnson, Penny Karanasou, Judith Gaspers, and Dietrich Klakow. 2019. Cross-lingual transfer learning for japanese named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 182-189.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Siamese neural networks for one-shot image recognition", "authors": [ { "first": "Gregory", "middle": [], "last": "Koch", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2015, "venue": "ICML Deep Learning Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Koch, Richard Zemel, and Ruslan Salakhut- dinov. 2015. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Work- shop.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Learning to classify intents and slot labels given a handful of examples", "authors": [ { "first": "Jason", "middle": [], "last": "Krone", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Krone, Yi Zhang, and Mona Diab. 2020. Learn- ing to classify intents and slot labels given a handful of examples. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A closer look at feature space data augmentation for few-shot intent classification", "authors": [ { "first": "Varun", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Hadrien", "middle": [], "last": "Glaude", "suffix": "" }, { "first": "Cyprien", "middle": [], "last": "De Lichy", "suffix": "" }, { "first": "Wlliam", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP", "volume": "", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.18653/v1/D19-6101" ] }, "num": null, "urls": [], "raw_text": "Varun Kumar, Hadrien Glaude, Cyprien de Lichy, and Wlliam Campbell. 2019. A closer look at feature space data augmentation for few-shot intent classi- fication. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 1-10, Hong Kong, China. As- sociation for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Eighteenth International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning (ICML 2001), Williams College, Williamstown, MA, USA, June 28 -July 1, 2001, pages 282-289. 
Morgan Kaufmann.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Meta-sgd: Learning to learn quickly for fewshot learning", "authors": [ { "first": "Zhenguo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fengwei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1707.09835" ] }, "num": null, "urls": [], "raw_text": "Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. 2017. Meta-sgd: Learning to learn quickly for few- shot learning. arXiv preprint arXiv:1707.09835.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons", "authors": [ { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Seventh Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "188--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew McCallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of the Seventh Conference on Natu- ral Language Learning, CoNLL 2003, Held in coop- eration with HLT-NAACL 2003, Edmonton, Canada, May 31 -June 1, 2003, pages 188-191. ACL.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Learning from one example through shared densities on transforms", "authors": [ { "first": "G", "middle": [], "last": "Erik", "suffix": "" }, { "first": "Nicholas", "middle": [ "E" ], "last": "Miller", "suffix": "" }, { "first": "Paul A", "middle": [], "last": "Matsakis", "suffix": "" }, { "first": "", "middle": [], "last": "Viola", "suffix": "" } ], "year": 2000, "venue": "Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No. PR00662)", "volume": "1", "issue": "", "pages": "464--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik G Miller, Nicholas E Matsakis, and Paul A Viola. 2000. Learning from one example through shared densities on transforms. In Proceedings IEEE Con- ference on Computer Vision and Pattern Recogni- tion. CVPR 2000 (Cat. No. PR00662), volume 1, pages 464-471. 
IEEE.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "The need for biases in learning generalizations", "authors": [ { "first": "Tom", "middle": [ "M" ], "last": "Mitchell", "suffix": "" } ], "year": 1980, "venue": "Rutgers University", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom M. Mitchell. 1980. The need for biases in learn- ing generalizations. Technical report, Rutgers Uni- versity, New Brunswick, NJ.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Proceedings of the 34th International Conference on Machine Learning", "authors": [ { "first": "Tsendsuren", "middle": [], "last": "Munkhdalai", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsendsuren Munkhdalai and Hong Yu. 2017. Meta networks. In Proceedings of the 34th International Conference on Machine Learning.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "On first-order meta-learning algorithms", "authors": [ { "first": "Alex", "middle": [], "last": "Nichol", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Achiam", "suffix": "" }, { "first": "John", "middle": [], "last": "Schulman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Active learning for new domains in natural language understanding", "authors": [ { "first": "Stanislav", "middle": [], "last": "Peshterliev", "suffix": "" }, { "first": "John", "middle": [], "last": "Kearney", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "90--96", "other_ids": { "DOI": [ "10.18653/v1/N19-2012" ] }, "num": null, "urls": [], "raw_text": "Stanislav Peshterliev, John Kearney, Abhyuday Jagan- natha, Imre Kiss, and Spyros Matsoukas. 2019. Ac- tive learning for new domains in natural language un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 90- 96, Minneapolis, Minnesota. Association for Com- putational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Deep contextualized word representations", "authors": [ { "first": "E", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. 
In Proceedings of NAACL-HLT, pages 2227-2237.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Text chunking using transformation-based learning", "authors": [ { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Mitch", "middle": [], "last": "Marcus", "suffix": "" } ], "year": 1995, "venue": "Third Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lance Ramshaw and Mitch Marcus. 1995. Text chunk- ing using transformation-based learning. In Third Workshop on Very Large Corpora.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset", "authors": [ { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Xiaoxue", "middle": [], "last": "Zang", "suffix": "" }, { "first": "Srinivas", "middle": [], "last": "Sunkara", "suffix": "" }, { "first": "Raghav", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Khaitan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.05855" ] }, "num": null, "urls": [], "raw_text": "Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. arXiv preprint arXiv:1909.05855.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Optimization as a model for few-shot learning", "authors": [ { "first": "Sachin", "middle": [], "last": "Ravi", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sachin Ravi and Hugo Larochelle. 2017. Optimization as a model for few-shot learning. In In International Conference on Learning Representations.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Metric learning with adaptive density discrimination", "authors": [ { "first": "Oren", "middle": [], "last": "Rippel", "suffix": "" }, { "first": "Manohar", "middle": [], "last": "Paluri", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Dollar", "suffix": "" }, { "first": "Lubomir", "middle": [], "last": "Bourdev", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.05939" ] }, "num": null, "urls": [], "raw_text": "Oren Rippel, Manohar Paluri, Piotr Dollar, and Lubomir Bourdev. 2015. Metric learning with adaptive density discrimination. arXiv preprint arXiv:1511.05939.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Evolutionary principles in self-referential learning. on learning now to learn: The meta-meta-meta", "authors": [ { "first": "Jurgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1987, "venue": "Diploma thesis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jurgen Schmidhuber. 1987. Evolutionary principles in self-referential learning. on learning now to learn: The meta-meta-meta...-hook. 
Diploma thesis, Tech- nische Universitat Munchen, Germany, 14 May.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Deltaencoder: an effective sample synthesis method for few-shot object recognition", "authors": [ { "first": "Eli", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Leonid", "middle": [], "last": "Karlinsky", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Shtok", "suffix": "" }, { "first": "Sivan", "middle": [], "last": "Harary", "suffix": "" }, { "first": "Mattias", "middle": [], "last": "Marder", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eli Schwartz, Leonid Karlinsky, Joseph Shtok, Sivan Harary, Mattias Marder, Abhishek Kumar, Rogerio Feris, Raja Giryes, and Alex Bronstein. 2018. Delta- encoder: an effective sample synthesis method for few-shot object recognition. In Advances in Neural Information Processing Systems.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Prototypical networks for few-shot learning", "authors": [ { "first": "Jake", "middle": [], "last": "Snell", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Swersky", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" }, { "first": ";", "middle": [ "I" ], "last": "Guyon", "suffix": "" }, { "first": "U", "middle": [ "V" ], "last": "Luxburg", "suffix": "" }, { "first": "S", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "H", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "R", "middle": [], "last": "Fergus", "suffix": "" }, { "first": "S", "middle": [], "last": "Vishwanathan", "suffix": "" }, { "first": "R", "middle": [], "last": "Garnett", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "4077--4087", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4077-4087. Curran Associates, Inc.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Portuguese Named Entity Recognition using BERT-CRF", "authors": [ { "first": "F\u00e1bio", "middle": [], "last": "Souza", "suffix": "" }, { "first": "Rodrigo", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Lotufo", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F\u00e1bio Souza, Rodrigo Nogueira, and Roberto Lotufo. 2019. Portuguese Named Entity Recognition using BERT-CRF. 
arXiv e-prints.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Hierarchical attention prototypical networks for few-shot text classification", "authors": [ { "first": "Shengli", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Qingfeng", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Tengchao", "middle": [], "last": "Lv", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "476--485", "other_ids": { "DOI": [ "10.18653/v1/D19-1045" ] }, "num": null, "urls": [], "raw_text": "Shengli Sun, Qingfeng Sun, Kevin Zhou, and Tengchao Lv. 2019. Hierarchical attention prototypical net- works for few-shot text classification. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 476-485, Hong Kong, China. Association for Computational Lin- guistics.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Learning to compare: Relation network for few-shot learning", "authors": [ { "first": "Flood", "middle": [], "last": "Sung", "suffix": "" }, { "first": "Yongxin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "H", "middle": [ "S" ], "last": "Philip", "suffix": "" }, { "first": "Timothy", "middle": [ "M" ], "last": "Torr", "suffix": "" }, { "first": "", "middle": [], "last": "Hospedales", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "1199--1208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1199-1208.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Rethinking few-shot image classification: a good embedding is all you need? CoRR, abs", "authors": [ { "first": "Yonglong", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dilip", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Joshua", "middle": [ "B" ], "last": "Tenenbaum", "suffix": "" }, { "first": "Phillip", "middle": [], "last": "Isola", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, and Phillip Isola. 2020. Rethinking few-shot image classification: a good embedding is all you need? 
CoRR, abs/2003.11539.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition", "authors": [ { "first": "Erik", "middle": [ "F" ], "last": "", "suffix": "" }, { "first": "Tjong Kim", "middle": [], "last": "Sang", "suffix": "" } ], "year": 2002, "venue": "COLING-02: The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Few-shot learning through an information retrieval lens", "authors": [ { "first": "Eleni", "middle": [], "last": "Triantafillou", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eleni Triantafillou, Richard Zemel, and Raquel Urta- sun. 2017. Few-shot learning through an informa- tion retrieval lens. In Advances in Neural Informa- tion Processing Systems.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Meta-dataset: A dataset of datasets for learning to learn from few examples", "authors": [ { "first": "Eleni", "middle": [], "last": "Triantafillou", "suffix": "" }, { "first": "Tyler", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Dumoulin", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Lamblin", "suffix": "" }, { "first": "Utku", "middle": [], "last": "Evci", "suffix": "" }, { "first": "Kelvin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Goroshin", "suffix": "" }, { "first": "Carles", "middle": [], "last": "Gelada", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Swersky", "suffix": "" }, { "first": "Pierre-Antoine", "middle": [], "last": "Manzagol", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.03096" ] }, "num": null, "urls": [], "raw_text": "Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pas- cal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Man- zagol, et al. 2019. Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Matching networks for one shot learning", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Blundell", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Lillicrap", "suffix": "" }, { "first": "Daan", "middle": [], "last": "Wierstra", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "3630--3638", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Charles Blundell, Timothy Lillicrap, ko- ray kavukcuoglu, and Daan Wierstra. 2016. Match- ing networks for one shot learning. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. 
Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 29, pages 3630-3638. Curran Asso- ciates, Inc.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "353--355", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018a. Glue: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: An- alyzing and Interpreting Neural Networks for NLP, pages 353-355.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Large margin few-shot learning", "authors": [ { "first": "Yong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiao-Ming", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Qimai", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Wangmeng", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "O", "middle": [ "K" ], "last": "Victor", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yong Wang, Xiao-Ming Wu, Qimai Li, Jiatao Gu, Wangmeng Xiang, Lei Zhang, and Victor O. K. Li. 2018b. Large margin few-shot learning.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R'emi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. 
ArXiv, abs/1910.03771.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Simple and effective few-shot named entity recognition with structured nearest neighbor learning", "authors": [ { "first": "Yi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Arzoo", "middle": [], "last": "Katiyar", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Yang and Arzoo Katiyar. 2020. Simple and effec- tive few-shot named entity recognition with struc- tured nearest neighbor learning. In Proceedings of the 2020 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP).", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Deep triplet ranking networks for one-shot recognition", "authors": [ { "first": "Meng", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Yuhong", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.07275" ] }, "num": null, "urls": [], "raw_text": "Meng Ye and Yuhong Guo. 2018. Deep triplet ranking networks for one-shot recognition. arXiv preprint arXiv:1804.07275.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Multi-level matching and aggregation network for few-shot relation classification", "authors": [ { "first": "Zhen-Hua", "middle": [], "last": "Zhi-Xiu Ye", "suffix": "" }, { "first": "", "middle": [], "last": "Ling", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2872--2881", "other_ids": { "DOI": [ "10.18653/v1/P19-1277" ] }, "num": null, "urls": [], "raw_text": "Zhi-Xiu Ye and Zhen-Hua Ling. 2019. Multi-level matching and aggregation network for few-shot re- lation classification. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 2872-2881, Florence, Italy. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Data augmentation for spoken language understanding via joint variational generation", "authors": [ { "first": "Youhyun", "middle": [], "last": "Kang Min Yoo", "suffix": "" }, { "first": "Sang", "middle": [], "last": "Shin", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kang Min Yoo, Youhyun Shin, and Sang goo Lee. 2018. 
Data augmentation for spoken language understand- ing via joint variational generation.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Diverse few-shot text classification with multiple metrics", "authors": [ { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaoxiao", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Jinfeng", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Shiyu", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Saloni", "middle": [], "last": "Potdar", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Gerald", "middle": [], "last": "Tesauro", "suffix": "" }, { "first": "Haoyu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the North American Chapter", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse few-shot text clas- sification with multiple metrics. In Proceedings of the North American Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Metagan: An adversarial approach to few-shot learning", "authors": [ { "first": "Ruixiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Che", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Yangqiu", "middle": [], "last": "Song", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruixiang Zhang, Tong Che, Zoubin Ghahramani, Yoshua Bengio, and Yangqiu Song. 2018. Metagan: An adversarial approach to few-shot learning. In Ad- vances in Neural Information Processing Systems.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Data augmentation with atomic templates for spoken language understanding", "authors": [ { "first": "Zijian", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Su", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.10770" ] }, "num": null, "urls": [], "raw_text": "Zijian Zhao, Su Zhu, and Kai Yu. 2019. Data augmen- tation with atomic templates for spoken language un- derstanding. arXiv preprint arXiv:1908.10770.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Averaged Micro F1 over the same 5 tasks randomly drawn from the SNIPS validation split during Bayesian optimization of the hyper-parameters. Each dot represents one meta-training. 
The lines indicate the best model performance over time.", "type_str": "figure", "num": null }, "TABREF0": { "html": null, "num": null, "text": "Micro F1 per meta-train/meta-test dataset pair; the recovered rows are listed in the table content (caption given under TABREF1).", "type_str": "table", "content": "
Meta-train dataset | SNIPS | TOP | DSTC8 | TOP | DSTC8
Meta-test dataset | SNIPS | SNIPS | SNIPS | TOP | DSTC8
Baseline CRF | 73.68 \u00b1 3.41 | N/A | N/A | 51.09 \u00b1 5.06 | 34.57 \u00b1 4.70
Baseline SoftMax | 76.84 \u00b1 3.75 | N/A | N/A | 48.18 \u00b1 4.78 | 35.18 \u00b1 3.27
ProtoNet CRF | 89.67 \u00b1 0.63 | 78.78 \u00b1 1.14 | 82.88 \u00b1 0.99 | 64.99 \u00b1 3.51 | 75.69 \u00b1 2.53
ProtoNet SoftMax | 87.11 \u00b1 1.26 | 78.49 \u00b1 1.37 | 80.37 \u00b1 1.51 | 62.08 \u00b1 3.58 | 66.39 \u00b1 2.73
ProtoNet* CRF | 58.56 \u00b1 1.78 | 44.75 \u00b1 1.92 | 52.97 \u00b1 2.04 | 29.53 \u00b1 4.40 | 71.49 \u00b1 3.81
ProtoNet* SoftMax | 54.52 \u00b1 1.82 | 43.23 \u00b1 2.08 | 45.77 \u00b1 1.26 | 28.34 \u00b1 3.74 | 60.07 \u00b1 2.62
Reptile CRF | 80.08 \u00b1 3.58 | 74.85 \u00b1 3.47 | 75.06 \u00b1 3.32 | 57.18 \u00b1 6.02 | 70.50 \u00b1 2.60
Reptile SoftMax | 80.00 \u00b1 3.51 | 75.82 \u00b1 3.48 | 75.14 \u00b1 3.45 | 57.64 \u00b1 5.96 | 71.06 \u00b1 2.77
Proto-Reptile CRF | 89.20 \u00b1 0.89 | 80.50 \u00b1 1.24 | 82.96 \u00b1 1.19 | 67.34 \u00b1 3.87 | 78.96 \u00b1 1.60
Proto-Reptile SoftMax | 88.09 \u00b1 0.90 | 77.53 \u00b1 1.30 | 79.83 \u00b1 1.74 | 64.06 \u00b1 3.75 | 62.56 \u00b1 2.14
Proto-Reptile* CRF | 49.98 \u00b1 2.02 | 48.09 \u00b1 1.85 | 51.63 \u00b1 1.37 | 33.78 \u00b1 3.41 | 75.22 \u00b1 2.44
Proto-Reptile* SoftMax | 58.41 \u00b1 1.63 | 44.14 \u00b1 1.88 | 37.93 \u00b1 1.23 | 24.63 \u00b1 3.68 | 58.09 \u00b1 2.55
" }, "TABREF1": { "html": null, "num": null, "text": "Micro F1 averaged over 50 tasks. Results are reported with a Gaussian 95% confidence interval. Asterisks indicate that prototypes were not finetuned. The best result per column is in bold.", "type_str": "table", "content": "
Plot (recovered axis and legend information only): y-axis Micro F1 with ticks 0-80; x-axis (K,N) used for training; one panel N = 4 with training configurations (5,4), (10,4), (20,4) and one panel K = 10 with training configurations (10,4), (10,9), (10,14); sub-labels Testing on (K,N) and Testing on (10,4); series Proto-Reptile+CRF, Proto-Reptile+SoftMax, Proto-Reptile*+CRF, Proto-Reptile*+SoftMax, Baseline+CRF, Baseline+SoftMax.
" }, "TABREF4": { "html": null, "num": null, "text": "Validation Micro F1 with Gaussian 95% confidence interval and training times.", "type_str": "table", "content": "" } } } }