|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:55:20.322679Z" |
|
}, |
|
"title": "Overcoming Conflicting Data when Updating a Neural Semantic Parser", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Gaddy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"settlement": "Berkeley *" |
|
} |
|
}, |
|
"email": "dgaddy@berkeley.edu" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Kouzemtchenko", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"settlement": "Berkeley *" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Reddy", |
|
"middle": [], |
|
"last": "Muddireddy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"settlement": "Berkeley *" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Prateek", |
|
"middle": [], |
|
"last": "Kolhar", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"settlement": "Berkeley *" |
|
} |
|
}, |
|
"email": "pkolhar@google.com" |
|
}, |
|
{ |
|
"first": "Rushin", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"settlement": "Berkeley *" |
|
} |
|
}, |
|
"email": "rushinshah@google.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we explore how to use a small amount of new data to update a task-oriented semantic parsing model when the desired output for some examples has changed. When making updates in this way, one potential problem that arises is the presence of conflicting data, or out-of-date labels in the original training set. To evaluate the impact of this understudied problem, we propose an experimental setup for simulating changes to a neural semantic parser. We show that the presence of conflicting data greatly hinders learning of an update, then explore several methods to mitigate its effect. Our multi-task and data selection methods lead to large improvements in model accuracy compared to a naive datamixing strategy, and our best method closes 86% of the accuracy gap between this baseline and an oracle upper bound. * Work performed during an internship at Google.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we explore how to use a small amount of new data to update a task-oriented semantic parsing model when the desired output for some examples has changed. When making updates in this way, one potential problem that arises is the presence of conflicting data, or out-of-date labels in the original training set. To evaluate the impact of this understudied problem, we propose an experimental setup for simulating changes to a neural semantic parser. We show that the presence of conflicting data greatly hinders learning of an update, then explore several methods to mitigate its effect. Our multi-task and data selection methods lead to large improvements in model accuracy compared to a naive datamixing strategy, and our best method closes 86% of the accuracy gap between this baseline and an oracle upper bound. * Work performed during an internship at Google.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Most work in semantic parsing (and NLP in general) considers a scenario where the desired outputs of a model are static and can be specified with a large, fixed dataset. However, when deploying a semantic parsing model in a real world virtual assistant, it is often necessary to update a model to support new features or to enable improvements in downstream processing. For example, creators of a media assistant may want to add new commands specialized towards new media types like podcasts, or those of a navigation assistant might like to add a new feature to allow users to specify roads to avoid. Such changes require that the structures output by the model are updated, either by introducing a new intent (a representation of a desired action) or re-configuring arguments of existing intents. In this work, we investigate the best way to make updates to the intents and arguments of a task-oriented neural semantic parsing model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To make an update, new data annotations must be collected to specify the new form that is desired for model outputs. Because changes can be quite frequent, we would like to be able to collect a small amount of data for each update (on the order of 50 examples) and merge the new information with a much larger existing dataset. Naively, we might hope to simply combine new data in with the old and train on the combination. However, this approach has the problem that some of the older data with out-of-date labels may conflict with the new labeling. These conflicts occur whenever inputs that would be affected by a change appear in the original dataset. For example, when introducing a new intent for podcasts, the original dataset may have included podcast examples labeled with a more generic media intent or a label indicating that the feature is unsupported. When introducing a new argument, say 'roads to avoid', there may be instances in the original dataset that should have this argument labeled but do not because they were annotated before the argument was introduced. This conflicting data can confuse the model and cause it to predict labels as they were before an update rather than after. Unfortunately, this problem of conflicting data during model updates is understudied in the academic literature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To enable exploration of this problem on publicly available datasets, we propose a method to easily create simulated updates with conflicts (Section 3.3) and release our update data. The idea behind our method is to form updates in the reverse direction, relabeling instances of a particular intent or argument to simulate out-of-date labels. Using our proposed setup, we demonstrate how conflicting data greatly hinders learning of updates (Section 5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In addition, we explore several methods for mitigating the negative effects of conflicting data by modifying how we combine the old and the new data (Section 6). One approach is to keep the old and new datasets separate, but to share information indirectly with either fine-tuning or multi-task learning. Another approach is to explicitly filter data that is likely to conflict using a learned classifier. Each of these methods substantially improves performance compared to naively mixing in new data, establishing strong baselines for future work on this task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In summary, the contributions of this work are 1) establishing an experimental setup to test updates with conflicting data, 2) demonstrating that conflicting data leads to large losses in model performance if left unmitigated, and 3) exploring several possible approaches to mitigate this problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There has been a substantial amount of prior work on making updates to neural models (Xiao et al., 2014; Rusu et al., 2016; Li and Hoiem, 2016; Kirkpatrick et al., 2017; Castro et al., 2018) , demonstrating a recognition in the community that the ability to update models is important. However, most of these works consider a setting where none of the original data conflicts with the new labels. Thus these works, and the general continual learning and class-incremental learning literature, assume that the space of inputs affected by a change does not appear at all in the original data. In many scenarios, this assumption does not hold because the original dataset will aim to cover the full distribution of inputs a model might encounter. Because of the non-conflicting assumption, this body of prior work focuses on other questions such as what can be done when the original data is no longer available.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 104, |
|
"text": "(Xiao et al., 2014;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 123, |
|
"text": "Rusu et al., 2016;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 143, |
|
"text": "Li and Hoiem, 2016;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 169, |
|
"text": "Kirkpatrick et al., 2017;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 190, |
|
"text": "Castro et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "One paper that does consider updates with label conflicts is Chen and Moschitti (2019) . Although they do not intentionally set out to study conflictingdata updates, in their NER task locations where new labels apply are tagged with special \"outside\" labels prior to the update, which cause conflicts with the new labels. While their work avoids the conflicting data problem by considering a setting where the original data is no longer available, our experiments show that it can be advantageous to instead keep the original data around and use more direct methods to avoid problems of conflicting data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 86, |
|
"text": "Chen and Moschitti (2019)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our work also has parallels to the concept drift literature (Tsymbal, 2004; Lu et al., 2018) . However, concept drift work focuses on unintentional changes to a natural data distribution over time, while our work is concerned with intentional restructuring of the output representation space, leading to very a different setup and choice of methods. In particular, that work operates over a stream of examples where changes often occur gradually, does not generally include structured prediction tasks, and does not allow the practitioner to introduce additional structure on the types of annotation given (as we do in Section 3.2).", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 75, |
|
"text": "(Tsymbal, 2004;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 76, |
|
"end": 92, |
|
"text": "Lu et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Finally, our work relates to work on training with noisy labels (Sukhbaatar et al., 2014; Veit et al., 2017; Jiang et al., 2018) , since the incorrect labels from conflicting data could be viewed as a type of noise. However, it is important to evaluate the problem of conflict-causing updates separately from other noisy label problems because the distribution of incorrect labels due to an update will be very different from most other sources of label noise. While not all noisy-label methods can directly apply to our task (as many are designed for classification as opposed to structured prediction) and may not take full advantage of the additional structure of our problem, we believe this line of work can still serve as a source of inspiration for future exploration on our task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 89, |
|
"text": "(Sukhbaatar et al., 2014;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 90, |
|
"end": 108, |
|
"text": "Veit et al., 2017;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 128, |
|
"text": "Jiang et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The experiments in this paper focus on a semantic parsing task where the goal is to generate a tree structure conditioned on an input sentence. We use the task formulation and data from Gupta et al. (2018) as the basis of our setup. Output trees are made up of intents and arguments (aka slots), where intents come from a fixed inventory of labels, and arguments consist of an argument-type label along with a value. Argument values may either be freeform text selected from the input sentence, or a nested intent to form a hierarchical structure. In this work, we represent these trees with a linearized form using nested brackets, which allows for the use of standard sequence-to-sequence models (see Section 4 for details of our base model and Figure 1 below for some example inputs and outputs).", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 205, |
|
"text": "Gupta et al. (2018)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 747, |
|
"end": 755, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Preliminaries -Task-oriented Semantic Parsing", |
|
"sec_num": "3.1" |
|
}, |
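{

"text": "The bracketed linearization described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact encoding: the nested-dict tree format and the example labels are our own assumptions.",

"code_sketch": true

},

```python
# Minimal sketch of linearizing a TOP-style semantic parse tree into a
# bracketed token sequence via depth-first traversal, suitable for a
# sequence-to-sequence decoder. The tree representation (nested dicts)
# and labels are illustrative, not the paper's actual data structure.

def linearize(node):
    """Depth-first traversal producing tokens like '[IN:GET_INFO_TRAFFIC ... ]'."""
    tokens = ["[" + node["label"]]
    for child in node.get("children", []):
        if isinstance(child, str):
            tokens.append(child)  # free-form text copied from the input sentence
        else:
            tokens.extend(linearize(child))  # nested intent or argument
    tokens.append("]")
    return tokens

tree = {
    "label": "IN:GET_INFO_TRAFFIC",
    "children": [
        "traffic",
        {"label": "SL:LOCATION", "children": ["to", "Berkeley"]},
    ],
}
print(" ".join(linearize(tree)))
```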
|
{ |
|
"text": "In this paper, we focus on the task of making a single update to the intent and argument structure output by a model, where the update is specified by collecting a small amount of additional data. 1 Accordingly, our task setup expects two sets of data: a large amount of data from before a change, which we will call the V1 (version one) set, and a small amount of data from after a change, which we call the V2 set. The V1 set represents the current state of the system with any data collected in the past, while the new V2 set is collected specifically for the purpose of introducing a particular update. For the purposes of testing methods in this paper, we will form these two sets synthetically, as described in Section 3.3 below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Updates", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Because the V2 set is gathered explicitly for the purpose of introducing a particular update, we would like the input distribution of this data to be targeted rather than uniformly covering the full input space. Ideally, a substantial portion of this data should be examples whose labels will actually change after the update. We will call this portion of data the changed set. However, it is also useful to specify some examples that are not affected by a change, so that we can accurately determine the scope of a change. We will call inputs whose label would be the same under the new and old label scheme unchanged examples. We will include a set of unchanged examples in the V2 data to show that these labels have been confirmed under the V2 labeling scheme.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Updates", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Many types of changes that we care about only apply to examples labeled in a particular way in the original V1 data. For example, a new argument can often only be used for specific intents, or a new intent may only apply to examples previously labeled as unsupported. By taking advantage of this information, we may be able to avoid some unwanted side effects of our model updates. To help us handle this information, we define a third category of examples: trivially-unchanged. The trivially-unchanged partition contains all examples which we can determine to be unaffected based solely on the original V1 labels and some simple hand-defined rule like a list of affected intents. By identifying these examples, we can directly include them in the updated training set without causing label conflicts. In the remainder of this paper, we reserve the term unchanged to refer specifically to unchanged examples that do not fall into the trivially-unchanged category. Thus, the unchanged partition represents the remaining hard examples that are difficult to distinguish from the changed set in the V1 data. See Figure 1 for examples of how our three data partitions apply to particular updates.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1108, |
|
"end": 1116, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Updates", |
|
"sec_num": "3.2" |
|
}, |
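The trivially-unchanged partition described above can be computed mechanically from the V1 labels. A minimal sketch, assuming a hand-defined rule given as a set of affected intents (the intent names and example records below are illustrative, not from the TOP dataset):

```python
# Hedged sketch: partition V1 examples using a hand-defined rule (here,
# the set of intents that an update could affect). Everything outside the
# affected intents is trivially-unchanged and safe to reuse; the rest may
# be changed or (hard) unchanged and cannot be distinguished from V1 alone.

AFFECTED_INTENTS = {"IN:GET_INFO_ROAD_CONDITION"}  # illustrative rule

def partition_v1(examples):
    """Split V1 data into (trivially-unchanged, possibly-affected) lists."""
    trivially_unchanged, possibly_affected = [], []
    for ex in examples:
        if ex["intent"] in AFFECTED_INTENTS:
            possibly_affected.append(ex)   # changed or hard-unchanged
        else:
            trivially_unchanged.append(ex)  # cannot conflict with the update
    return trivially_unchanged, possibly_affected

v1 = [
    {"intent": "IN:GET_INFO_ROAD_CONDITION", "text": "how are the roads"},
    {"intent": "IN:GET_EVENT", "text": "concerts tonight"},
]
safe, hard = partition_v1(v1)
```

Because the rule inspects only the V1 label, the trivially-unchanged examples can be included in updated training without any risk of label conflicts.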
|
{ |
|
"text": "In some instances, it is also useful to talk about the data in the original V1 set in terms of the three partitions (changed, unchanged, and triviallyunchanged) . In this case, these labels refer to whether an example would change if we had gathered new labels for them. In actuality, the changed subset will have out-of-date labels in V1, and we call these examples conflicting data. Our experiments show that conflicting data causes substantial problems when learning an update (see Section 5 and 7), but unfortunately, the examples that make up the conflicting set cannot be easily identified in the V1 data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 160, |
|
"text": "(changed, unchanged, and triviallyunchanged)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Updates", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To enable exploration of the conflicting data problem, we demonstrate a method for easily simulating updates with conflicts on an existing publicly available dataset. We form our synthetic updates in the reverse direction -the data from the original dataset represents the final V2 form, and we modify some examples to represent a V1 (pre-update) form. We aim to make changes that can be done automatically in the reverse direction, while still being interesting to learn in the forward direction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating Synthetic Changes", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "To form our updates, we select a particular intent or argument from the dataset to simulate the introduction of. We sample a set of examples labeled with the selected intent or argument to form the primary part of the V2 training set. For other examples with this intent or argument, we will keep them as part of the V1 training set, but re-label them to some form they may have taken before the new intent or argument was introduced. These re-labeled examples act as conflicting data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating Synthetic Changes", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For example, suppose we would like to simulate the creation of the intent GET_INFO_TRAFFIC, as shown at the top of Figure 1 . After selecting a subset of GET_INFO_TRAFFIC examples for the V2 training set, we form a conflicting set in the V1 data by re-labeling GET_INFO_TRAFFIC examples to use the related intent GET_INFO_ROAD_CONDITION. For this particular change, we kept the arguments the same when updating, but for others we remove Examples not labeled with one of the seven intents that allow the SL:OBSTRUCTION slot are trivially-unchanged. Figure 1 : Examples from different types of updates that we simulate. To simulate updates, we use the original dataset as the V2 form, and modify some examples to a V1 form in a way that is not easily reversed. We describe our data in terms of three partitions: changed, unchanged, and trivially-unchanged -as described in Section 3.2. Note that while trivially-unchanged examples can be easily identified from their V1 labels, changed and unchanged examples cannot be easily distinguished in V1. arguments or relabel them in the V1 set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 123, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 556, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Creating Synthetic Changes", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Because the examples with our \"new\" intent or argument are different between the V1 and V2 data, those examples make up the changed subset. Recall that we would also like to include some unchanged examples in the V2 training set. To select examples for this partition, we find other examples labeled with the intent that we re-labeled our conflicting set to have. For our GET_INFO_TRAFFIC intent example, this means finding some examples with the label GET_INFO_ROAD_CONDITION, since these examples look similar to the re-labeled conflicting examples but should have the same labels in V1 and V2. Since all of the examples where this update applies are labeled as GET_INFO_ROAD_CONDITION in the V1 training set, any example with a different V1 label can be considered trivially-unchanged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating Synthetic Changes", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In total, we form five different synthetic updates to use in our experiments. 2 Figure 1 also shows examples for two other types of updates: introducing a new intent for previously unsupported inputs and introducing a new argument. The updates are formed from the TOP dataset, which contains over 40,000 English queries about navigation or events labeled with tree-based semantic parses (Gupta et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 387, |
|
"end": 407, |
|
"text": "(Gupta et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 88, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Creating Synthetic Changes", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The semantic parsing model we use for our experiments is a sequence-to-sequence model based on the transformer architecture (Vaswani et al., 2017) . We use a sequence-to-sequence model because of their flexibility and widespread use in semantic parsing (Jia and Liang, 2016; Dong and Lapata, 2016; Rongali et al., 2020) and NLP in general. Our model encodes a language input using a pretrained 12-layer BERT model (Devlin et al., 2019) , then decodes a parse tree flattened by depth-first traversal. At each step, the decoder can generate either 1) a labeled bracket representing an intent or argument label 2) a closing bracket or 3) an index of an input token to be copied. The hyperparameters of our model architecture and training can be found in Appendix A. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 146, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 253, |
|
"end": 274, |
|
"text": "(Jia and Liang, 2016;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 297, |
|
"text": "Dong and Lapata, 2016;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 319, |
|
"text": "Rongali et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 435, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Before we describe and test our methods for mitigating conflicts, this section will briefly explore how conflicting data affects learning. We evaluate model updates both with and without conflicting data, and compare accuracies as we vary the amount of new V2 data being introduced. The non-conflicting setting represents an oracle where all conflicting data is removed, which we can easily simulate in our synthetic data-creation process but is generally not achievable on real-world data without additional manual annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Effect of Conflicting Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "More precisely, when evaluating updates with conflicting data, we include 50 examples with outof-date labels in the changed category. We mix these examples with a full set of unchanged and trivially-unchanged examples to represent a V1 training set from before an update. We then introduce different amounts of changed examples with updated labels to act as the V2 training set, with sizes ranging from from half the conflicting set size (25) to four times as much (200). For each data size, we measure accuracy using an exact match metric, meaning the entire tree output by the model must match the reference to be considered correct. For the non-conflicting setting, we do not include the 50 examples with out-of-date labels, but otherwise use the same setup.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Effect of Conflicting Data", |
|
"sec_num": "5" |
|
}, |
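The exact match metric described above is straightforward to implement over linearized trees. A minimal sketch (the bracketed strings below are illustrative placeholders, not real TOP parses):

```python
# Hedged sketch of exact-match accuracy: a prediction counts as correct
# only if the entire output tree (here, its linearized string form)
# matches the reference exactly.

def exact_match_accuracy(predictions, references):
    """Fraction of examples whose predicted tree equals the reference tree."""
    assert len(predictions) == len(references)
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

preds = ["[IN:A x ]", "[IN:B y ]", "[IN:A z ]"]
refs  = ["[IN:A x ]", "[IN:B q ]", "[IN:A z ]"]
```

With these toy lists, two of three trees match, giving an accuracy of 2/3; a single wrong argument anywhere in the tree makes the whole example incorrect.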
|
{ |
|
"text": "The results for this experiment are shown in Figure 2, after averaging over five different changes (a detailed breakdown of results can be found in Appendix C). To the left of the graph, we see that when the amount of conflicting data is greater than the size of the new data being added, we get less than half of the accuracy we would get without conflicting data. While the gap narrows somewhat with more data, even when we introduce four times as much new data as there are conflicting examples, the presence of conflicting data still leads to a loss of over 10% accuracy. These results show just how detrimental conflicting data can be to the learning of a model update.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 51, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Effect of Conflicting Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this work, we consider three methods to alleviate the problems caused by conflicting data: finetuning, multi-task learning with separate decoder heads, and data filtering with a learned classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mitigation Methods", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our first and simplest method for handling conflicting data is fine-tuning. For this approach, we first train a model on only the V1 training data, then after training completes, we take the final parameters and use those as initialization for training with the V2 data. This approach can alleviate the conflicting data problem because the second stage of training does not include any conflicting data and the model will have less confusion about how to label changed examples. By first training on the V1 data, we are also able to benefit indirectly from the larger amount of data contained in it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fine-tuning", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "During the second stage of training, we train on the changed and unchanged data in the V2 training set, as well as the trivially-unchanged examples from the V1 training set. Trivially-unchanged data from the V1 data can be included in V2 training because it can be easily identified and is known to not conflict.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fine-tuning", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "For our next method, we use an approach from multi-task learning where multiple decoder heads are used for different sets of data (Caruana, 1997; Fan et al., 2017) . Our two \"tasks\" correspond to the V1 data and the V2 data, and we use a separate set of parameters for the final pre-softmax layer of the decoder for each of the versions (as illustrated in Figure 3 ). The V1 head is only trained with V1 data and the V2 head is only trained with V2 data, but the encoder layers and decoder transformer are shared between both. This way, the V2 head is never trained on conflicting examples from V1, but the overall V2 model can still benefit from Figure 3 : In our multi-task method, we feed the new V2 data to a separate decoder head to separate it from the possibly-conflicting data in the original V1 training set. The encoder layers and decoder transformer layers are shared between V1 and V2. some information in V1 data indirectly through the shared encoder and decoder transformer layers. After training, the V1 head can be discarded, and the V2 head is used to make decisions. We train on both versions simultaneously, with each batch containing some of both types of data. As with our other methods, the V2 head is also trained on trivially-unchanged data from the V1 data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 145, |
|
"text": "(Caruana, 1997;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 146, |
|
"end": 163, |
|
"text": "Fan et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 356, |
|
"end": 364, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 655, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-task", |
|
"sec_num": "6.2" |
|
}, |
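The separate-head layout can be sketched in plain Python. This is a toy stand-in for the paper's transformer, offered only to make the parameter sharing concrete: the class name, dimensions, and the linear-algebra "model" are our own assumptions, not the actual architecture.

```python
# Hedged sketch of multi-task decoder heads: all backbone parameters
# (encoder + decoder transformer in the paper) are shared, while each
# data version gets its own final pre-softmax projection. During
# training, V1 batches route through the "v1" head and V2 batches
# through the "v2" head; at inference only the "v2" head is used.

import random

class SharedBackboneTwoHeads:
    def __init__(self, hidden_dim, vocab_size, seed=0):
        rng = random.Random(seed)
        # Shared backbone parameters (a single toy linear layer here).
        self.shared = [[rng.gauss(0, 0.1) for _ in range(hidden_dim)]
                       for _ in range(hidden_dim)]
        # One output projection per data version.
        self.heads = {
            version: [[rng.gauss(0, 0.1) for _ in range(hidden_dim)]
                      for _ in range(vocab_size)]
            for version in ("v1", "v2")
        }

    def logits(self, features, version):
        """Run shared backbone, then the version-specific output head."""
        hidden = [sum(w * x for w, x in zip(row, features)) for row in self.shared]
        return [sum(w * h for w, h in zip(row, hidden)) for row in self.heads[version]]

model = SharedBackboneTwoHeads(hidden_dim=4, vocab_size=3)
out_v1 = model.logits([1.0, 0.0, 0.0, 0.0], "v1")
out_v2 = model.logits([1.0, 0.0, 0.0, 0.0], "v2")
```

The key property is visible in the sketch: gradients from both versions update `self.shared`, so V1 data still shapes the shared representation, but conflicting V1 labels never touch the V2 head's parameters.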
|
{ |
|
"text": "The goal behind this approach is similar to that of fine-tuning: to avoid training the V2 model directly on the possibly-conflicting V1 data while still sharing some amount of information through model parameters. However, unlike fine-tuning, which is liable to forget information from V1 as training progresses, the simultaneous training for the multi-task method keeps the V1 information active for better sharing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-task", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The final method we explore in this work is classifier-based data selection. The idea behind the data selection strategy is to explicitly select examples from the original V1 data that we don't think will conflict. Using the small amount of V2 data, we train a classifier to predict whether an example will be changed or unchanged, and then apply this classifier to the V1 data, as illustrated in Figure 4 . We can then include the selected examples in our updated training set, allowing us to take advantage of more information from the original training set while filtering out many of the problematic conflicting examples.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 397, |
|
"end": 405, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Classifier-based Data Selection", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "We first train a classifier on the V2 data to learn a binary decision between changed and unchanged examples. This training requires that we can distinguish between which examples are changed and unchanged, which can either be specified as part of the annotation process, or can be estimated by Figure 4 : Our selection classification method uses a binary classifier to filter the original V1 data based on whether it is likely to be out-of-date.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 303, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Classifier-based Data Selection", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "running an existing V1 model to predict old labels for the provided V2 examples (in this work we use the annotation method, as part of our synthetic data-creation process). Our classifier uses the same BERT encoder as our sequence-to-sequence parsing model and is initialized with parameters from a parsing model for the V1 data. Representations of this encoder are averaged across time before feeding into a small feedforward network with a hidden dimension of 512. After training on the V2 set, the binary classifier is run on the V1 training set (excluding the trivially-unchanged examples, which can be automatically included as-is for V2 training). This creates a categorization of predictedchanged and predicted-unchanged, which we hope will closely approximate the true changed and unchanged sets. For many of the changes we tested, the classifiers performed quite well, with accuracies above 90% on held-out data. For examples in predicted-unchanged, we will include them in the set of training examples used to train our updated model. For examples in predicted-changed, we do not want to directly include them, and consider two possible solutions: 1) remove them completely or 2) include the examples with an intent-only loss, as described below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifier-based Data Selection", |
|
"sec_num": "6.3" |
|
}, |
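To make the selection step concrete, here is a minimal sketch of classifier-based data selection. It is illustrative only: `predict_changed` stands in for the trained BERT-based classifier, and the example queries and the 0.5 threshold are hypothetical, not from the paper.

```python
# Illustrative sketch of classifier-based data selection.
# `predict_changed` stands in for the trained BERT-based classifier,
# returning the probability that an example's V1 label is out of date.
def select_v1_examples(v1_examples, predict_changed, threshold=0.5):
    """Split non-trivially-unchanged V1 examples by predicted change status."""
    predicted_unchanged, predicted_changed = [], []
    for ex in v1_examples:
        if predict_changed(ex["query"]) >= threshold:
            predicted_changed.append(ex)    # conflict risk: drop, or use intent-only loss
        else:
            predicted_unchanged.append(ex)  # label likely still valid; reuse as-is
    return predicted_unchanged, predicted_changed

# Toy stand-in for the classifier and data (hypothetical examples).
examples = [{"query": "traffic on the highway"}, {"query": "are roads icy"}]
keep, filtered = select_v1_examples(
    examples, predict_changed=lambda q: 0.9 if "traffic" in q else 0.1)
```

The predicted-unchanged set (`keep`) is merged with the V2 data for training, while the predicted-changed set (`filtered`) is either dropped or given the intent-only loss described next.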
|
{ |
|
"text": "The intent-only loss variant of the data selection method aims to extract more information from the predicted-changed examples without requiring us to know the full form of the tree after a change. While we know that these examples are likely to have changed, in general we do not know if or how the full argument structure will change. However, it is usually possible to know what the intent should be for the changed examples. If the update introduces a new intent, we can use this new intent for the predicted-changed examples. If the update only affects arguments, we can keep the original intents. Which of these cases applies can be specified manually (as we do in our experiments), or could likely be determined automatically by running a V1 model on the V2 data and comparing the intents. Once we have determined the new top-level intent for the predicted-changed examples, we include them as special training examples that only receive a loss on their intent. Since the intent is the first token predicted by the sequence-to-sequence decoder, we can simply mask out the loss for the rest of the tokens in the sequence. With this masking, argument structure prediction is unaffected by these examples, and the model must defer to other examples, such as those in the V2 training set, to learn argument labeling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intent-only Loss", |
|
"sec_num": "6.3.1" |
|
}, |
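The masking can be sketched as follows; this is a toy stand-in for the actual sequence-to-sequence training loss, and the per-token loss values are made up for illustration.

```python
# Sketch of the intent-only loss: for predicted-changed examples, only the
# first decoder position (the top-level intent token) contributes to the loss.
def token_loss_mask(target_len, intent_only):
    """Return 1.0 where a token contributes to the loss, 0.0 where it is masked."""
    if intent_only:
        return [1.0] + [0.0] * (target_len - 1)
    return [1.0] * target_len

def masked_loss(per_token_losses, intent_only=False):
    mask = token_loss_mask(len(per_token_losses), intent_only)
    return sum(l * m for l, m in zip(per_token_losses, mask))

# A predicted-changed example contributes only its intent-token loss, so its
# (possibly out-of-date) argument structure never influences training.
full_loss = masked_loss([0.5, 0.2, 0.3])                      # all tokens
intent_loss = masked_loss([0.5, 0.2, 0.3], intent_only=True)  # first token only
```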
|
{ |
|
"text": "One case that this approach does not currently handle is updates that introduce multiple new intents simultaneously; we leave an exploration of that case to future work. To use an intent-only loss on such updates, a more fine-grained classification would be needed to determine the correct intent for each changed example.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intent-only Loss", |
|
"sec_num": "6.3.1" |
|
}, |
|
{ |
|
"text": "In this section, we describe the evaluation of our methods for mitigating the conflicting data problem. For each update, we form a test set by randomly selecting 100 examples from each of the three data partitions (changed, unchanged, and trivially-unchanged), and a V2 training set by selecting 50 changed and 50 unchanged examples. All remaining examples are placed into the V1 training set, and changed examples are relabeled appropriately. Note that unlike in Section 5, where we used a fixed-size conflicting set of 50 examples in V1, in this setup we use all remaining examples available after sampling a subset for V2, which results in larger conflicting sets ranging from hundreds to thousands of examples. The large size of the conflicting sets further amplifies the effect of the conflicting data. We report results in terms of exact-match accuracy between the predicted and target parse structures. Results are aggregated across 5 different updates, giving us a total of 1500 test examples for each method (5 updates \u00d7 3 partitions \u00d7 100 examples). We also average across 5 different runs for each method to reduce variance.", |

"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "7" |
|
}, |
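The exact-match metric described above can be sketched as follows. This is a simplified stand-in: the real evaluation compares full parse trees, represented here as strings, and the example parses are hypothetical.

```python
# Sketch of exact-match accuracy: a prediction counts as correct only when
# the predicted parse matches the target parse exactly.
def exact_match_accuracy(predictions, targets):
    assert len(predictions) == len(targets) and targets
    correct = sum(p == t for p, t in zip(predictions, targets))
    return correct / len(targets)

preds = ["(IN:GET_INFO_TRAFFIC (SL:LOCATION the highway ) )",
         "(IN:GET_INFO_ROAD_CONDITION )"]
golds = ["(IN:GET_INFO_TRAFFIC (SL:LOCATION the highway ) )",
         "(IN:GET_INFO_TRAFFIC )"]
accuracy = exact_match_accuracy(preds, golds)  # one of two matches -> 0.5
```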
|
{ |
|
"text": "We compare against three baselines: training only on the original V1 data, training only on the new V2 data, and directly mixing the two data versions together into a single dataset. 3 We also compare to an upper bound where the entire V1 training set is re-annotated with updated labels. For many updates, this upper bound requires thousands of new annotations, as compared to the one hundred labels used by our methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 184, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The results of our evaluation are shown in Table 1. All of our methods substantially outperform the baselines. Our best method, the selection classifier with an intent-only loss on changed examples (\u00a76.3), obtains an accuracy of 71.5%, covering 86% of the gap between the best baseline and the oracle upper bound.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 50, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "To see a clearer picture of what is happening, we also break down results by data partitions, as shown in Figure 5 (an even more detailed breakdown across different updates is provided in Appendix D). In this chart, we can see that the baseline that mixes the V1 and V2 data without accounting for conflicts performs extremely poorly on the changed examples, echoing our results in Section 5. On the other hand, training only on the small set of V2 data throws away all the information in the original V1 training set, limiting its performance (particularly on unchanged examples). Our methods provide an effective way to combine the information in both datasets without overwhelming the changed data with out-of-date labels.", |

"cite_spans": [], |

"ref_spans": [ |

{ |

"start": 106, |

"end": 114, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "This work has shown that in order to make effective updates to the outputs of a neural semantic parsing model by adding new data, it is important to consider the effect of conflicting examples in the original data. Conflicting data is likely a problem in many scenarios where a model's outputs must be updated, and we believe that further study of methods for mitigating its effects is an important direction to allow practitioners to handle the constantly changing needs of real-world machine learning applications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The following table breaks down our main results across the different updates tested. These results are described in Section 7 and summarized in Figure 5. Update key: A: New intent from unsupported, B: New intent from related with argument relabeling, C: New argument, D: New intent from related with same arguments, E: New intent from multiple intents in V1", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 153, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "D Main Results Detail", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While handling a stream of updates may pose additional challenges, we leave an investigation of that scenario to future work. What qualifies as a single update is somewhat open to interpretation, but our methods are not overly sensitive to how it is defined.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Data for our synthesized updates can be downloaded from https://github.com/google/overcoming-conflicting-data/, and Appendix B summarizes the partition sizes for each of these changes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also tried a variant of the direct mixing baseline where the V2 data is upsampled to try to account for differences in size, but this obtained almost identical results, indicating that upsampling is not an effective method for overcoming conflicting data (not shown in main results; see Appendix D).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "These hyperparameters were kept constant across all experiments and were selected based on defaults from an existing implementation. Model parameter counts are dominated by the BERT encoder, with approximately 100 million parameters. Training was performed on TPUs and took several hours per run.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The table below briefly describes the five updates we test on with the sizes of each data partition changed, unchanged, and trivially-unchanged. For our primary experiments, 100 examples from each partition are placed in the test set, 50 examples from changed and unchanged are placed in the V2 training set, and the remainder are used for the V1 training set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B List of changes with data sizes", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The table below details the results for the experiment described in Section 5 and summarized in Figure 2. We vary the size of updated data in the V2 changed partition while holding constant a set of 50 conflicting examples in the original V1 data.", |

"cite_spans": [], |

"ref_spans": [ |

{ |

"start": 96, |

"end": 104, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Update type", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Multitask learning. Machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "41--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "End-to-end incremental learning", |
|
"authors": [ |
|
{ |

"first": "Francisco", |

"middle": [ |

"M" |

], |

"last": "Castro", |

"suffix": "" |

}, |

{ |

"first": "Manuel", |

"middle": [ |

"J" |

], |

"last": "Mar\u00edn-Jim\u00e9nez", |

"suffix": "" |

}, |

{ |

"first": "Nicol\u00e1s", |

"middle": [], |

"last": "Guil", |

"suffix": "" |

}, |

{ |

"first": "Cordelia", |

"middle": [], |

"last": "Schmid", |

"suffix": "" |

}, |

{ |

"first": "Karteek", |

"middle": [], |

"last": "Alahari", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the European conference on computer vision (ECCV)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "233--248", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francisco M Castro, Manuel J Mar\u00edn-Jim\u00e9nez, Nicol\u00e1s Guil, Cordelia Schmid, and Karteek Alahari. 2018. End-to-end incremental learning. In Proceedings of the European conference on computer vision (ECCV), pages 233-248.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Transfer learning for sequence labeling using source model and target data", |
|
"authors": [ |
|
{ |
|
"first": "Lingzhen", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "6260--6267", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lingzhen Chen and Alessandro Moschitti. 2019. Trans- fer learning for sequence labeling using source model and target data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6260-6267.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Language to logical form with neural attention", |
|
"authors": [ |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "33--43", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li Dong and Mirella Lapata. 2016. Language to logi- cal form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 33-43, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Transfer learning for neural semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emilio", |
|
"middle": [], |
|
"last": "Monti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lambert", |
|
"middle": [], |
|
"last": "Mathias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--56", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-2607" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer. 2017. Transfer learning for neural seman- tic parsing. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 48-56, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Semantic parsing for task oriented dialog using hierarchical representations", |
|
"authors": [ |
|
{ |
|
"first": "Sonal", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rushin", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mrinal", |
|
"middle": [], |
|
"last": "Mohit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2787--2792", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1300" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Ku- mar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representa- tions. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2787-2792, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Data recombination for neural semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "12--22", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 12-22, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels", |
|
"authors": [ |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengyuan", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Leung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lu Jiang, Zhengyuan Zhou, T. Leung, L. Li, and Li Fei- Fei. 2018. Mentornet: Learning data-driven curricu- lum for very deep neural networks on corrupted la- bels. In ICML.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Overcoming catastrophic forgetting in neural networks", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Kirkpatrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Rabinowitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Veness", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Desjardins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Rusu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kieran", |
|
"middle": [], |
|
"last": "Milan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Quan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tiago", |
|
"middle": [], |
|
"last": "Ramalho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Agnieszka", |
|
"middle": [], |
|
"last": "Grabska-Barwinska", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the national academy of sciences", |
|
"volume": "114", |
|
"issue": "", |
|
"pages": "3521--3526", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Ag- nieszka Grabska-Barwinska, et al. 2017. Over- coming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning without forgetting", |
|
"authors": [ |
|
{ |
|
"first": "Zhizhong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Hoiem", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "European Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "614--629", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhizhong Li and Derek Hoiem. 2016. Learning with- out forgetting. In European Conference on Com- puter Vision, pages 614-629. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Learning under concept drift: A review", |
|
"authors": [ |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anjin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fan", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feng", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joao", |
|
"middle": [], |
|
"last": "Gama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guangquan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "31", |
|
"issue": "12", |
|
"pages": "2346--2363", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jie Lu, Anjin Liu, Fan Dong, Feng Gu, Joao Gama, and Guangquan Zhang. 2018. Learning under concept drift: A review. IEEE Transactions on Knowledge and Data Engineering, 31(12):2346-2363.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Don't parse, generate! a sequence to sequence architecture for task-oriented semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Subendhu", |
|
"middle": [], |
|
"last": "Rongali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Soldaini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emilio", |
|
"middle": [], |
|
"last": "Monti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wael", |
|
"middle": [], |
|
"last": "Hamza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The Web Conference 2020, WWW '20", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2962--2968", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3366423.3380064" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza. 2020. Don't parse, generate! a se- quence to sequence architecture for task-oriented se- mantic parsing. In Proceedings of The Web Confer- ence 2020, WWW '20, page 2962-2968, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Progressive neural networks", |

"authors": [ |

{ |

"first": "Andrei", |

"middle": [ |

"A" |

], |

"last": "Rusu", |

"suffix": "" |

}, |

{ |

"first": "Neil", |

"middle": [ |

"C" |

], |

"last": "Rabinowitz", |

"suffix": "" |

}, |

{ |

"first": "Guillaume", |

"middle": [], |

"last": "Desjardins", |

"suffix": "" |

}, |

{ |

"first": "Hubert", |

"middle": [], |

"last": "Soyer", |

"suffix": "" |

}, |

{ |

"first": "James", |

"middle": [], |

"last": "Kirkpatrick", |

"suffix": "" |

}, |

{ |

"first": "Koray", |

"middle": [], |

"last": "Kavukcuoglu", |

"suffix": "" |

}, |

{ |

"first": "Razvan", |

"middle": [], |

"last": "Pascanu", |

"suffix": "" |

}, |

{ |

"first": "Raia", |

"middle": [], |

"last": "Hadsell", |

"suffix": "" |

} |

], |

"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.04671" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrei A Rusu, Neil C Rabinowitz, Guillaume Des- jardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Training convolutional networks with noisy labels", |
|
"authors": [ |
|
{ |
|
"first": "Sainbayar", |
|
"middle": [], |
|
"last": "Sukhbaatar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joan", |
|
"middle": [], |
|
"last": "Bruna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manohar", |
|
"middle": [], |
|
"last": "Paluri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lubomir", |
|
"middle": [], |
|
"last": "Bourdev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Fergus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1406.2080" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. 2014. Train- ing convolutional networks with noisy labels. arXiv preprint arXiv:1406.2080.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The problem of concept drift: definitions and related work", |
|
"authors": [ |
|
{ |
|
"first": "Alexey", |
|
"middle": [], |
|
"last": "Tsymbal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "106", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexey Tsymbal. 2004. The problem of concept drift: definitions and related work. Computer Science De- partment, Trinity College Dublin, 106(2):58.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Learning from noisy large-scale datasets with minimal supervision", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Veit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Alldrin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gal", |
|
"middle": [], |
|
"last": "Chechik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ivan Krasin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Belongie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6575--6583", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin, A. Gupta, and Serge J. Belongie. 2017. Learning from noisy large-scale datasets with minimal super- vision. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6575-6583.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Error-driven incremental learning in deep convolutional neural network for large-scale image classification", |
|
"authors": [ |
|
{ |
|
"first": "Tianjun", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiaxing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuiyuan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxin", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 22nd ACM international conference on Multimedia", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianjun Xiao, Jiaxing Zhang, Kuiyuan Yang, Yuxin Peng, and Zheng Zhang. 2014. Error-driven incre- mental learning in deep convolutional neural net- work for large-scale image classification. In Pro- ceedings of the 22nd ACM international conference on Multimedia, pages 177-186.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "New intent from related Changed Query: Where is there construction on the highway? V1 Label: V2 Label: (IN:GET_INFO_ROAD_CONDITION (IN:GET_INFO_TRAFFIC (SL:LOCATION \"the highway\" ) ) (SL:LOCATION \"the highway\" ) ) Unchanged Query: Are roads icy? V1 Label: V2 Label: (IN:GET_INFO_ROAD_CONDITION (IN:GET_INFO_ROAD_CONDITION (SL:ROAD_CONDITION \"icy\" ) ) (SL:ROAD_CONDITION \"icy\" ) ) Trivially-unchanged Examples not labeled with IN:GET_INFO_ROAD_CONDITION in V1 are trivially-unchanged. New intent from unsupported Changed Query: If I leave right now, can I get to New York City before one o'clock PM? V1 Label: V2 Label: (IN:UNSUPPORTED_NAVIGATION ) (IN:GET_ESTIMATED_ARRIVAL (SL:DATE_TIME_DEPARTURE \"right now\" ) (SL:DESTINATION \"New York City\" ) ) Unchanged Query: What major city has the worst traffic? Examples not labeled with IN:UNSUPPORTED_NAVIGATION in V1 are trivially-unchanged.New argumentChangedQuery: Which route to work has less traffic? DESTINATION \"work\" ) ) (SL:DESTINATION \"work\" ) (SL:OBSTRUCTION \"traffic\" ) ) Unchanged Query: What is the best route to get to Atlanta to see my brother Mark?", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Accuracy as a function of data size with conflicting data compared to accuracy when an oracle removes the conflicts, averaged across five different updates.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "A break down of results across the three data partitions.", |
|
"type_str": "figure", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |