|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:33:21.193737Z" |
|
}, |
|
"title": "Domain-specific knowledge distillation yields smaller and better models for conversational commerce", |
|
"authors": [ |
|
{ |
|
"first": "Kristen", |
|
"middle": [ |
|
"Howell" |
|
], |
|
"last": "Liveperson", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "jwang@liveperson.com" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Bradley", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "jbradley@liveperson.com" |
|
}, |
|
{ |
|
"first": "Xi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "xchen@liveperson.com" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Dunn", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "mdunn@liveperson.com" |
|
}, |
|
{ |
|
"first": "Beth", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Hockey", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "bhockey@liveperson.com" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Maurer", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "abmaurer@amazon.com" |
|
}, |
|
{ |
|
"first": "Dominic", |
|
"middle": [], |
|
"last": "Widdows", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "widdows@ionq.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In the context of conversational commerce, where training data may be limited and low latency is critical, we demonstrate that knowledge distillation can be used not only to reduce model size, but to simultaneously adapt a contextual language model to a specific domain. We use Multilingual BERT (mBERT; Devlin et al., 2019) as a starting point and follow the knowledge distillation approach of Sanh et al. (2019) to train a smaller multilingual BERT model that is adapted to the domain at hand. We show that for in-domain tasks, the domainspecific model shows on average 2.3% improvement in F1 score, relative to a model distilled on domain-general data. Whereas much previous work with BERT has fine-tuned the encoder weights during task training, we show that the model improvements from distillation on in-domain data persist even when the encoder weights are frozen during task training, allowing a single encoder to support classifiers for multiple tasks and languages.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In the context of conversational commerce, where training data may be limited and low latency is critical, we demonstrate that knowledge distillation can be used not only to reduce model size, but to simultaneously adapt a contextual language model to a specific domain. We use Multilingual BERT (mBERT; Devlin et al., 2019) as a starting point and follow the knowledge distillation approach of Sanh et al. (2019) to train a smaller multilingual BERT model that is adapted to the domain at hand. We show that for in-domain tasks, the domainspecific model shows on average 2.3% improvement in F1 score, relative to a model distilled on domain-general data. Whereas much previous work with BERT has fine-tuned the encoder weights during task training, we show that the model improvements from distillation on in-domain data persist even when the encoder weights are frozen during task training, allowing a single encoder to support classifiers for multiple tasks and languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Encoders and language models such as BERT (Devlin et al., 2019) , RoBERTa and ELMo (Peters et al., 2017) are the backbone of many NLP technologies. They are typically trained on data from Wikipedia, CommonCrawl, or large homogeneous collections of text; however, language varies widely in real-world settings and the type of language used in some contexts is not well represented in the data used to train these models. In particular, the language used in e-commerce, and more specifically, conversational commerce, such as conversations pertaining to customer service in the context of online shopping or banking, exhibits both syntactic structures and vocabulary that are under-represented in the Wikipedia data used to train multilingual BERT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 63, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 78, |
|
"end": 104, |
|
"text": "ELMo (Peters et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "At the same time, these models are too large to deploy in many industry settings, where computational resources and inference-speed are concerns. Model size is often reduced using methods such as quantization (Whittaker and Raj, 2001; Shen et al., 2020) , pruning (Han et al., 2015 (Han et al., , 2016 and knowledge distillation (Hinton et al., 2015; Sanh et al., 2019) . However, even leveraging these techniques, the memory footprint of the typical encoder can easily be three orders of magnitude greater than that of the typical classifier, and it follows that encoding is much more time-intensive than classification. 1 In conversational commerce, a variety of classifiers are required to model different aspects of the conversation. In this case, it is beneficial for efficiency, to use a single encoder for all of the classifiers as illustrated in Figure 1 (left), rather than using a separate encoder for each classification task (Figure 1 , right). 2 Thus text can be encoded only once and passed to any number of downstream classifiers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 234, |
|
"text": "(Whittaker and Raj, 2001;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 253, |
|
"text": "Shen et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 281, |
|
"text": "(Han et al., 2015", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 301, |
|
"text": "(Han et al., , 2016", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 350, |
|
"text": "(Hinton et al., 2015;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 369, |
|
"text": "Sanh et al., 2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 622, |
|
"end": 623, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 854, |
|
"end": 862, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 937, |
|
"end": 946, |
|
"text": "(Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
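
{

"text": "To make the shared-encoder setup concrete, the sketch below is an illustrative example rather than the production system: a message is encoded once with a multilingual BERT encoder and the same mean-pooled representation is passed to several hypothetical task-specific heads, so the expensive encoding step is never repeated per classifier.\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModel\n\n# One shared encoder (public mBERT checkpoint as a placeholder) and several small task-specific heads.\ntokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')\nencoder = AutoModel.from_pretrained('bert-base-multilingual-cased')\nheads = {name: torch.nn.Linear(encoder.config.hidden_size, n_labels)\n         for name, n_labels in [('intentful', 2), ('sentiment', 3), ('topic', 10)]}\n\ninputs = tokenizer('I would like to check the status of my order.', return_tensors='pt')\nwith torch.no_grad():\n    sentence_vec = encoder(**inputs).last_hidden_state.mean(dim=1)  # encode once\n\n# Every classifier reuses the same sentence vector; no re-encoding per task.\npredictions = {name: int(head(sentence_vec).argmax(dim=-1)) for name, head in heads.items()}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},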
|
{ |
|
"text": "Typically, domain adaptation with language models is accomplished using back-propagation during task training (see inter alia Devlin et al., 2019; Sanh et al., 2019) . However, this approach requires a separate encoder for each classifier. Instead, we adapt the encoder to a particular domain before classifier training. We show that knowledge distillation, a common approach for reducing model size, is very adept for domain adaptation. This allows us to accomplish two goals, Figure 1 : The difference in architecture between using one encoder for multiple tasks vs. one encoder per task size reduction and domain adaptation, with a single training process. Evaluating on five languages and two domains, we show that distilling on unlabeled data from the domain of interest results in a smaller model that is domain-specific and outperforms the F1-score of a model distilled on domain-general data by 2.3% on average and the larger teacher model by an F1 of 1.2%. The improvement in performance persists even when relatively little training data is used. We show that the domain-adapted encoder performs better than the domain-general model both when encoder weights are fine-tuned, as in previous work, and when they are frozen, leaving them task agnostic. Furthermore, the boost in performance from distillation on in-domain data is greater than the improvement from fine-tuning the encoder during task training. We begin with an overview of previous work in domain adaptation and knowledge distillation, highlighting the benefit of doing both at once ( \u00a72). This is followed by a description of the domains, data and tasks with which we evaluate domain adaptation through knowledge distillation ( \u00a73). We detail our approach in Section 4, investigating how much data is necessary ( \u00a75) and examining the impact of domain adaptation on sentence embeddings ( \u00a76). We evaluate on two domains and five languages in Section 8, considering both training scenarios where encoder weights are frozen for task training and where they are fine-tuned.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 146, |
|
"text": "Devlin et al., 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 165, |
|
"text": "Sanh et al., 2019)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 478, |
|
"end": 486, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Many state-of-the-art NLP models have achieved high performance with increased parameters and layers, and in doing so have become too computationally expensive for some applications. Knowledge distillation addresses this problem with a \"teacher-student\" training approach in which a smaller \"student\" model learns to mimic a larger \"teacher\" model (Sanh et al., 2019) or an ensemble of models (Bucil\u01ce et al., 2006; Hinton et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 348, |
|
"end": 367, |
|
"text": "(Sanh et al., 2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 393, |
|
"end": 414, |
|
"text": "(Bucil\u01ce et al., 2006;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 415, |
|
"end": 435, |
|
"text": "Hinton et al., 2015)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge distillation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In the context of reducing model size with BERT, task-specific distillation has been successful (Tang et al., 2019; Chatterjee, 2019) as has distillation of the encoder during pre-training (Sanh et al., 2019) . Distillation of the pre-trained encoder is particularly beneficial as the distilled model can be applied to any number of downstream tasks. Sanh et al. (2019) released a distilled version of English BERT (DistilBERT), which is 40% smaller and 60% faster than the original model, while retaining 97% of its NLU capabilities. This was followed by Distilm-BERT, distilled from mBERT using data from 104 languages, which is 24% smaller and 38% faster than its teacher. In both cases, the same data was used for knowledge distillation as for pre-training the original models. We adopt this approach, limiting our training data to the languages and domain of interest to demonstrate that less data can be used to distill a model for a specific setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 115, |
|
"text": "(Tang et al., 2019;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 116, |
|
"end": 133, |
|
"text": "Chatterjee, 2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 208, |
|
"text": "(Sanh et al., 2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 369, |
|
"text": "Sanh et al. (2019)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge distillation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "When sufficient data is not available to train a model from scratch, a smaller amount of data can be used to adapt a domain-general model. In the context of BERT, domain adaption of the encoder through continued pre-training on in-domain data followed by task-specific fine-tuning has improved performance on domain-specific applications (Han and Eisenstein, 2019; Gururangan et al., 2020; Rietzler et al., 2020; Whang et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 364, |
|
"text": "(Han and Eisenstein, 2019;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 389, |
|
"text": "Gururangan et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 412, |
|
"text": "Rietzler et al., 2020;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 432, |
|
"text": "Whang et al., 2020)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain adaptation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Previous work suggests that the teacher-student approach used for knowledge distillation is well suited to domain adaptation. In ASR, it has been applied to adapt models trained on clean speech to handle noisy speech, models for speech from headset mics to work for distant mics (Manohar et al., 2018) , and for speaker adaptation (Yu et al., 2013) . In neural machine translation, multidomain models have been distilled from single-domain specialist models (see inter alia Currey et al. 2020; Mghabbar and Ratnamogan 2020) . In the context of sentiment analysis, Ruder et al. (2017) use an ensemble of models to train a domain-adapted model on unlabeled in-domain data and Ryu and Lee (2020) combine distillation with adversarial domain adaptation to mitigate over-fitting from task fine-tuning, rather than to reduce model size.", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 301, |
|
"text": "(Manohar et al., 2018)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 348, |
|
"text": "(Yu et al., 2013)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 474, |
|
"end": 493, |
|
"text": "Currey et al. 2020;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 523, |
|
"text": "Mghabbar and Ratnamogan 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 583, |
|
"text": "Ruder et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 674, |
|
"end": 692, |
|
"text": "Ryu and Lee (2020)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain adaptation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We show that knowledge distillation can simultaneously reduce the size of the model and adapt it to a domain. While, some degree of performance loss during distillation is typical, we show that focusing the training objective on in-domain data can eliminate performance loss and even improve model performance in the domain of interest. Our training objective does not require labeled data, and because we do this before task fine-tuning, the resulting model can be used for any number of in-domain tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain adaptation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "3 Use-cases and datasets", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain adaptation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Our first use case is in conversational commerce (hereafter CC), which involves messaging between customers and agents (human or automated) in a commercial customer service setting. Within CC, there are sub-domains for commercial industries, such as retail, financial services, airlines, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The conversational commerce use-case", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Unlike the Wikipedia data used to train mBERT and DistilmBERT, CC is marked by questions, first and second person phrases, short responses, frequent typos and other textual and linguistic features that are more common in typed conversation. In addition to structural variation, CC data contains many product and service names that may not be common in Wikipedia data. These differences make CC a strong candidate for domain adaptation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The conversational commerce use-case", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our test-case is to classify customer messages as intentful, meaning that the message contains some actionable request, or not intentful. In CC, this is an important triage step that can be applied across sub-domains before sending messages to downstream classifiers. Because this classification task is applied to different sub-domains and customers say some surprising things, this task is rather challenging. Intentful messages can vary widely from requests for information, attempts to place orders or change account details, and disputes or complaints. Non-intentful messages include greetings, pleasantries, slot information that relies on a previ- ous message for context, etc. After this triage step, intentful messages can be sent to downstream classifiers, which are specific to the industry or company, that predict specific intents such as \"check order status\" or \"schedule appointment\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The conversational commerce use-case", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We use a proprietary dataset from a variety of companies that use a particular conversational commerce platform. Table 1 . This data came from 25 companies that span the retail, telecommunications, financial and airlines sub-domains. To verify that the encoder generalizes beyond these companies, we sampled data for the classification task from an additional 14 companies that were not used in encoder training as well as 12 that were. Annotations classification were provided by native speakers of each language, who were trained on the task. Due to limited access to data and annotators for Japanese, Portuguese and Spanish, we supplemented natural language training data with machine-translated data from English. For evaluation, we used only naturally produced data from each language (see Table 2 ). The complete breakdown by class for evaluation is in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 120, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 795, |
|
"end": 802, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 859, |
|
"end": 866, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "The majority of the English, Portuguese and Spanish data is from the Americas and the remainder from Australia and Europe, while the Japanese data is primarily from Japan. Because data-use agreements and laws vary by company and country, we could not sample evenly across regions; still, we sampled from as diverse a range of countries and company types as possible, in an effort to maximize representation of different speaker communities. 5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "Because our first dataset is proprietary, we repeat the experiments using data from online forums for technical discussions about programming (hereafter TD). This domain is marked by technical jargon, which includes many words that have nontechnical homonyms, as we describe in Section 6. These forums also include code and urls, which can be useful for classification but are not common in the Wikipedia data used to train mBERT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The technical discussion use-case", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For evaluation we used a multi-label prediction task to automatically label posts with the appropriate tag or tags for the topic. 6 Because StackOverflow uses hundreds of tags, the task is limited to the ten most common, which are listed in Table 4 . 7 We created classification datasets by sampling posts that contain one or more of the ten tags targeted by the task. For our validation and held-out evaluation sets for English, Spanish, Portuguese and Russian, we sampled messages such that each tag occurred in the set at least 100 times for a total of 1000 posts per set. As each post can have multiple tags, tags can occur more than 100 times. The total posts per tag for evaluation are in Table 4 . 8 We then randomly selected 8000 training samples that contained at least one of the tags. Because the Japanese dump is much smaller, we created splits of half the size (500/500/4000). 5 We do not have access to demographic data for the users who produced the data, and cannot make any claims about how well the models generalize across speaker communities of various ages, genders, ethnicities or socio-economic groups.", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 252, |
|
"text": "7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 705, |
|
"end": 706, |
|
"text": "8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 890, |
|
"end": 891, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 248, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 695, |
|
"end": 702, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The technical discussion use-case", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "6 This task comes from https://github.com/theRajeshReddy/ StackOverFlow-Classification 7 https://archive.org/details/stackexchange 8 The number of posts per tag for the train and validation sets can be found in the dataset's readme. For encoder training, we sampled data from the remaining messages, including posts that did not contain the tags of interest. We assembled a 3GB training set (based on the results in Section 5) using all of the data for Japanese (0.055GB, 2% of the total), Portuguese (0.34GB, 11%), Spanish (0.34GB, 11%) and Russian (0.75GB. 24%), and 1.58GB (52%) for English to reach 3GB total.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "For knowledge distillation, we use the established and open-source approach of Sanh et al. 2019, 910 which follows Liu et al. 2019's proposed best practices for BERT training. These include dynamic masking, large batches to leverage gradient accumulation and training on the masked language modeling task but not next sentence prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 100, |
|
"text": "910", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Distillation Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Sanh et al.'s implementation of knowledge distillation trains the student model using the distillation loss of the soft target probabilities of the teacher. Because of this, the student model benefits from the the teacher model's full distribution during training. Due to this rich input, we expect that high performance can be achieved with less training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Distillation Method", |
|
"sec_num": "4" |
|
}, |
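
{

"text": "As a minimal sketch of this objective (assuming PyTorch, and omitting the cosine loss on hidden states that Sanh et al. (2019) also use), the student is trained to match the teacher's temperature-softened output distribution while still fitting the masked language modeling labels; the temperature and loss weights below are illustrative.\n\nimport torch.nn.functional as F\n\ndef distillation_loss(student_logits, teacher_logits, mlm_labels,\n                      temperature=2.0, alpha_ce=0.5, alpha_mlm=0.5):\n    # Soft-target loss: KL divergence between temperature-softened teacher and student distributions.\n    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)\n    log_student = F.log_softmax(student_logits / temperature, dim=-1)\n    loss_ce = F.kl_div(log_student, soft_teacher, reduction='batchmean') * temperature ** 2\n    # Hard-target loss: standard masked language modeling cross-entropy (-100 marks unmasked positions).\n    loss_mlm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),\n                               mlm_labels.view(-1), ignore_index=-100)\n    return alpha_ce * loss_ce + alpha_mlm * loss_mlm",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Knowledge Distillation Method",

"sec_num": "4"

},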
|
{ |
|
"text": "We use mBERT 11 as the teacher model and our student models have the same general architecture, hidden-size dimension and number of word embeddings. We reduce the model size by removing the token-embeddings and pooling and reducing the number of layers from 12 to 6. This reduces the total number of parameters by 43 million or 24% and increases the inference speed by 38% (Table 5 ). 12 9 Detailed instructions for training with Huggingface's Distil* module can be found at https://github.com/huggingface/transformers/blob/ 783d7d2629e97c5f0c5f9ef01b8c66410275c204/examples/ research_projects/distillation/README.md.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 373, |
|
"end": 381, |
|
"text": "(Table 5", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Knowledge Distillation Method", |
|
"sec_num": "4" |
|
}, |
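
{

"text": "As an illustration of how such a six-layer student can be initialized from the twelve-layer mBERT teacher, the sketch below follows the common DistilBERT-style recipe of copying one of every two transformer layers; the checkpoint name and layer-selection pattern are assumptions for illustration, and the sketch keeps the standard BERT modules rather than reproducing the exact architecture changes described above.\n\nfrom transformers import BertConfig, BertForMaskedLM\n\nteacher = BertForMaskedLM.from_pretrained('bert-base-multilingual-cased')\n\n# Student: same hidden size and vocabulary as the teacher, but 6 transformer layers instead of 12.\nstudent_config = BertConfig.from_pretrained('bert-base-multilingual-cased', num_hidden_layers=6)\nstudent = BertForMaskedLM(student_config)\n\n# Initialize the student from the teacher: copy embeddings, every other encoder layer, and the MLM head.\nstudent.bert.embeddings.load_state_dict(teacher.bert.embeddings.state_dict())\nfor student_idx, teacher_idx in enumerate(range(0, teacher.config.num_hidden_layers, 2)):\n    student.bert.encoder.layer[student_idx].load_state_dict(\n        teacher.bert.encoder.layer[teacher_idx].state_dict())\nstudent.cls.load_state_dict(teacher.cls.state_dict())",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Knowledge Distillation Method",

"sec_num": "4"

},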
|
{ |
|
"text": "10 Here we discuss the most relevant training details, but Ibid. provides a full account of the training procedure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Distillation Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "11 https://github.com/google-research/bert/blob/master/ multilingual.md", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Distillation Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "12 The change in model size and speed is equivalent to that Whereas Sanh et al. (2019) used the same training data as the teacher model, we use only data from the domain we are adapting to. Intuitively, a model that will be deployed in a single domain does not need to learn everything the base model can do -it only needs to learn what it can do for the domain at hand. This allows us to reduce the training data and time needed for distillation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 86, |
|
"text": "Sanh et al. (2019)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Distillation Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Because knowledge distillation takes advantage of an existing model, which was already trained on a large amount of data, we expect that distillation training will be relatively economical in its use of data. Furthermore, many research objectives focus on a single domain and do not require the breadth of NLU capability of a domain-general model, but instead benefit from a depth of capability in one domain. Here we attempt to establish how much data is enough for knowledge distillation for a single domain and where we reach diminishing returns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data requirements", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As a case-study, we use increasing quantities of StackOverflow English data for knowledge distillation and compare the performance of these models to both the teacher model (mBERT) and HuggingFace's multilingual distilled BERT model (DistilmBERT), which was distilled using the same approach. To measure the impact of domain adaptation from knowledge distillation alone, we freeze the encoder weights during task training and present the results in Table 6 and Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 449, |
|
"end": 456, |
|
"text": "Table 6", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 461, |
|
"end": 469, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data requirements", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We distilled models with English StackOverflow data, using increments of 0.3GB. We found that a minimum of 1.5GB was needed for convergence, but 2.1GB was enough to outperform DistilmBERT and perform on par with the teacher mBERT. Improvements stop after 3GB. We conclude that 2.1 in Sanh et al. 2019 , however, that paper considers only the English BERT model, while we use multilingual BERT. Because the multilingual model has a significantly larger vocabulary (or number of word embeddings), which is not reduced by this distillation process, the proportionate difference in model size is for distilled mBERT models is less than for distilled BERT. Table 6 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 300, |
|
"text": "Sanh et al. 2019", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 652, |
|
"end": 659, |
|
"text": "Table 6", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data requirements", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "GB is sufficient and 3GB is optimal for adaptation to the TD domain, while more data increases training time without improving performance. We contextualize this finding by considering the data quantity used to train mBERT and Dis-tilmBERT. While it is hard to ascertain the exact amount of data used to train these models, we estimated by following the data sampling procedure used by the creators of those models. 14 By our best estimate, roughly 222GB of uncompressed text was used. 15 In contrast, only 2.1GB of uncompressed text was needed to outperform DistilmBERT on our in-domain task. Thus, for distillation for a single domain and language, the required amount of training data is reduced by two orders of magnitude. 16 In this experiment, the new data matches on both domain and language. To test whether the language match is responsible for the improvement, we distilled a model on 3GB of English Wikipedia data. We sampled this data by randomly selecting 3000 1MB chunks of text from the English Wikipedia dump. This model (Wiki3.0) under-performs the one distilled on the same amount of StackOverflow data, showing that the language match alone is insufficient to explain the improved performance and suggests that the domain match is more important.", |
|
"cite_spans": [ |
|
{ |
|
"start": 486, |
|
"end": 488, |
|
"text": "15", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 727, |
|
"end": 729, |
|
"text": "16", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data requirements", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To better understand the differences between an encoder trained on domain-general data versus indomain data, we compare sentence embeddings produced by the encoder that we adapted to the technical domain (TD3.0) and the encoder distilled on the same amount of domain-general data (Wiki3.0). The TD domain has lots of homonyms like 'python' and 'float' that have both a technical word-sense and a non-technical one. We expect models trained on the TD domain to pay attention to the dominant technical word-senses, and models trained on Wikipedia to pay greater attention to the non-technical word-senses. By extension, a distance function derived from a TD model is expected to be more sensitive to technical word-senses than a distance function derived from a Wikipedia model. Thus we expect the distance function for 'python' and a non-technical synonym (i.e., 'snake') to be closer when derived from a domain-general model and the distance function between 'python' and another programming language (i.e., 'PHP') to be closer when derived from a model trained on technical data. Because BERT embeddings are contextual, we provide a context for each word pair by creating sentences for each, such that either word may appear in the sentence. Sentences designed for technical word pairs are biased towards a technical context, and sentences for non-technical word pairs are biased towards a non-technical sense. 17 As an example, the technical sentences used to compare Java and C# are given below in list items 1 and 2 and the non-technical sentences used for java and coffee are given in list items 3 and 4.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1412, |
|
"end": 1414, |
|
"text": "17", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Vector changes under domain adaptation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "1. I can't find any code or post on how to get traffic data in Java for Windows Phone 8. 2. I can't find any code or post on how to get traffic data in C# for Windows Phone 8. 3. Jerry can't start his day without a cup of java. 4. Jerry can't start his day without a cup of coffee.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Vector changes under domain adaptation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Substituting each word from a pair into the sentence, we have a pair of sentences like items 1 and 2 and embed each with both the TD model and the domain-general model. We take the mean of the token embeddings as a representation of the sentence 18 and then take the cosine similarities between the sentence embedding produced by the TD model and the embedding from the domain-general model. These cosine similarities are given for both the technical and non-technical pairs in Table 7 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 478, |
|
"end": 485, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Vector changes under domain adaptation", |
|
"sec_num": "6" |
|
}, |
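
{

"text": "A minimal sketch of this comparison, assuming PyTorch and Hugging Face transformers; the public mBERT checkpoint below stands in for the TD3.0 and Wiki3.0 encoders, which are not released.\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModel\n\ntokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')\nencoder = AutoModel.from_pretrained('bert-base-multilingual-cased')  # placeholder for TD3.0 or Wiki3.0\n\ndef sentence_embedding(sentence):\n    # Mean of the token embeddings from the last hidden layer.\n    inputs = tokenizer(sentence, return_tensors='pt')\n    with torch.no_grad():\n        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden_size)\n    return hidden.mean(dim=1).squeeze(0)\n\nemb_java = sentence_embedding('I cannot find any code or post on how to get traffic data in Java for Windows Phone 8.')\nemb_csharp = sentence_embedding('I cannot find any code or post on how to get traffic data in C# for Windows Phone 8.')\nprint(float(torch.nn.functional.cosine_similarity(emb_java, emb_csharp, dim=0)))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Vector changes under domain adaptation",

"sec_num": "6"

},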
|
{ |
|
"text": "We find that cosine distance is smaller for sentences that capture the general word-sense when they are encoded by the general model. Similarly, it is smaller for sentences that capture the technical word-sense when they are encoded with the TD model. This suggests that the TD model has adapted its representations for these homonyms to their technical meanings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Vector changes under domain adaptation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We compare the performance of two domainadapted encoders that were trained using the method described in Section 4. CC-DistilmBERT was trained with data from four languages from the CC domain ( \u00a73.1.1) and TD-DistilmBERT with data from five languages from the TD domain ( \u00a73.2.1). The amount of data used to distil each model is summarized in Table 1 and determined primarily based on availability, though informed by the findings in Section 5. For each language and domain, we used as much data as was available for distillation (see \u00a73 for details), except in the case of English TD data, in which case we had more than enough data and used only as much as was necessary to reach approximately 3GB in total.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 350, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For classifier training, we train a structured self- Table 7 : Each word in the first column has at least one technical and non-technical sense (e.g., 'Java') and is paired with two terms, one technical and one non-technical that can be used in the same context (e.g., 'coffee' and 'C#'). This table shows the cosine similarity between the embedding for a sentence containing the ambiguous word and the same sentences containing its technical and non-technical alternatives instead, using both the general encoder (Gen. Cosine) and domain-adapted encoder (Tech. Cosine). We show that in most cases the non-technical pairs have a greater cosine similarity when encoded with the general model and the technical pairs have a grater cosine similarity when encoded with the technical model. attention classifier head on each encoder used in evaluation. 19 We conduct two experiments: in the first, we fine-tune the encoder weights, as has been done in previous work such as Devlin et al. 2019; Sanh et al. 2019 ; in the second, we freeze the encoder weights to demonstrate that this approach can be used in contexts where the same underlying encoder is to be used by multiple classifiers, removing the need to encode a text every time it is classified by a different classifier. We compare our domain-adapted, distilled models using the tasks described in Section 3 with two baselines: the teacher model used for distillation (mBERT) and Sanh et al.'s domain-general distilled model (DistilmBERT). In each case, we train and evaluate separate classifiers for each language in the dataset. We evaluate model performance two ways, first fine-tuning the encoder weights during task training and second freezing the encoder weights to test the generalizability of a single encoder to multiple classifiers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 848, |
|
"end": 850, |
|
"text": "19", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 969, |
|
"end": 988, |
|
"text": "Devlin et al. 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 989, |
|
"end": 1005, |
|
"text": "Sanh et al. 2019", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 60, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "7" |
|
}, |
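
{

"text": "A minimal sketch of the frozen-encoder setting: the encoder's parameters are excluded from gradient updates, so only the task-specific head is trained and the same encoder can serve any number of classifiers. The simple mean-pooling linear head below is a stand-in for the structured self-attention classifier, whose details are not reproduced here.\n\nimport torch\nfrom transformers import AutoModel\n\nencoder = AutoModel.from_pretrained('bert-base-multilingual-cased')  # placeholder for the distilled encoder\nfor param in encoder.parameters():\n    param.requires_grad = False  # freeze the encoder so it stays task-agnostic\n\nclass ClassifierHead(torch.nn.Module):\n    def __init__(self, hidden_size, num_labels):\n        super().__init__()\n        self.linear = torch.nn.Linear(hidden_size, num_labels)\n\n    def forward(self, last_hidden_state):\n        # Mean-pool the frozen encoder output and project to label logits.\n        return self.linear(last_hidden_state.mean(dim=1))\n\nhead = ClassifierHead(encoder.config.hidden_size, num_labels=2)\noptimizer = torch.optim.Adam(head.parameters(), lr=1e-4)  # only the head's weights are updated",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "7"

},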
|
{ |
|
"text": "The results for each language and encoder are broken down for each experiment in Table 8 , where encoder weights were fine-tuned or frozen during task training. On average for these two tasks, the domain-adapted model achieves an F1 score that is 1.2% greater respective to the teacher mBERT model (an absolute difference of 0.9 F1) and 2.3% better respective to the domain-general Distilm-BERT model (an absolute difference of 1.7 F1).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 88, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Performance is better for all models when the encoder weights are fine-tuned during task training. Still, domain-adapted models perform better relative to the baselines in both cases. The average absolute improvement of the domain-adapted models relative to the teacher model is 1.1% and the relative improvement over the domain-general distilled model is 2.1% when the encoder weights are tuned, while these improvements are 1.3% and 2.5% when the weights are frozen. This difference between the two training scenarios may be accounted for by the encoder undergoing some degree of domain adaptation during fine-tuning. Intuitively, the domain-general models would benefit from this more than the domain-adapted models, which are already tuned to the domain. Even so, the performance of domain-adapted models relative to the domain-general models when the encoder weights are fine-tuned demonstrates that domain adaptation is still beneficial in this scenario and contributes larger improvements in model performance than fine-tuning during task training alone.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Each result in Table 8 represents an average over 10 iterations of training and evaluation. We calculate the statistical significance of the improvement between the domain adapted models and baselines using the Kolmogorov-Smirnov (KS) test, as our data is non-normal (Brownlee, 2019) . We found that the improvement in F1 score due to domain adaptation for the CC-DistilmBERT model was statistically significant (p < .05) compared to both baselines on all languages except Japanese, despite the small size of our datasets. The TD-DistilmBERT model's improvement in F1 over the DistilmBERT baseline was also statistically significant on all languages except Japanese, although the support for each class was much smaller for the TD task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 267, |
|
"end": 283, |
|
"text": "(Brownlee, 2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 22, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8" |
|
}, |
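
{

"text": "A minimal sketch of this significance test, using scipy's two-sample Kolmogorov-Smirnov test on the per-run F1 scores; the score lists are illustrative placeholders, not the reported results.\n\nfrom scipy.stats import ks_2samp\n\n# Ten F1 scores per model, one per training/evaluation run (placeholder values).\nf1_domain_adapted = [0.842, 0.838, 0.845, 0.840, 0.843, 0.839, 0.844, 0.841, 0.837, 0.846]\nf1_baseline = [0.825, 0.828, 0.822, 0.826, 0.824, 0.827, 0.823, 0.829, 0.821, 0.826]\n\nstatistic, p_value = ks_2samp(f1_domain_adapted, f1_baseline)\nprint(f'KS statistic = {statistic:.3f}, p = {p_value:.4f}')\n# The improvement is treated as statistically significant when p < 0.05.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "8"

},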
|
{ |
|
"text": "For Japanese, in the case of the TD model, very little data was available for domain adaptation (0.06 GB; shown in Table 1 ). We can speculate that this Table 8 : Results for freezing and fine-tuning encoder weights during classifier training. F1, Precision and Recall are for the 'intentful' class in the CC binary classification task and are macro averages for the TD multi label task. Each result is an average across 10 iterations of training and evaluation. For the CC-DistilmBERT and TD-DistilmBERT models, \u22c6 indicates a statistically significant difference (p < 0.05) between that model and DistilmBERT and \u2020 indicates a statistically significant difference (p < 0.05) between that model and mBERT. lack of data made adaptation via knowledge distillation less effective. In this case, adaptation via fine-tuning was relatively more effective than adaptation via knowledge distillation. For the CC model, while somewhat more Japanese data was available (0.44 GB), it is still less relative to English and Spanish, so we again attribute the less significant results for Japanese to data scarcity. We note that for this domain, even less data was available for Portuguese (0.35 GB) than Japanese, although for Portuguese the domain adapted model did show a significant improvement over mBERT. In this case, we speculate that the domain adapted model's performance on Portuguese benefited from lexical overlap with the Spanish training data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 122, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 160, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We addressed the problem of encoder scalability in the context of conversational commerce by showing that knowledge distillation with domainspecific data reduces model size, while simultaneously improving model performance. This approach allows for the training of an encoder that can be used across a variety of languages, is smaller than a state-of-the-art model like mBERT, and performs better on domain-specific tasks. Because our approach uses only training data for the domain and languages of interest, less data is necessary for training, reducing the time, cost and environmental impact of training, while accommodating limited data availability. A key advantage of domain adaptation during encoder rather than classifier training is that it allows for the deployment of a single encoder, which can serve multiple classifiers at runtime. This reduces storage and maintenance cost, and due to the much larger size of the encoders compared with the classifiers, provides dramatically better scalability in real-world, e-commerce applications that must support multiple languages and tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "For example, a BERT encoder has hundreds of millions of parameters (seeTable 7), while a self attention classifier like the one used in Section 8 has about 600,000.2Houlsby et al. (2019), inter alios, have proposed similar architectures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To be clarified after the anonymity period.4 For customer privacy, all personally identifiable information is masked before we use the data, but even after masking we cannot make the data or models publicly available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here and throughout the paper, reported results are the mean of 10 random initializations.14 The procedure for mBERT is detailed at https://github. com/google-research/bert/blob/master/multilingual.md and DistilmBERT used data sampled in the same way(Sanh, pc).15 The original numbers may have been smaller as our estimate is based on the wiki data dumps on Oct. 1, 2020 and the models were trained before that time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Training with 3 GB took < 2 days using 8 A100 GPUs.17 The full collection of sentences is in the appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The embedding of the <CLS> token is often used as a sentence embedding when using BERT. However, following, we distill models without using the next sentence prediction task, so the embedding for <CLS> is less likely to be a good representation of the sentence.18 Using Euclidean distance yielded similar results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Code and data for reproduction are included in supplementary materials and will be made publicly available upon publication.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The sentences used to produce the embeddings compared in Table 8 ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 64, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Appendix: Sentence Pairs", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "How to use statistical significance tests to interpret machine learning results. Accessed", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Brownlee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Brownlee. 2019. How to use statistical signifi- cance tests to interpret machine learning results. Ac- cessed: October 11, 2021.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Model compression", |
|
"authors": [ |
|
{ |
|
"first": "Cristian", |
|
"middle": [], |
|
"last": "Bucil\u01ce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandru", |
|
"middle": [], |
|
"last": "Niculescu-Mizil", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "535--541", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cristian Bucil\u01ce, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Pro- ceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535-541.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Making neural machine reading comprehension faster", |
|
"authors": [ |
|
{ |
|
"first": "Debajyoti", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.00796" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Debajyoti Chatterjee. 2019. Making neural ma- chine reading comprehension faster. arXiv preprint arXiv:1904.00796.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Distilling multiple domains for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Currey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prashant", |
|
"middle": [], |
|
"last": "Mathur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Georgiana", |
|
"middle": [], |
|
"last": "Dinu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4500--4511", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anna Currey, Prashant Mathur, and Georgiana Dinu. 2020. Distilling multiple domains for neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 4500-4511.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. Proceedings of NAACL-HLT, pages 4171- -4186.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "2020. Don't stop pretraining: adapt language models to domains and tasks", |
|
"authors": [ |
|
{ |
|
"first": "Ana", |
|
"middle": [], |
|
"last": "Suchin Gururangan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Marasovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Doug", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Downey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.10964" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Eie: efficient inference engine on compressed deep neural network", |
|
"authors": [ |
|
{ |
|
"first": "Song", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xingyu", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huizi", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Pu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ardavan", |
|
"middle": [], |
|
"last": "Pedram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Horowitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dally", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ACM SIGARCH Computer Architecture News", |
|
"volume": "44", |
|
"issue": "3", |
|
"pages": "243--254", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pe- dram, Mark A Horowitz, and William J Dally. 2016. Eie: efficient inference engine on compressed deep neural network. ACM SIGARCH Computer Architec- ture News, 44(3):243-254.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning both weights and connections for efficient neural network", |
|
"authors": [ |
|
{ |
|
"first": "Song", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Pool", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Dally", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "1135--1143", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. Advances in neural infor- mation processing systems, 28:1135-1143.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Unsupervised domain adaptation of contextualized embeddings for sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Xiaochuang", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4238--4248", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaochuang Han and Jacob Eisenstein. 2019. Unsu- pervised domain adaptation of contextualized em- beddings for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4238-4248.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Distilling the knowledge in a neural network", |
|
"authors": [ |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1503.02531" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Parameter-efficient transfer learning for nlp", |
|
"authors": [ |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Houlsby", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Giurgiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stanislaw", |
|
"middle": [], |
|
"last": "Jastrzebski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruna", |
|
"middle": [], |
|
"last": "Morrone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quentin", |
|
"middle": [], |
|
"last": "De Laroussilhe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Gesmundo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Attariyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Gelly", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2790--2799", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In In- ternational Conference on Machine Learning, pages 2790-2799. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "RoBERTa: A robustly optimized BERT pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A teacher-student learning approach for unsupervised domain adaptation of sequence-trained asr models", |
|
"authors": [ |
|
{ |
|
"first": "Vimal", |
|
"middle": [], |
|
"last": "Manohar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pegah", |
|
"middle": [], |
|
"last": "Ghahremani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Povey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Khudanpur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "2018 IEEE Spoken Language Technology Workshop (SLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "250--257", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vimal Manohar, Pegah Ghahremani, Daniel Povey, and Sanjeev Khudanpur. 2018. A teacher-student learn- ing approach for unsupervised domain adaptation of sequence-trained asr models. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 250- 257. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Building a multi-domain neural machine translation model using knowledge distillation", |
|
"authors": [ |
|
{ |
|
"first": "Idriss", |
|
"middle": [], |
|
"last": "Mghabbar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pirashanth", |
|
"middle": [], |
|
"last": "Ratnamogan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.07324" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Idriss Mghabbar and Pirashanth Ratnamogan. 2020. Building a multi-domain neural machine translation model using knowledge distillation. arXiv preprint arXiv:2004.07324.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Semi-supervised sequence tagging with bidirectional language models", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Waleed", |
|
"middle": [], |
|
"last": "Ammar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chandra", |
|
"middle": [], |
|
"last": "Bhagavatula", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Russell", |
|
"middle": [], |
|
"last": "Power", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1705.00108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E Peters, Waleed Ammar, Chandra Bhaga- vatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. arXiv preprint arXiv:1705.00108.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Adapt or get left behind: Domain adaptation through bert language model finetuning for aspect-target sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rietzler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Stabinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Opitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Engl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4933--4941", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2020. Adapt or get left behind: Domain adaptation through bert language model finetuning for aspect-target sentiment classification. In Proceed- ings of the 12th Language Resources and Evaluation Conference, pages 4933-4941.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Knowledge adaptation: Teaching to adapt", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Parsa", |
|
"middle": [], |
|
"last": "Ghaffari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John G", |
|
"middle": [], |
|
"last": "Breslin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1702.02052" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2017. Knowledge adaptation: Teaching to adapt. arXiv preprint arXiv:1702.02052.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Knowledge distillation for BERT unsupervised domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Minho", |
|
"middle": [], |
|
"last": "Ryu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kichun", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.11478" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minho Ryu and Kichun Lee. 2020. Knowledge dis- tillation for BERT unsupervised domain adaptation. arXiv preprint arXiv:2010.11478.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NeurIPS EMC 2 Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. In NeurIPS EMC 2 Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT", |
|
"authors": [ |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhen", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiayu", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linjian", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhewei", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Gholami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Mahoney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kurt", |
|
"middle": [], |
|
"last": "Keutzer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8815--8821", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020. Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT. In AAAI, pages 8815-8821.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Distilling taskspecific knowledge from BERT into simple neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yao", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linqing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lili", |
|
"middle": [], |
|
"last": "Mou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Vechtomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1903.12136" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task- specific knowledge from BERT into simple neural networks. arXiv preprint arXiv:1903.12136.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "An effective domain adaptive post-training method for bert in response selection", |
|
"authors": [ |
|
{ |
|
"first": "Taesun", |
|
"middle": [], |
|
"last": "Whang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyub", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chanhee", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kisu", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongsuk", |
|
"middle": [], |
|
"last": "Oh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heuiseok", |
|
"middle": [], |
|
"last": "Lim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "1585--1589", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and Heuiseok Lim. 2020. An effective domain adaptive post-training method for bert in re- sponse selection. Interspeech 2020, page 1585-1589.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Quantization-based language model compression", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [ |
|
"W", |
|
"D" |
|
], |
|
"last": "Whittaker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bhiksha", |
|
"middle": [], |
|
"last": "Raj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Seventh European Conference on Speech Communication and Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward WD Whittaker and Bhiksha Raj. 2001. Quantization-based language model compression. In Seventh European Conference on Speech Communi- cation and Technology.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Kl-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaisheng", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Seide", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "2013 IEEE International Conference on Acoustics, Speech and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7893--7897", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dong Yu, Kaisheng Yao, Hang Su, Gang Li, and Frank Seide. 2013. Kl-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition. In 2013 IEEE International Con- ference on Acoustics, Speech and Signal Processing, pages 7893-7897. IEEE.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Training set size vs Macro F1 (see", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Total amount of data used to distil each domainadapted model in GB of uncompressed text", |
|
"num": null, |
|
"content": "<table><tr><td>model CC-Distil -mBERT TD-Distil -mBERT</td><td>Data per language (GB) esp jap por 0.95 0.60 0.44 0.35 0.00 eng rus (GB) Total 2.34 1.58 0.34 0.06 0.34 0.75 3.07</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">: # natural (N) and translated (T) messages per split for the CC classification task</td></tr><tr><td>Split Train N Data T Val N T Test N</td><td>eng 4951 1364 1810 1845 esp jap por 4306 4306 4306 1236 255 286 323 1020 1020 1020 10078 719 908 909</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "# messages per label in the CC evaluation set", |
|
"num": null, |
|
"content": "<table><tr><td>label intentful not intentful 6030 298 435 373 eng esp jap por 4050 423 475 537</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "The data for this task comes from the anonymized dumps for the StackOverflow topics in English [eng], Japanese [jap], Portuguese [por], Russian [rus] and Spanish [esp].", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "# posts per label in the TD evaluation set", |
|
"num": null, |
|
"content": "<table><tr><td>label c# java php javascript 168 206 108 197 182 eng esp jap por rus 111 110 56 120 109 118 127 57 148 174 126 138 57 144 168 android 110 115 71 115 108 jquery 132 114 53 136 106 python 107 106 51 104 101 html 109 103 50 110 118 c++ 104 104 51 101 116 ios 110 100 53 102 102</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Model size and and average inference speed on single-thread CPU with a batch size of 1", |
|
"num": null, |
|
"content": "<table><tr><td>Model mBERT DistilmBERT TD-DistilmBERT CC-DistilmBERT</td><td># Params Inf time per message (millions) (milliseconds) 178 305 135 188 135 189 135 189</td></tr></table>" |
|
}, |
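Table 6 (TABREF6) reports parameter counts and per-message inference time on a single CPU thread with batch size 1. The paper does not publish its benchmarking harness, and the TD-/CC-DistilmBERT checkpoints are internal, so the sketch below is only an illustration of how such a measurement can be set up: it assumes the public `bert-base-multilingual-cased` and `distilbert-base-multilingual-cased` checkpoints from Hugging Face `transformers`, pins PyTorch to one CPU thread, and times batch-size-1 forward passes per message.

```python
# Illustrative sketch only: the paper's TD-/CC-DistilmBERT checkpoints are not public,
# so this compares the public mBERT and DistilmBERT checkpoints under a
# single-thread, batch-size-1 CPU setup similar to the one described for Table 6.
import time

import torch
from transformers import AutoModel, AutoTokenizer

torch.set_num_threads(1)  # single CPU thread, as in the reported setup


def mean_latency_ms(model_name: str, messages, n_warmup: int = 5) -> float:
    """Average forward-pass time per message in milliseconds."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    model.eval()
    timings = []
    with torch.no_grad():
        for i, text in enumerate(messages):
            inputs = tokenizer(text, return_tensors="pt")  # batch size 1
            start = time.perf_counter()
            model(**inputs)
            if i >= n_warmup:  # discard warm-up iterations
                timings.append((time.perf_counter() - start) * 1000.0)
    return sum(timings) / len(timings)


# Toy multilingual messages, invented for illustration
sample = ["where is my order?", "necesito cambiar mi dirección"] * 20
for name in ["bert-base-multilingual-cased", "distilbert-base-multilingual-cased"]:
    print(name, f"{mean_latency_ms(name, sample):.1f} ms/message")
```

Warm-up iterations are discarded because the first few CPU forward passes are typically dominated by allocation and caching effects rather than steady-state inference cost.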
|
"TABREF7": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Macro Precision, Recall and F1 on TD evaluation task for models distilled with increasing data quantities. The number in each TD model name corresponds to the GB of uncompressed text used for training.13", |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"3\">Model mBERT * DistilmBERT * Wiki3.0* TD1.5 TD1.8 TD2.1 TD2.4 TD2.7 TD3.0 TD3.3 TD3.6 TD3.9 TD4.2 TD4.5</td><td colspan=\"3\">Precision Recall 0.796 0.626 0.692 F1 0.796 0.602 0.679 0.778 0.572 0.651 0.789 0.530 0.627 0.788 0.500 0.605 0.808 0.629 0.700 0.809 0.651 0.718 0.807 0.629 0.705 0.817 0.664 0.727 0.823 0.658 0.728 0.814 0.661 0.724 0.821 0.636 0.710 0.817 0.648 0.718 0.816 0.647 0.714</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">*=baseline model</td></tr><tr><td/><td>0.75</td><td/><td/><td>TD3.0</td><td/></tr><tr><td/><td>0.7</td><td/><td/><td/><td/></tr><tr><td>F1 Score</td><td>0.65</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td>F1 Score</td></tr><tr><td/><td>0.6</td><td/><td/><td/><td>mBERT</td></tr><tr><td/><td/><td/><td/><td/><td>DistilmBERT</td></tr><tr><td/><td/><td/><td/><td/><td>Wiki3.0</td></tr><tr><td/><td>0.55</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr><tr><td/><td/><td/><td colspan=\"3\">Training set size (GB)</td></tr></table>" |
|
} |
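Table 7 (TABREF7) reports macro-averaged precision, recall, and F1 over the ten StackOverflow tag labels. The paper does not state which implementation computes these metrics; the snippet below is a minimal sketch, assuming scikit-learn's standard macro averaging (per-label metrics averaged with equal weight per label), with toy labels and predictions invented purely for illustration.

```python
# Minimal sketch of macro P/R/F1 as reported in Table 7; scikit-learn is an
# assumption (the paper does not state its evaluation code), and the example
# labels/predictions below are made up.
from sklearn.metrics import precision_recall_fscore_support

labels = ["c#", "java", "php", "javascript", "android",
          "jquery", "python", "html", "c++", "ios"]
y_true = ["java", "python", "ios", "java", "html"]
y_pred = ["java", "python", "ios", "php", "html"]

# average="macro": compute each metric per label, then take the unweighted mean
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, average="macro", zero_division=0
)
print(f"macro P={precision:.3f}  R={recall:.3f}  F1={f1:.3f}")
```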
|
} |
|
} |
|
} |