{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:18:29.701504Z" }, "title": "Cost-effective Deployment of BERT Models in a Serverless Environment", "authors": [ { "first": "Katar\u00edna", "middle": [], "last": "Bene\u0161ov\u00e1", "suffix": "", "affiliation": {}, "email": "kbenesova@slido.com" }, { "first": "Andrej", "middle": [], "last": "\u0160vec", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Marek", "middle": [], "last": "\u0160uppa", "suffix": "", "affiliation": {}, "email": "msuppa@slido.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this study we demonstrate the viability of deploying BERT-style models to serverless environments in a production setting. Since the freely available pre-trained models are too large to be deployed in this way, we utilize knowledge distillation and fine-tune the models on proprietary datasets for two real-world tasks: sentiment analysis and semantic textual similarity. As a result, we obtain models that are tuned for a specific domain and deployable in serverless environments. The subsequent performance analysis shows that this solution results in latency levels acceptable for production use and that it is also a cost-effective approach for small-to-medium size deployments of BERT models, all without any infrastructure overhead.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this study we demonstrate the viability of deploying BERT-style models to serverless environments in a production setting. Since the freely available pre-trained models are too large to be deployed in this way, we utilize knowledge distillation and fine-tune the models on proprietary datasets for two real-world tasks: sentiment analysis and semantic textual similarity. As a result, we obtain models that are tuned for a specific domain and deployable in serverless environments. The subsequent performance analysis shows that this solution results in latency levels acceptable for production use and that it is also a cost-effective approach for small-to-medium size deployments of BERT models, all without any infrastructure overhead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine learning models are notoriously hard to bring to production environments. One of the reasons behind is the large upfront infrastructure investment it usually requires. This is particularly the case with large pre-trained language models, such as BERT (Devlin et al., 2018) or GPT (Radford et al., 2019) whose size requirements make them difficult to deploy even when infrastructure investment is not of concern.", "cite_spans": [ { "start": 259, "end": 280, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF9" }, { "start": 288, "end": 310, "text": "(Radford et al., 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At the same time, the serverless architecture with minimal maintenance requirements, automatic scaling and attractive cost, is becoming more and more popular in the industry. It is very well suited for stateless applications such as model predictions, especially in cases when the prediction load is unevenly distributed. 
Since the serverless platforms have strict limits, especially on the size of the deployment package, it is not immediately obvious it may be a viable platform for deployment of models based on large pre-trained language models. * Equal contribution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we describe our experience with deploying BERT-based models to serverless environments in a production setting. We consider two tasks: sentiment analysis and semantic textual similarity. While the standard approach would be to fine-tune the pre-trained models, this would not be possible in our case, as the resulting models would be too large to fit within the limits imposed by serverless environments. Instead, we adopt a knowledge distillation approach in combination with smaller BERT-based models. We show that for some of the tasks we are able to train models that are an order of magnitude smaller while reporting performance similar to that of the larger ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, we also evaluate the performance of the deployed models. Our experiments show that their latency is acceptable for production environments. Furthermore, the reported costs suggest it is a very cost-effective option, especially when the expected traffic is small-to-medium in size (a few requests per second) and potentially unevenly distributed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite a number of significant advances in various NLP approaches over the recent years, one of the limiting factors hampering their adoption is the large number of parameters that these models have, which leads to large model size and increased inference time. This may limit their use in resourceconstrained mobile devices or any other environment in which model size and inference time is the limiting factor, while negatively affecting the environmental costs of their use (Strubell et al., 2019) .", "cite_spans": [ { "start": 478, "end": 501, "text": "(Strubell et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "This has led to a significant body of work focusing on lowering both the model size and inference time, while incurring minimal performance penalty. One of the most prominent approaches include Knowledge Distillation (Bucilu\u01ce et al., 2006; Hinton et al., 2015) , in which a smaller model (the \"student\") is trained to reproduce the behavior of a larger model (the \"teacher\"). It was used to produce smaller BERT alternatives, such as:", "cite_spans": [ { "start": 217, "end": 239, "text": "(Bucilu\u01ce et al., 2006;", "ref_id": "BIBREF7" }, { "start": 240, "end": 260, "text": "Hinton et al., 2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "\u2022 TinyBERT (Jiao et al., 2019) , which appropriates the knowledge transfer method to the Transformer architecture and applies it in both the pretraining and downstream fine-tuning stage. 
The resulting model is more than 7x smaller and 9x faster in terms of inference.", "cite_spans": [ { "start": 11, "end": 30, "text": "(Jiao et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "\u2022 MobileBERT (Sun et al., 2020) , which only uses knowledge distilation in the pre-training stage and reduces the model's width (layer size) as opposed to decreasing the number of layers it consists of. The final task-agnostic model is more than 3x smaller and 5x faster than the original BERT BASE .", "cite_spans": [ { "start": 13, "end": 31, "text": "(Sun et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "When decreasing the model size leads to decreased latency, it can also have direct business impact. This has been demonstrated by Google, which found out that increasing web search latency from 100 ms to 400 ms reduced the number of searches per user by 0.2 % to 0.6 % (Brutlag, 2009) . A similar experiment done by Booking.com has shown that an increase in latency of about 30 % results in about 0.5 percentage points decrease in conversion rates, which the authors report as a \"relevant cost for our business\" (Bernardi et al., 2019) .", "cite_spans": [ { "start": 269, "end": 284, "text": "(Brutlag, 2009)", "ref_id": "BIBREF6" }, { "start": 512, "end": 535, "text": "(Bernardi et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Each serverless platform has its specifics, which can have different impact on different use cases. Various works, such as (Back and Andrikopoulos, 2018; Wang et al., 2018; Lee et al., 2018) , provide a comparison of performance differences between the available platforms. In order to evaluate specific use cases, various benchmark suites have been introduced such as FunctionBench (Kim and Lee, 2019) , which includes language generation as well as sentiment analysis test case.", "cite_spans": [ { "start": 123, "end": 153, "text": "(Back and Andrikopoulos, 2018;", "ref_id": "BIBREF2" }, { "start": 154, "end": 172, "text": "Wang et al., 2018;", "ref_id": "BIBREF26" }, { "start": 173, "end": 190, "text": "Lee et al., 2018)", "ref_id": "BIBREF13" }, { "start": 383, "end": 402, "text": "(Kim and Lee, 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Possibly the closest published work comparable to ours is (Tu et al., 2018) , in which the authors demonstrate the deployment of neural network models, trained for short text classification and similarity tasks in a serverless context. Since at the time of its publication the PyTorch deployment ecosystem has been in its nascent stages, the authors had to build it from source, which complicates practical deployment.", "cite_spans": [ { "start": 58, "end": 75, "text": "(Tu et al., 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "To the best of our knowledge, our work is the first to show the viability of deploying large pretrained language models (such as BERT and its derivatives) in the serverless environment. Media, Inc, 2019) shows that the adoption of serverless was successful for the majority of the respondents' companies. 
They recognize reduced operational costs, automatic scaling with demand and elimination of concerns for server maintenance as the main benefits.", "cite_spans": [ { "start": 186, "end": 203, "text": "Media, Inc, 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Since the functions deployed in a serverless environment share underlying hardware, OS and runtime (Lynn et al., 2017) , there are naturally numerous limitations to what can be run in such environment. The most pronounced ones include:", "cite_spans": [ { "start": 99, "end": 118, "text": "(Lynn et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "AWS", "sec_num": null }, { "text": "\u2022 Maximum function size, mostly limited to a few hundreds of MBs (although some providers do not have this limitation). In the context of deployment of a machine learning model, this can significantly limit the model size as well as the selection of libraries to be used to execute the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AWS", "sec_num": null }, { "text": "\u2022 Maximum memory of a few GBs slows down or makes it impossible to run larger models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AWS", "sec_num": null }, { "text": "\u2022 No acceleration. Serverless environments do not support GPU or TPU acceleration which can significantly increase the inference time for larger models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AWS", "sec_num": null }, { "text": "A more detailed list of the main limitations of the three most common serverless providers can be found in Table 1 . It suggests that any model deployed in this environment will need to be small in size and have minimal memory requirements. These requirements significantly limit the choice of models appropriate for this environment and warrants a specific training regimen, which we describe in the next section. Figure 1 : Schema of the distillation pipeline of BERT BASE for sentiment analysis. BERT BASE_CLS is fine-tuned on the gold dataset and then used for labelling a large amount of data (silver dataset) that serves as a training set for distillation to TinyBERT. The distilled model is exported to the ONNX format and deployed to AWS Lambda (see Section 5). The same pipeline was executed for MobileBERT.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 415, "end": 423, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "AWS", "sec_num": null }, { "text": "In the two case studies presented in this section, we first consider BERT-provided classification token ([CLS] token) an aggregate representation of a short text (up to 300 characters) for the sentiment analysis task. Secondly, we utilize the embeddings produced by Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) for estimating the semantic similarity of a pair of short texts.", "cite_spans": [ { "start": 288, "end": 316, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4" }, { "text": "Since deploying even the smaller BERT BASE with over 400MB in size is not possible in our setup, in the following cases studies we explore several alternative approaches, such as knowledge distillation into smaller models or training a smaller model directly. 
To do so, we use TinyBERT (Jiao et al., 2019) and MobileBERT (Sun et al., 2020) having about 56 MB and 98 MB in size, respectively.", "cite_spans": [ { "start": 286, "end": 305, "text": "(Jiao et al., 2019)", "ref_id": "BIBREF11" }, { "start": 321, "end": 339, "text": "(Sun et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Model training", "sec_num": "4" }, { "text": "One of the direct applications of the special [CLS] token of BERT is the analysis of sentiment (Li et al., 2019) . We formulate this problem as classification into three categories: Positive, Negative and Neutral.", "cite_spans": [ { "start": 95, "end": 112, "text": "(Li et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "BERT for sentiment analysis", "sec_num": "4.1" }, { "text": "The task is divided into two stages: first, we finetune BERT BASE using a labelled domain-specific dataset of 68K training examples and 9K examto deploy a container of size up to 10 GB. ples for validation. Then we proceed with knowledge distillation into a smaller model with faster inference: we label a large amount of data by the fine-tuned BERT BASE and use the dataset to train a smaller model with a BERT-like architecture. The distillation pipeline is illustrated in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 475, "end": 483, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "BERT for sentiment analysis", "sec_num": "4.1" }, { "text": "To utilize BERT BASE for a classification task, an additional head must be added on top of the Transformer blocks, i.e. a linear layer on top of the pooled output. The additional layer typically receives only the representation of the special [CLS] token as its input. To obtain the final prediction, the output of this layer is passed through a Softmax layer producing the probability distribution over the predicted classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT BASE", "sec_num": "4.1.1" }, { "text": "We fine-tuned BERT BASE for sequence classification (BERT BASE_CLS ) with this adjusted architecture for our task using a labelled dataset of size 68K consisting of domain-specific data. We trained the model for 8 epochs using AdamW optimizer with small learning rate 3 \u00d7 10 \u22125 , L2 weight decay of 0.01 and batch size 128.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT BASE", "sec_num": "4.1.1" }, { "text": "To cope with the significant class imbalance 2 and to speed up the training, we sampled class-balanced batches in an under-sampling fashion, while putting the examples of similar length together (for the sake of a more effective processing of similarly padded data). Using this method, we were able to at least partially avoid over-fitting on the largest class and reduce the training time about 2.5 times.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT BASE", "sec_num": "4.1.1" }, { "text": "We also tried an alternative fine-tuning approach by freezing BERT BASE layers and attaching a small trainable network on top of it. For the trainable part, we experimented with 1-layer bidirectional GRU of size 128 with dropout of 0.25 plus a linear layer and Softmax output. 
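For illustration, a minimal PyTorch sketch of this frozen-encoder variant is given below; the layer sizes and the three output classes follow the description above, while the checkpoint name is a placeholder:

import torch
import torch.nn as nn
from transformers import BertModel

class FrozenBertGruClassifier(nn.Module):
    # Sketch of the frozen-encoder baseline: the BERT_BASE weights are not updated,
    # only the bidirectional GRU head and the output projection are trained.
    def __init__(self, num_classes=3):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')  # placeholder checkpoint
        for param in self.bert.parameters():
            param.requires_grad = False          # freeze the encoder
        self.gru = nn.GRU(input_size=768, hidden_size=128,
                          batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.25)
        self.classifier = nn.Linear(2 * 128, num_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        _, h_n = self.gru(hidden)                # h_n has shape (2, batch, 128)
        pooled = self.dropout(torch.cat([h_n[0], h_n[1]], dim=-1))
        return self.classifier(pooled)           # logits; Softmax is applied in the loss
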
BERT BASE_CLS outperformed this approach significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT BASE", "sec_num": "4.1.1" }, { "text": "The accuracy evaluation of both fine-tuned BERT BASE models on the validation dataset can be found in Table 2 . In order to meet the function size requirements of the target serverless environments, we proceed to the knowledge distillation stage.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 109, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Fine-tuning BERT BASE", "sec_num": "4.1.1" }, { "text": "Having access to virtually unlimited supply of unlabelled domain-specific examples, we labelled almost 900K of them by the fine-tuned BERT BASE_CLS \"teacher\" model and used them as ground truth labels for training a smaller \"student\" model. We experimented with MobileBERT and even smaller TinyBERT as the student models since these are, in comparison to BERT BASE , 3 and 7 times smaller in size, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge distillation to smaller BERT models", "sec_num": "4.1.2" }, { "text": "During training, we sampled the batches in the same way as in Section 4.1.1, except for a smaller batch size of 64. We trained the model for a small number of epochs using AdamW optimizer with learning rate 2 \u00d7 10 \u22125 , weight decay 0.01 and early stopping after 3 epochs in case of TinyBERT and one epoch for MobileBERT (in the following epochs the models no longer improved on the validation set).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge distillation to smaller BERT models", "sec_num": "4.1.2" }, { "text": "For evaluation we used the same validation dataset as for the fine-tuned BERT BASE_CLS described in 4.1. The performance comparison is summarized in Table 2 . We managed to distill the model knowledge into the significantly smaller TinyBERT with only 0.02 points decrease in F1 score (macro-averaged). In case of Mobile-BERT we were able to match the performance of BERT BASE_CLS . These results suggest that the large language models might not be necessary for classification tasks in a real-life scenario.", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Knowledge distillation to smaller BERT models", "sec_num": "4.1.2" }, { "text": "Size (MB) F1 BERT BASE + GRU 426 0.75 BERT BASE_CLS 420 0.84 TinyBERT (distilled) 56 0.82 MobileBERT (distilled) 98 0.84 Table 2 : Comparison of fine-tuned BERT models and smaller distilled models on the validation dataset (macro-averaged F1 score). The slight decrease in Tiny-BERT's performance is an acceptable trade-off for the significant size reduction.", "cite_spans": [ { "start": 70, "end": 81, "text": "(distilled)", "ref_id": null }, { "start": 101, "end": 112, "text": "(distilled)", "ref_id": null } ], "ref_spans": [ { "start": 121, "end": 128, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "The goal of our second case study was to train a model that would generate dense vectors usable for semantic textual similarity (STS) task in our specific domain and be small enough to be deployed in a serverless environment. The generated vectors would then be indexed and queried as part of a duplicate text detection feature of a real-world web application. 
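As a rough sketch of this duplicate-detection step (the similarity threshold and the brute-force scan are illustrative assumptions; a production deployment queries the vector index instead):

import numpy as np

def cosine_similarity(a, b):
    # a, b: 1-D dense vectors produced by the sentence encoder
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_duplicates(query_vector, indexed_vectors, threshold=0.8):
    # Return indices of previously indexed texts whose embedding is close
    # enough to the query embedding to be flagged as duplicate candidates.
    return [i for i, v in enumerate(indexed_vectors)
            if cosine_similarity(query_vector, v) >= threshold]
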
To facilitate this use-case, we use Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) . While the SBERT architecture currently reports state-of-the-art performance on the sentence similarity task, all publicly available pre-trained SBERT models are too large for serverless deployment. The smallest one available is SDistilBERT BASE with on-disk size of 255 MB. We therefore had to train our own SBERT model based on smaller BERT alternatives. We created the smaller SBERT models by employing the TinyBERT and Mobile-BERT into the SBERT architecture, i.e. by adding an embedding averaging layer on top of the BERT model.", "cite_spans": [ { "start": 419, "end": 447, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence-BERT for semantic textual similarity", "sec_num": "4.2" }, { "text": "In order to make the smaller SBERT models perform on the STS task, we fine-tune them in two stages. Firstly, we fine-tune them on standard datasets to obtain a smaller version of the generic SBERT model and then we fine-tune them further on the target domain data. The fine-tuning pipeline is visualized in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 307, "end": 315, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Sentence-BERT for semantic textual similarity", "sec_num": "4.2" }, { "text": "To obtain a smaller version of SBERT, we followed the the SBERT training method as outlined in (Reimers and Gurevych, 2019) . We first finetuned a smaller SBERT alternative on a combination of SNLI (Bowman et al., 2015) (dataset of sentence pairs labeled for entailment, contradiction, and semantic independence) and Multi-Genre NLI Figure 2 : Schema of the fine-tuning pipeline of STinyBERT for STS task. In the first stage, STinyBERT is finetuned on NLI and STSb datasets to obtain Generic STinyBERT. In the second phase, the model is trained further on the target-domain dataset, exported to the ONNX format and deployed to AWS Lambda (see Section 5). The same pipeline was executed for SMobileBERT. SBERT BASE was only fine-tuned on target domain dataset. 
(Williams et al., 2018 ) (dataset of both written and spoken speech in a wide range of styles, degrees of formality, and topics) datasets.", "cite_spans": [ { "start": 95, "end": 123, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF19" }, { "start": 760, "end": 782, "text": "(Williams et al., 2018", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 333, "end": 341, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Generic SBERT fine-tuning", "sec_num": "4.2.1" }, { "text": "We observed the best results when fine-tuning the model for 4 epochs with early stopping based on validation set performance, batch size 16, using Adam optimizer with learning rate 2 \u00d7 10 \u22125 and a linear learning rate warm-up over 10 % of the total training batches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generic SBERT fine-tuning", "sec_num": "4.2.1" }, { "text": "Next, we continued fine-tuning the model on the STSbenchark (STSb) dataset (Cer et al., 2017) using the same approach, except for early stopping based on STSb development set performance and a batch size of 128.", "cite_spans": [ { "start": 75, "end": 93, "text": "(Cer et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Generic SBERT fine-tuning", "sec_num": "4.2.1" }, { "text": "Once we obtained a small enough generic SBERT model, we proceeded to fine-tune it on examples from the target domain. We experimented with two approaches: fine-tuning the model on a small gold dataset and generating a larger silver dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target domain fine-tuning", "sec_num": "4.2.2" }, { "text": "Dataset. We worked with a balanced training set of 2856 pairs. Each pair was assigned to one of three classes: duplicate (target cosine similarity 1), related (0.5) or unrelated (0). The classes were assigned semi-automatically. Duplicate pairs were created by back-translation (Sennrich et al., 2016) using the translation models released as part of the OPUS-MT project (Tiedemann and Thottingal, 2020) . Related pairs were pre-selected and expertly annotated and unrelated pairs were formed by pairing random texts together.", "cite_spans": [ { "start": 278, "end": 301, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF20" }, { "start": 371, "end": 403, "text": "(Tiedemann and Thottingal, 2020)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Target domain fine-tuning", "sec_num": "4.2.2" }, { "text": "Validation and test sets were composed of 665 and 696 expertly annotated pairs, respectively. These sets were not balanced due to the fact that finding duplicate pairs manually is far more difficult than finding related or unrelated pairs, which stems from the nature of the problem. That is why duplicate class forms only approximately 13 % of the dataset, whereas related and unrelated classes each represent roughly 43 %.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target domain fine-tuning", "sec_num": "4.2.2" }, { "text": "Fine-tuning on plain dataset. We first experimented with fine-tuning the generic SBERT model on the train set of the target domain dataset. We call the output model SBERT target. 
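A condensed sketch of this pair-wise fine-tuning, assuming the sentence-transformers training API and the target cosine-similarity labels described above (the base checkpoint path and the toy training pairs are placeholders):

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder: the generic small SBERT model produced in Section 4.2.1.
model = SentenceTransformer('path/to/generic-stinybert')

# Each training pair carries its target cosine similarity:
# 1.0 (duplicate), 0.5 (related) or 0.0 (unrelated).
train_pairs = [  # illustrative placeholder for the annotated pairs
    ('how do I reset my password', 'password reset steps', 1.0),
    ('how do I reset my password', 'pricing of the premium plan', 0.0),
]
train_examples = [InputExample(texts=[a, b], label=score)
                  for a, b, score in train_pairs]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_loader, train_loss)],
    epochs=8,
    warmup_steps=int(0.1 * len(train_loader) * 8),  # 10% linear warm-up
    optimizer_params={'lr': 2e-5},
)
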
We fine-tuned it for 8 epochs with early stopping based on validation set performance, batch size 64, Adam optimizer with learning rate 2 \u00d7 10 \u22125 and a linear learning rate warm-up over 10 % of the total training batches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target domain fine-tuning", "sec_num": "4.2.2" }, { "text": "Extending the dataset. Since we had a lot of data without annotations available, we also experimented with extending the dataset and fine-tuning Augmented SBERT (Thakur et al., 2020) .", "cite_spans": [ { "start": 161, "end": 182, "text": "(Thakur et al., 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Target domain fine-tuning", "sec_num": "4.2.2" }, { "text": "We pre-selected 379K duplicate candidates using BM25 (Amati, 2009) and annotated them using a pre-trained cross-encoder based on RoBERTa LARGE . In the annotated data, low similarity values were majorly prevalent (median similarity was 0.18). For this reason, we needed to balance the dataset by undersampling the similarity bins with higher number of samples to get to a final balanced dataset of 32K pairs. We refer to the original expert annotations as gold data and to the cross-encoder annotations as silver data.", "cite_spans": [ { "start": 53, "end": 66, "text": "(Amati, 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Target domain fine-tuning", "sec_num": "4.2.2" }, { "text": "After creating the silver dataset, we first finetuned the model on the silver data and then on the gold data. We call the model fine-tuned on augmented target dataset AugSBERT. Correct hyperparameter selection was crucial for a successful fine-tuning. It was especially necessary to lower the learning rate for the final fine-tuning on the gold data and set the right batch sizes. For the silver dataset we used a learning rate of 2 \u00d7 10 \u22125 and batch size of 64. For the final fine-tuning on the gold dataset we used a lower learning rate of 2 \u00d7 10 \u22126 and a batch size of 16.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target domain fine-tuning", "sec_num": "4.2.2" }, { "text": "As we can see in Table 3 , smaller BERT alternatives can compete with SBERT BASE . AugSMobile-BERT manages to reach 93 % of the performance of SBERT BASE on the target dataset while being more than 3 times smaller in size.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 24, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.2.3" }, { "text": "We believe that the lower performance of smaller models is not only caused by the them having less parameters, but it also essentially depends on the size of the model's output dense vector. Tiny-BERT's output embedding size is 312 and Mo-bileBert's is 512, whereas BERT BASE outputs embeddings of size 768. This would in line with the findings published in (Wieting and Kiela, 2019) which state that even random projection to a higher dimension leads to increased performance.", "cite_spans": [ { "start": 358, "end": 383, "text": "(Wieting and Kiela, 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2.3" }, { "text": "As described in Section 3, numerous limitations must be satisfied when deploying a model to a serverless environment, among which the size of the deployment package is usually the major one. 
The deployment package consists of the function code, runtime libraries and in our case a model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deployment", "sec_num": "5" }, { "text": "STSb Target STinyBERT NLI Table 3 : Spearman rank correlation between the cosine similarity of dense vectors and true labels measured for individual models on the test set of the STSbenchmark dataset (STSb column) and on the test set of the target domain dataset (Target column). The values are multiplied by 100 for convenience. We also present SBERT BASE performance as baseline. The model with the best performance on the target domain dataset, that is also deployable in serverless environment, is highlighted.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 26, "text": "Target STinyBERT NLI", "ref_id": null }, { "start": 27, "end": 34, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "In order to fit all of the above in a few hundreds of MBs allowed in the serverless environments, standard deep learning libraries cannot be used: the standard PyTorch wheel has 400 MB (Paszke et al., 2019) and TensorFlow is 850 MB in size (Abadi et al., 2015) .", "cite_spans": [ { "start": 185, "end": 206, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF17" }, { "start": 240, "end": 260, "text": "(Abadi et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Model inference engine", "sec_num": "5.1" }, { "text": "ONNX Runtime. We therefore used a smaller model interpreter library called ONNX Runtime (Bai et al., 2019) , which is mere 14 MB in size, leaving a lot of space for the model. Prior to executing the model by the ONNX Runtime library, it needs to be converted to the ONNX format. This can be done using off-the-shelf tools, for instance the Hugging Face transformers library (Wolf et al., 2020) is shipped with a simple out-of-the-box script to convert BERT models to ONNX.", "cite_spans": [ { "start": 88, "end": 106, "text": "(Bai et al., 2019)", "ref_id": "BIBREF3" }, { "start": 374, "end": 393, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Model inference engine", "sec_num": "5.1" }, { "text": "TensorFlow Lite. It is also possible to use the TensorFlow Lite interpreter library (Abadi et al., 2015) , which is 6 MB in size. However, we only used ONNX in our deployments as we had problems converting more complex BERT models to TensorFlow Lite format. Table 4 : Performance comparison between the Amazon Web Services (AWS) and Google Cloud Platform (GCP) serverless environments. Numbers denote execution time in miliseconds with 1GB of RAM allocated for the deployed function. q50, q95 and q99 denote the 0.5, 0.95 and 0.99 quantiles, respectively.", "cite_spans": [ { "start": 84, "end": 104, "text": "(Abadi et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 258, "end": 265, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Model inference engine", "sec_num": "5.1" }, { "text": "After training the models and converting them into the ONNX format, we deployed them to different serverless environments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Serverless deployment", "sec_num": "5.2" }, { "text": "We measured the performance of deployed models in scenarios with various amounts of allocated memory by making them predict on more than 5000 real-world examples. 
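For reference, the deployed functions under test follow roughly the pattern sketched below; the file paths, event format and handler signature are illustrative assumptions, and the model is loaded outside the handler so that only cold starts pay the initialization cost:

import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# Loaded once per container, outside the handler, so that warm invocations
# pay only the inference cost, not the model-loading cost.
tokenizer = AutoTokenizer.from_pretrained('model/tokenizer')   # placeholder path
session = ort.InferenceSession('model/sentiment.onnx')          # placeholder path

def handler(event, context):
    # AWS Lambda-style entry point; the event format is an assumption.
    encoded = tokenizer(event['text'], return_tensors='np',
                        truncation=True, padding=True)
    inputs = {i.name: encoded[i.name] for i in session.get_inputs()}
    logits = session.run(None, inputs)[0]
    return {'label': int(np.argmax(logits, axis=-1)[0])}
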
Before recording measurements we let the deployed model evaluate a small subsample of data in order to keep the infrastructure in a \"warm\" state. This was done in order to estimate the real-life inference time, i.e. to avoid biasing the inference results by initialization time of the service itself. From the results described in Table 4 we can see that using both the AWS and GCP platforms, we can easily reach the 0.99 quantile of execution time on the order of 100 ms for both tasks and models. Figure 3 also lets us observe that the execution time in AWS Lambda decreases with increasing RAM. This is expected, as both AWS Lambda and GCP Cloud Functions automatically allocate more vCPU with more RAM.", "cite_spans": [], "ref_spans": [ { "start": 494, "end": 501, "text": "Table 4", "ref_id": null }, { "start": 662, "end": 670, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Deployment evaluation", "sec_num": "6" }, { "text": "The serverless deployments are also costeffective. The total costs of 1M predictions, taking 100 ms each and using 1 GB of RAM, are around $2 on both AWS and GCP, whereas the cheapest AWS EC2 virtual machine with 1 GB of RAM costs $8 per month.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deployment evaluation", "sec_num": "6" }, { "text": "We present a novel approach of deploying domainspecific BERT-style models in a serverless environment. To fit the models within its limits, we use knowledge distillation and fine-tune them on domain-specific datasets. Our experiments show that using this process we are able to produce much smaller models at the expense of a minor decrease in their performance. The evaluation of the deployment of these models shows that it can reach latency levels appropriate for production environments, while being cost-effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Although there certainly exist platforms and deployments that can handle much higher load (often times with smaller operational cost (Zhang et al., 2019) ), the presented solution requires minimal infrastructure effort, making the team that trained these models completely self-sufficient. This makes it ideal for smaller-scale deployments, which can be used to validate the model's value. The smaller, distilled models created in the process can then be used in more scalable solutions, should the cost or throughput prove inadequate during test deployments.", "cite_spans": [ { "start": 133, "end": 153, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Recently, a new way of deployment was added, allowing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "About 82% of the dataset were Neutral examples, 10% Negative and 8% Positive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mart\u00edn Abadi et al. 2015. TensorFlow: Large-scale ma- chine learning on heterogeneous systems. 
Software available from tensorflow.org.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Using a microbenchmark to compare function as a service solutions", "authors": [ { "first": "Timon", "middle": [], "last": "Back", "suffix": "" }, { "first": "Vasilios", "middle": [], "last": "Andrikopoulos", "suffix": "" } ], "year": 2018, "venue": "European Conference on Service-Oriented and Cloud Computing", "volume": "", "issue": "", "pages": "146--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timon Back and Vasilios Andrikopoulos. 2018. Us- ing a microbenchmark to compare function as a ser- vice solutions. In European Conference on Service- Oriented and Cloud Computing, pages 146-160. Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Onnx: Open neural network exchange", "authors": [ { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junjie Bai et al. 2019. Onnx: Open neural network exchange. https://github.com/onnx/on nx.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "150 successful machine learning models: 6 lessons learned at booking. com", "authors": [ { "first": "Lucas", "middle": [], "last": "Bernardi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "1743--1751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucas Bernardi et al. 2019. 150 successful machine learning models: 6 lessons learned at booking. com. In Proceedings of the 25th ACM SIGKDD Interna- tional Conference on Knowledge Discovery & Data Mining, pages 1743-1751.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "632--642", "other_ids": { "DOI": [ "10.18653/v1/D15-1075" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Speed matters for google web search", "authors": [ { "first": "Jake", "middle": [], "last": "Brutlag", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jake Brutlag. 2009. 
Speed matters for google web search.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Model compression", "authors": [ { "first": "Cristian", "middle": [], "last": "Bucilu\u01ce", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" }, { "first": "Alexandru", "middle": [], "last": "Niculescu-Mizil", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "535--541", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cristian Bucilu\u01ce, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Pro- ceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data min- ing, pages 535-541.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Semeval-2017 task 1: Semantic textual similarity -multilingual and cross-lingual focused evaluation", "authors": [ { "first": "M", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Mona", "middle": [ "T" ], "last": "Cer", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Diab", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel M. Cer, Mona T. Diab, Eneko Agirre, I\u00f1igo Lopez-Gazpio, and Lucia Specia. 2017. Semeval- 2017 task 1: Semantic textual similarity -multilin- gual and cross-lingual focused evaluation. CoRR, abs/1708.00055.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin et al. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Distilling the knowledge in a neural network", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.02531" ] }, "num": null, "urls": [], "raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Tinybert: Distilling bert for natural language understanding", "authors": [ { "first": "Xiaoqi", "middle": [], "last": "Jiao", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.10351" ] }, "num": null, "urls": [], "raw_text": "Xiaoqi Jiao et al. 2019. Tinybert: Distilling bert for natural language understanding. 
arXiv preprint arXiv:1909.10351.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Functionbench: A suite of workloads for serverless cloud function service", "authors": [ { "first": "Jeongchul", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Kyungyong", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE 12th International Conference on Cloud Computing (CLOUD)", "volume": "", "issue": "", "pages": "502--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeongchul Kim and Kyungyong Lee. 2019. Function- bench: A suite of workloads for serverless cloud function service. In 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), pages 502-504. IEEE.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Evaluation of production serverless computing environments", "authors": [ { "first": "Hyungro", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE 11th International Conference on Cloud Computing (CLOUD)", "volume": "", "issue": "", "pages": "442--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hyungro Lee et al. 2018. Evaluation of production serverless computing environments. In 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), pages 442-450. IEEE.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Exploiting bert for end-to-end aspect-based sentiment analysis", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.00883" ] }, "num": null, "urls": [], "raw_text": "Xin Li et al. 2019. Exploiting bert for end-to-end aspect-based sentiment analysis. arXiv preprint arXiv:1910.00883.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A preliminary review of enterprise serverless cloud computing (function-as-aservice) platforms", "authors": [ { "first": "Theo", "middle": [], "last": "Lynn", "suffix": "" } ], "year": 2017, "venue": "IEEE CloudCom", "volume": "", "issue": "", "pages": "162--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theo Lynn et al. 2017. A preliminary review of en- terprise serverless cloud computing (function-as-a- service) platforms. In 2017 IEEE CloudCom, pages 162-169. IEEE.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "O'Reilly serverless survey 2019: Concerns, what works, and what to expect", "authors": [ { "first": "O'reilly", "middle": [], "last": "Media", "suffix": "" }, { "first": "", "middle": [], "last": "Inc", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "2021--2022", "other_ids": {}, "num": null, "urls": [], "raw_text": "O'Reilly Media, Inc. 2019. O'Reilly serverless survey 2019: Concerns, what works, and what to expect. https://www.oreilly.com/radar/orei lly-serverless-survey-2019-concern s-what-works-and-what-to-expect/. Accessed: 2021-01-12.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wal- lach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. 
Garnett, editors, Advances in Neu- ral Information Processing Systems 32, pages 8024- 8035. Curran Associates, Inc.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Sentencebert: Sentence embeddings using siamese bertnetworks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.10084" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. arXiv preprint arXiv:1908.10084.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "86--96", "other_ids": { "DOI": [ "10.18653/v1/P16-1009" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Energy and policy considerations for deep learning in nlp", "authors": [ { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.02243" ] }, "num": null, "urls": [], "raw_text": "Emma Strubell et al. 2019. Energy and policy con- siderations for deep learning in nlp. arXiv preprint arXiv:1906.02243.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Mobilebert: a compact taskagnostic bert for resource-limited devices", "authors": [ { "first": "Zhiqing", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.02984" ] }, "num": null, "urls": [], "raw_text": "Zhiqing Sun et al. 2020. Mobilebert: a compact task- agnostic bert for resource-limited devices. 
arXiv preprint arXiv:2004.02984.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Augmented sbert: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks", "authors": [ { "first": "Nandan", "middle": [], "last": "Thakur", "suffix": "" }, { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.08240" ] }, "num": null, "urls": [], "raw_text": "Nandan Thakur, Nils Reimers, Johannes Daxenberger, and Iryna Gurevych. 2020. Augmented sbert: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks. arXiv preprint arXiv:2010.08240.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "OPUS-MT -Building open translation services for the World", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" }, { "first": "Santhosh", "middle": [], "last": "Thottingal", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT -Building open translation services for the World. In Proceedings of the 22nd Annual Con- ferenec of the European Association for Machine Translation (EAMT), Lisbon, Portugal.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Pay-per-request deployment of neural network models using serverless architectures", "authors": [ { "first": "Zhucheng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Mengping", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", "volume": "", "issue": "", "pages": "6--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhucheng Tu, Mengping Li, and Jimmy Lin. 2018. Pay-per-request deployment of neural network mod- els using serverless architectures. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Demonstrations, pages 6-10.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Peeking behind the curtains of serverless platforms", "authors": [ { "first": "Liang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "2018 {USENIX} Annual Technical Conference ({USENIX}{ATC} 18)", "volume": "", "issue": "", "pages": "133--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Wang et al. 2018. Peeking behind the curtains of serverless platforms. In 2018 {USENIX} Annual Technical Conference ({USENIX}{ATC} 18), pages 133-146.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "No training required: Exploring random encoders for sentence classification", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.10444" ] }, "num": null, "urls": [], "raw_text": "John Wieting and Douwe Kiela. 2019. 
No training required: Exploring random encoders for sentence classification. arXiv preprint arXiv:1901.10444.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": { "DOI": [ "10.18653/v1/N18-1101" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Lhoest", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Mark: Exploiting cloud services for costeffective, slo-aware machine learning inference serving", "authors": [ { "first": "Chengliang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Minchen", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2019, "venue": "2019 {USENIX} Annual Technical Conference ({USENIX}{ATC} 19)", "volume": "", "issue": "", "pages": "1049--1062", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chengliang Zhang, Minchen Yu, Wei Wang, and Feng Yan. 2019. Mark: Exploiting cloud services for cost- effective, slo-aware machine learning inference serv- ing. In 2019 {USENIX} Annual Technical Confer- ence ({USENIX}{ATC} 19), pages 1049-1062.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Results of performance tests of trained models deployed in AWS Lambda. Execution time is denoted in miliseconds (ms). TB stands for TinyBERT, MB for MobileBERT. q50, q95 and q99 denote the 0.5, 0.95 and 0.99 quantiles, respectively.", "num": null, "type_str": "figure" }, "TABREF1": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
Table 1: Limitations of the three main serverless providers: Amazon Web Services (AWS), Microsoft Azure (Azure) and Google Cloud Platform (GCP).
3 Serverless environments
Serverless environments offer a convenient and affordable way of deploying a small piece of code. A survey by O'Reilly Media (O'Reilly
" } } } }