{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:15:54.198399Z"
},
"title": "Training Language Models under Resource Constraints for Adversarial Advertisement Detection",
"authors": [
{
"first": "Eshwar",
"middle": [
"Shamanna"
],
"last": "Girishekar",
"suffix": "",
"affiliation": {},
"email": "geshwar@amazon.com"
},
{
"first": "Shiv",
"middle": [],
"last": "Surya",
"suffix": "",
"affiliation": {},
"email": "shisurya@amazon.com"
},
{
"first": "Nishant",
"middle": [],
"last": "Nikhil",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dyut",
"middle": [
"Kumar"
],
"last": "Sil",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Negi",
"suffix": "",
"affiliation": {},
"email": "suminegi@amazon.com"
},
{
"first": "Aruna",
"middle": [],
"last": "Rajan",
"suffix": "",
"affiliation": {},
"email": "rajarna@amazon.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Advertising on e-commerce and social media sites deliver ad impressions at web scale on a daily basis driving value to both shoppers and advertisers. This scale necessitates programmatic ways of detecting unsuitable content in ads to safeguard customer experience and trust. This paper focusses on techniques for training text classification models under resource constraints, built as part of automated solutions for advertising content moderation. We show how weak supervision, curriculum learning and multilingual training can be applied effectively to fine-tune BERT and its variants for text classification tasks in conjunction with different data augmentation strategies. Our extensive experiments on multiple languages show that these techniques detect adversarial ad categories with a substantial gain in precision at high recall threshold over the baseline.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Advertising on e-commerce and social media sites deliver ad impressions at web scale on a daily basis driving value to both shoppers and advertisers. This scale necessitates programmatic ways of detecting unsuitable content in ads to safeguard customer experience and trust. This paper focusses on techniques for training text classification models under resource constraints, built as part of automated solutions for advertising content moderation. We show how weak supervision, curriculum learning and multilingual training can be applied effectively to fine-tune BERT and its variants for text classification tasks in conjunction with different data augmentation strategies. Our extensive experiments on multiple languages show that these techniques detect adversarial ad categories with a substantial gain in precision at high recall threshold over the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "All advertisements on e-commerce and social media platforms must be moderated to ensure regulatory and ethical standards in countries where they are being served. A tiered moderation workflow with automated components like cached lookup, ML models, rule based annotators complement human experts to ensure reliable content moderation for ads created by advertisers while scaling to ecommerce advertising volumes. The advertising platform currently enables ads to be created in various media formats like text, images and videos. In this work, we focus on detecting adversarial ads in one broad class of ads, where engagement is driven primarily through text and images. Such ads on e-commerce site serve as a casing for the product being advertised. The casing includes product text and image attributes along with optional custom captions provided by the advertiser. It is under the purview of moderation to check whether * Work done when at Amazon an ad contains prohibited content. Any ad containing prohibited content can have an adverse impact on the shopper experience and hence needs to be prevented from showing up. See Section 2.1 for a broad overview of the adversarial ad categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on techniques we use to train NLP models built as a part of this system. Training any ML model requires a good quality dataset that is representative of the policy being enforced. The quality of data available to train models targeting a defect, say detection of \"adult and objectionable content\" depends on several factors. Typically occurrences of such products are rare but the impact of such an ad on shopper experience is adverse. The uncommonness of these violations makes curating large in-domain monolingual corpora difficult. This problem is compounded in low resource languages where there are limited linguistic resources and the rarity of these violations are even more skewed. Further, it is expensive and time consuming to gather more labeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Through this paper, we show different ways to train generalised language models when we have limited labeled data. We suggest various ways for data augmentation and empirically provide evidence suggesting when each of the approaches works best. We explore how we can leverage the product catalogue and user behaviour in weak and semi-weak supervision, curriculum learning and multilingual training strategies to train generalised language models like BERT (Devlin et al., 2019) and its variants. Our experiments show :",
"cite_spans": [
{
"start": 456,
"end": 477,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Weak supervision for unlabelled data in the target domain provides an average gain of 10.88% in precision across languages. \u2022 Curriculum strategies to augment labeled data from resource rich language by translation improves average true negative rate(TNR) by 24.25% in low resource setting. \u2022 Multilingual training using labeled data in any available languages provides average gain of 24.32% in TNR over the baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Background: Content moderation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Online advertising platforms typically enable advertisers to create ads in various media formats like text, images and videos. Here we provide an overview of the broad categories which are generally restricted from advertising across these platforms. Sculley et al. (2011) describe some of the adversarial categories which can compromise the user safety. These include ads which promote unsafe and illegal content or products. In addition to these categories, promotion of adult, profane, hate inciting and tobacco related products/content are restricted as well. All of these adversarial categories are under the purview for content moderation.",
"cite_spans": [
{
"start": 251,
"end": 272,
"text": "Sculley et al. (2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scope of content moderation",
"sec_num": "2.1"
},
{
"text": "We primarily featurise the text attributes of the product in catalogue such as product title, description and optional custom text provided by the advertiser to detect aforementioned unsuitable content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scope of content moderation",
"sec_num": "2.1"
},
{
"text": "A very small fraction of ads belong to the restricted categories referenced in Section 2.1. We perform all experiments on 5 such semantic categories shown in Table 1 . For the positive class(defective ad), we consider all ads labelled by human experts. We split this data into train and validation set using multi-label stratification (Sechidis et al. (2011) ; Szyma\u0144ski and Kajdanowicz (2017)) on catalogue categorisation of the product. To enable training, we restrict the size of negative class by restricting the sample size to utmost 100 times the size of the positive class and augment it with 10% of hard negative samples that were caught by existing signals but approved by human experts. The validation set is used to tune model hyperparameters and determine the stopping criterion. We maintain a separate temporally distinct test set replicating production setting. A similar approach is taken when creating train and test set for low resource languages.",
"cite_spans": [
{
"start": 335,
"end": 358,
"text": "(Sechidis et al. (2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.2"
},
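{
"text": "To make the dataset construction above concrete, the following is a minimal sketch in Python. It assumes pandas DataFrames with hypothetical columns is_defective and is_hard_negative, and uses scikit-multilearn's iterative_train_test_split for the multi-label stratified split; the 100x negative cap and 10% hard-negative share follow the text, while everything else is illustrative rather than the production pipeline.\n\n# Minimal sketch of the train/validation construction described above.\n# Column names and helper structure are illustrative assumptions.\nimport pandas as pd\nfrom skmultilearn.model_selection import iterative_train_test_split\n\ndef build_training_pool(ads: pd.DataFrame, seed: int = 0) -> pd.DataFrame:\n    pos = ads[ads.is_defective == 1]\n    neg = ads[(ads.is_defective == 0) & (ads.is_hard_negative == 0)]\n    hard = ads[(ads.is_defective == 0) & (ads.is_hard_negative == 1)]\n    # Cap easy negatives at 100x the size of the positive class.\n    neg = neg.sample(n=min(len(neg), 100 * len(pos)), random_state=seed)\n    # Augment with roughly 10% hard negatives (caught by existing signals, approved by experts).\n    hard = hard.sample(n=min(len(hard), len(neg) // 10), random_state=seed)\n    return pd.concat([pos, neg, hard]).sample(frac=1.0, random_state=seed)\n\ndef stratified_split(pool: pd.DataFrame, category_cols, test_size: float = 0.2):\n    # Multi-label stratification on the catalogue categorisation of the product.\n    X = pool.index.to_numpy().reshape(-1, 1)\n    y = pool[category_cols].to_numpy()\n    X_tr, y_tr, X_va, y_va = iterative_train_test_split(X, y, test_size=test_size)\n    return pool.loc[X_tr.ravel()], pool.loc[X_va.ravel()]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.2"
},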
{
"text": "BERT and M-BERT For all the experiments we make use of BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) , a transformer based attention model that encodes an entire sequence at once using multiple attention based encoder layers. We use a linear classification layer applied on max-pooled version of last four attention layer outputs of BERT and finetune the model on limited labeled data. Because of the skew in the labels, we weight the binary cross entropy loss inversely based on label frequency and clip the scaling factor to improve stability of training. The model is trained using textual attributes of the products. Adam (Kingma and Ba, 2014) optimiser is used and the maximum sequence length is restricted to 512 during training and inference. For low resource languages we make use of M-BERT. We decide the hyper-parameters of the models by their performance on the validation set and maintain these hyper parameters across ablative experiments. Word embedding based text classifier In the multi-lingual setting, we use another baseline. This is a linear classifier based on word embeddings similar to the setup in (Shen et al., 2018) . We use fastext (Bojanowski et al., 2017) embeddings for German to get the word embeddings and combine them by taking a weighted average of the embeddings as described in Arora et al. (2017) . This removes the special direction to generate the sentence embedding. We also obtain max-pooled embeddings that extracts salient features along vector dimensions. This is later stacked to the reference weighted average embedding and used to train a logistic regression classifier with the limited labeled data. We refer to this model as BOE_LIN.",
"cite_spans": [
{
"start": 118,
"end": 139,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 1161,
"end": 1180,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 1198,
"end": 1223,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 1353,
"end": 1372,
"text": "Arora et al. (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "2.3"
},
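{
"text": "A minimal sketch of the BERT classification head described above, using PyTorch and the HuggingFace transformers library: a linear layer over a max-pooled version of the last four encoder layer outputs, trained with a binary cross-entropy loss weighted inversely to label frequency and clipped for stability. The class name, the checkpoint and the clipping cap are illustrative assumptions, not the exact production model.\n\nimport torch\nimport torch.nn as nn\nfrom transformers import AutoModel\n\nclass BertDefectClassifier(nn.Module):\n    def __init__(self, model_name='bert-base-uncased'):\n        super().__init__()\n        self.encoder = AutoModel.from_pretrained(model_name, output_hidden_states=True)\n        self.classifier = nn.Linear(self.encoder.config.hidden_size, 1)\n\n    def forward(self, input_ids, attention_mask):\n        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)\n        # Stack the last four encoder layers and max-pool over layers and tokens\n        # (for simplicity, padding tokens are not masked in this sketch).\n        last_four = torch.stack(out.hidden_states[-4:], dim=0)  # (4, batch, seq, hidden)\n        pooled = last_four.max(dim=0).values.max(dim=1).values  # (batch, hidden)\n        return self.classifier(pooled).squeeze(-1)  # logits\n\ndef weighted_bce_loss(logits, labels, pos_frac, max_scale=50.0):\n    # Weight the positive class inversely to its frequency; clip the scale for training stability.\n    pos_weight = torch.clamp(torch.tensor((1.0 - pos_frac) / pos_frac), max=max_scale)\n    return nn.functional.binary_cross_entropy_with_logits(logits, labels.float(), pos_weight=pos_weight)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "2.3"
},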
{
"text": "We explore various techniques that can be used to train generalised language models(GLM) like BERT and multilingual variants with significant performance gains over baseline models described in Section 2.3. We look at resource constraints during training of machine learning models in a supervised setting attributed to the following cases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning BERT under low resource constraints",
"sec_num": "3"
},
{
"text": "\u2022 Lack of labeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning BERT under low resource constraints",
"sec_num": "3"
},
{
"text": "\u2022 Lack of large in-domain monolingual corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning BERT under low resource constraints",
"sec_num": "3"
},
{
"text": "\u2022 Linguistic resources insufficient for building reliable statistical NLP applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning BERT under low resource constraints",
"sec_num": "3"
},
{
"text": "We leverage product catalog to source data for weak and semi-weak supervision training in monolingual setting. We also explore how curriculum strategies and multilingual training can benefit training text classifiers for low resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning BERT under low resource constraints",
"sec_num": "3"
},
{
"text": "Our experiment show that generalised language models like BERT or multilingual variants like M-BERT can be trained using these techniques with significant performance gains over baseline model described in Section 2.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning BERT under low resource constraints",
"sec_num": "3"
},
{
"text": "Semi-Weak-Supervision",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-Supervision and",
"sec_num": "3.1"
},
{
"text": "We employ two approaches as described in Yalniz et al. (2019) One is the conventional semi-supervised approach using teacher-student paradigm. The teacher model is trained using the limited labelled data (or strong data) and then used to get predictions for the unlabelled data. Top k% of the predicted samples for each of the class are used to pre-train the new student model. The student model is further fine-tuned using the limited labelled data. The second approach is semi-weaklysupervised approach. Here, the sourced data associated with weak labels is used to pre-train the teacher model before fine-tuning on the limited labelled data. Again top k% predicted samples by the this teacher model is used to pre-train student network prior to fine-tuning on the strong data. Yalniz et al. (2019) apply these two techniques for image and video classification tasks and achieve SOTA results using semi-weak-supervision. We explore these approaches applied to text classification task using a GLM like BERT.",
"cite_spans": [
{
"start": 41,
"end": 61,
"text": "Yalniz et al. (2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-Supervision and",
"sec_num": "3.1"
},
{
"text": "In this section we describe how we augment unlabelled/weakly labeled data. We leverage user behavioural data by using internal search engine to source products relevant to different categories from huge product catalog. We can query search using generic text phrases and pre-existing catalogue categorisation (CC). So we design relevant text phrases and pre-existing catalogue categorisation for a defect of interest. These attributes are filtered by a keyword list which is a combination of a curated list and word list sourced from models that use BoW as a feature. Table 1 provides the statistics of the proportion of number of products sourced using different approaches. We use the augmentation for only defective class since the class skew is several orders larger. Once we have the augmented data for the defective category we treat it as unlabelled for semi-supervised setup. The teacher model which is BERT is trained only on the strong data. In case of very limited data like CAT4-5 we make use of fasttext classifier as teacher. The teacher model is used to score the augmented samples. Top k% of the augmented data based on model scores are picked to pre-train the new student BERT model. Later the student BERT model is fine-tuned using the strong labelled data. When training both teacher and student models we validate the model after each epoch on the same validation set and use the validation score as the stopping criteria.",
"cite_spans": [],
"ref_spans": [
{
"start": 568,
"end": 575,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Semi-Supervised(SS) Methodology",
"sec_num": "3.1.1"
},
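{
"text": "The selection step above can be summarised with a short sketch: score the unlabelled candidates with the teacher, keep the top k% per class as pseudo-labels, pre-train the student on them and then fine-tune the student on the strong data. The train and predict helpers referenced in the comments are assumptions for illustration only.\n\nimport numpy as np\n\ndef select_top_k_percent(texts, teacher_scores, k=10):\n    # teacher_scores: P(defective) assigned by the teacher model to each unlabelled text.\n    n_keep = max(1, int(len(texts) * k / 100))\n    order = np.argsort(teacher_scores)\n    top_pos = [texts[i] for i in order[-n_keep:]]  # most confidently defective\n    top_neg = [texts[i] for i in order[:n_keep]]   # most confidently non-defective\n    return top_pos, top_neg\n\n# Hypothetical driver, assuming train(model, data) and predict(model, texts) helpers:\n# teacher = train(bert_model(), strong_data)\n# pos, neg = select_top_k_percent(unlabelled_texts, predict(teacher, unlabelled_texts), k=10)\n# student = train(bert_model(), pseudo_labelled(pos, neg))  # pre-train on pseudo-labels\n# student = train(student, strong_data)                     # fine-tune on strong data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-Supervised(SS) Methodology",
"sec_num": "3.1.1"
},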
{
"text": "Here we treat the augmented data as weakly labeled data and use it to pre-train teacher model before fine-tuning it with strong labeled data. This teacher model is used to score the top k% samples of the weakly labeled data which is used to pre-train new student model which is later fine-tuned using strong data. Here again while pre-training and fine-tuning teacher and student models we validate the model after each epoch on the same validation set and use the validation score as the stopping criteria.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-Weak-Supervised(SWS) Methodology",
"sec_num": "3.1.2"
},
{
"text": "We take the exact same approach of augmenting data for low resource languages and train the M-BERT model. With low resource languages we face two challenges. First, labelled data available here is less compared to English(EN). In German(DE) and French(FR), the scale of the positive class is of order 0.02-0.15 compared to scale of different defect categories for EN reported in Table 1 . Second, keywords available for sourcing weakly labeled data is less which affects quality of sourcing weak data.",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 386,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Extension to low resource languages",
"sec_num": "3.1.3"
},
{
"text": "To address these challenges we explore curriculum learning and multilingual training for low resource setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to low resource languages",
"sec_num": "3.1.3"
},
{
"text": "In the above section we discussed augmenting data using weak signals. Here we explore how we can utilise large amounts of labeled data available in resource rich languages such as EN. We translate the ad creatives available in EN to the target language. Hence forth, this data is referred to as translated data. A trivial approach to utilise this data for tuning the model is to combine the strong and translated data and randomly sample mini-batches (B_T L RS ) from the unified set while training. Another possibility is to use the translated data to pretrain the classifier and fine-tune it with the strong data in target domain (B_T L F T ). Here, during every epoch, we initially train the model with the mini-batches sampled from the translated data followed by sampling mini-batches from strong data. This clearly has an advantage over the earlier approach as it helps model adapt to the target domain and avoid domain shift arising from the translation engine employed. We also explore an approach leveraging curriculum learning that is agnostic of the distinction between translated and strong data for training the M-BERT model. Curriculum learning (Hacohen and Weinshall, 2019) involves using the prior knowledge of the difficulty of the training samples to sample training mini-batch. To rank the difficulty of the training sample (x i , y i ) we need a scoring function. Scoring function f : X \u2192 R is any function which scores the difficulty of a given training sample. If f (x i , y i ) > f (x j , y j ) then (x i , y i ) is more difficult than (x j , y j ). We also use a pacing function (Hacohen and Weinshall, 2019) which determines the sequence of subsets X 1 , .., X m \u2286 X of size g i from which mini-batches {B i } M i=1 are sampled. These are generally monotonically increasing functions so the likelihood of the easier samples decrease over time.",
"cite_spans": [
{
"start": 1159,
"end": 1188,
"text": "(Hacohen and Weinshall, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum for leveraging resource rich domains",
"sec_num": "3.2"
},
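{
"text": "A schematic sketch of the two translation-based variants above: B_TL_RS samples mini-batches from the union of translated and strong data, while B_TL_FT walks through translated mini-batches before strong mini-batches within each epoch so the model adapts to the target domain last. The train_step helper and the DataLoader setup are assumptions for illustration.\n\nfrom torch.utils.data import ConcatDataset, DataLoader\n\ndef train_tl_rs(model, translated_ds, strong_ds, epochs, train_step, batch_size=32):\n    # Random sampling over the unified set (B_TL_RS).\n    mixed = DataLoader(ConcatDataset([translated_ds, strong_ds]), batch_size=batch_size, shuffle=True)\n    for _ in range(epochs):\n        for batch in mixed:\n            train_step(model, batch)\n\ndef train_tl_ft(model, translated_ds, strong_ds, epochs, train_step, batch_size=32):\n    # Translated data first, then strong target-domain data, in every epoch (B_TL_FT).\n    translated = DataLoader(translated_ds, batch_size=batch_size, shuffle=True)\n    strong = DataLoader(strong_ds, batch_size=batch_size, shuffle=True)\n    for _ in range(epochs):\n        for batch in translated:\n            train_step(model, batch)\n        for batch in strong:\n            train_step(model, batch)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum for leveraging resource rich domains",
"sec_num": "3.2"
},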
{
"text": "In our case, we use BOE_LIN (See Section 2.3) as our scoring function-a proxy for hardness of the sample. Samples with confident predictions by BOE_LIN for positive and negative classes are considered easy while hardness increases as the samples are closer to boundary of separation. We initially pick the easier samples for the first x iterations. We augment the training samples with difficult samples progressively for every x iterations till all the data is seen by the model. In our case, we consider x = 2 and split the data into 5 sets of increasing difficulty. Iterations 1-2 are trained using the set having the most easy samples defined by the scoring function f . In iterations 3-4, we take the initial two sets of easy samples. In such a progression, the model sees the entire dataset in iterations 9-10. We use early stopping to choose the model at iteration i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum for leveraging resource rich domains",
"sec_num": "3.2"
},
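{
"text": "A minimal sketch of the hardness curriculum (B_TL_CL): samples are bucketed by a BOE_LIN-based difficulty score (assumed here to be larger for samples closer to the decision boundary), and the training pool grows by one bucket every x = 2 iterations so that the full dataset is visible by iterations 9-10. The scoring itself is taken as given.\n\nimport numpy as np\n\ndef curriculum_pools(hardness_scores, n_buckets=5):\n    order = np.argsort(hardness_scores)  # easiest samples first\n    buckets = np.array_split(order, n_buckets)\n    # Pool i is the union of buckets 0..i, so easy samples remain available as harder ones arrive.\n    return [np.concatenate(buckets[:i + 1]) for i in range(n_buckets)]\n\ndef pool_for_iteration(pools, iteration, x=2):\n    # Iterations 1-2 use pool 0, iterations 3-4 use pool 1, ..., iterations 9-10 use the full data.\n    idx = min((iteration - 1) // x, len(pools) - 1)\n    return pools[idx]\n\n# Usage sketch: at iteration t, sample mini-batches only from the indices in\n# pool_for_iteration(pools, t) and early-stop on the validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum for leveraging resource rich domains",
"sec_num": "3.2"
},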
{
"text": "In Section 3.1 -3.2, we explored methods of augmenting data from external sources for the same language i.e they were trained on monolingual data. However, in weak supervision, the quality of weak data is contingent on sourcing technique used. Using translated data from source domains risks introducing semantic drift due to inaccuracies in the translation engine used. Advertisers create ads for different markets and we have limited data in French(FR), Spanish(ES), Italian(IT) apart from English(EN) and German(DE). To mitigate these challenges, we explore multilingual training of M-BERT leveraging data from different languages to train a classifier for the target DE language thus avoiding sourcing technique to augment data. Pires et al. (2019) show that M-BERT is good at zero shot cross lingual transfer where task specific text in one language is used for fine-tuning the model for a different target language. They further show that the transfer is more pronounced when there is more lexical overlap between the languages. They also show that transfer works with zero lexical overlap when the two languages are typologically similar i.e the ordering of subject, object and verbs among other parts of speech in a sentence. In our experiments we mainly rely on the lexical similarity between languages for training M-BERT. Table 4 (Wikipedia contributors, 2004) provides the lexical similarity between the languages for which we have labeled data. Lexical similarity score of 1 would mean total overlap between vocabularies and 0 would mean no overlap between vocabularies.",
"cite_spans": [
{
"start": 733,
"end": 752,
"text": "Pires et al. (2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 1333,
"end": 1341,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Multi-Lingual training of M-BERT",
"sec_num": "3.3"
},
{
"text": "From entries for lexical similarity in Table 4 , we observe that DE is lexically most similar to EN followed by FR. In case of missing values, we consider the corresponding languages as lexically farthest to the target language. Since M-BERT is trained on monolingual corpora and the abovementioned 5 languages are among them, the vocabulary of M-BERT would have all the alphabets from these languages. On the basis of results evidenced in Pires et al. (2019) , we hypothesise that the zero shot transfer is more likely among similar lexical languages and devise our multi-language training of M-BERT in the following manner. We take the labeled data available in 5 languages and sort them based on increasing lexical similarity with the target language. For target language DE, the ordering would be ES, IT, FR, EN, DE. We feed all the data in the aforementioned ordering and progressively drop the lexically farthest language every x iterations until we are only left with the target language. In our case we set x = 2 and train the M-BERT. We generally stop training the model after 10 iterations since we do not observe significant gains beyond this.",
"cite_spans": [
{
"start": 440,
"end": 459,
"text": "Pires et al. (2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Multi-Lingual training of M-BERT",
"sec_num": "3.3"
},
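{
"text": "The schedule above can be written down as a small sketch, using the Table 4 similarities for target DE and treating missing values as 0 (lexically farthest): languages are sorted by increasing similarity to the target and the farthest remaining language is dropped every x = 2 iterations. The similarity values are copied from Table 4; the function itself is illustrative.\n\n# Sketch of the B_ML_LEX language schedule for a DE target.\nLEXICAL_SIM_TO_DE = {'ES': 0.0, 'IT': 0.0, 'FR': 0.29, 'EN': 0.6, 'DE': 1.0}  # missing values -> 0\n\ndef language_schedule(sim=LEXICAL_SIM_TO_DE, x=2, total_iters=10):\n    ordered = sorted(sim, key=lambda lang: sim[lang])  # lexically farthest first\n    schedule = []\n    for it in range(1, total_iters + 1):\n        n_dropped = min((it - 1) // x, len(ordered) - 1)\n        schedule.append(ordered[n_dropped:])  # languages whose data is used at iteration it\n    return schedule\n\n# For DE this yields [ES, IT, FR, EN, DE] at iterations 1-2, [IT, FR, EN, DE] at 3-4,\n# and only [DE] at iterations 9-10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Lingual training of M-BERT",
"sec_num": "3.3"
},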
{
"text": "In all experiments, we track model performance using precision and recall. Precision indicates the fraction of ads correctly rejected by model. Recall indicates the fraction of true defective products rejected by the model for a particular category. SS and SWS for EN Table 2 shows the improvement in precision for all the models built using the semi-supervision and semi-weak-supervised approaches. We see semi-supervision(B_SS) consistently perform better than the baseline, BERT finetuned with strong data, across all categories. For CAT1-2, we observe a substantial lift in precision over baseline compared to other categories. This is attributed to strong sourcing characteristics for these categories observed in Table 1 . We observe significant gains by SWS(B_SWS) models especially in low resource categories like CAT3-5. For CAT3, CAT4 and CAT5 we see 6-8% better precision respectively. Results for low resource languages In case of low resource languages the amount of defective ads is much lesser and is of order 0.02-0.15 as called out earlier. Since the quantity of positive class is drastically low, precision does not always indicate the true gains seen by our models. Hence we also report true negative rate(TNR) which is the % of non-defective ads rightly approved by our models. Table 3 provides the relative improvements in metrics of all the models in comparison to baseline BOE_LIN. The complex and heavily parameterised M-BERT(B) model achieves a significant increase in TNR despite dearth of training data. From performance numbers in Table 3 , we see that fine-tuning(B_T L F T ) the model with target domain after pre-training with translated data is better than random sampling(B_T L RS ) of mini-batches across strong and translated data. Plain augmentation of data through translation without any curriculum during training the model might not always show gains as indicated by M-BERT's performance. However, introducing a curriculum(B_T L CL ) based on the difficulty of the training samples outperforms the initial two approaches. Table 3 also shows performance of weak supervision techniques ( see Section 4.1). Models trained using both SS(B_SS) and SWS(B_SW S) approaches outperform the model which was trained only using the strong data.",
"cite_spans": [],
"ref_spans": [
{
"start": 268,
"end": 275,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 719,
"end": 726,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1298,
"end": 1305,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1559,
"end": 1566,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 2062,
"end": 2069,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
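{
"text": "For completeness, a sketch of how the reported numbers can be computed with scikit-learn: an operating threshold is chosen as the highest score threshold whose recall on defective ads still meets a target level, and precision and TNR are then reported at that threshold. The target_recall value here is a placeholder, not the actual production threshold.\n\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix, precision_recall_curve\n\ndef metrics_at_recall(y_true, scores, target_recall=0.95):\n    _, recall, thresholds = precision_recall_curve(y_true, scores)\n    # thresholds has one entry fewer than recall; keep operating points meeting the recall target.\n    ok = np.where(recall[:-1] >= target_recall)[0]\n    t = thresholds[ok[-1]] if len(ok) else thresholds[0]\n    y_pred = (np.asarray(scores) >= t).astype(int)\n    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()\n    return {'threshold': float(t),\n            'precision': tp / max(tp + fp, 1),\n            'recall': tp / max(tp + fn, 1),\n            'tnr': tn / max(tn + fp, 1)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},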
{
"text": "We observe the best performance for the model (B_M L LEX ) leveraging data from multiple languages and trained in lexical order fashion. Since DE is lexically similar to EN, the larger training data in EN aided the model performance in this setting. We also rerun the experiments with FR with same setting and results are provided in Table 3 . If we observe the lexical similarity in Table 4 , FR is most similar to IT and ES and farther away from EN which has the most amount of labeled data. Hence, we do not see the similar kind of gains for FR as seen in DE which is lexically closer to EN. For FR the model trained using curriculum (B_T L CL ) based on the hardness of the sample performs the best. We observe a similar trend in FR for rest of the approaches.",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 342,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 385,
"end": 392,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "We ablate the effects of curriculum learning based on increasing difficulty using models trained in two control conditions. (a) Anti-curriculum learning (B_T L ACL ) using scoring function f = \u2212f where harder samples are fed first and (b) random curriculum (B_T L RCL ) where scoring function randomly scores the training samples. As seen from the Table 3 anti-curriculum and random curriculum are not as effective as the curriculum of increasing hardness. Further, random scoring function results in significant degradation of performance when compared to approaches employing a curriculum. Similar trends are observed for respective models trained in FR as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 355,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Ablations",
"sec_num": "4.1"
},
{
"text": "We further conduct ablations to rule out any other factors contributing to the gain in recall from curriculum based on lexical similarity. We perform two other experiments where we train the model in similar manner but feed the languages in reverse lexical similarity order(B_M L REV LEX ) and random order (B_M L RAN D LEX ). However, in both the experiments we feed the target language at the end to minimise domain shift. We see that the model trained in the lexical similarity order beats the performance of the other two models in Table 3 . We validate statistical significance of gains from both lexical and hardness curricula using the Mc-Nemar's Test (Dietterich, 1998; McNemar, 1947) (Raschka, 2018) . The gains through both curriculum are statistically significant as p-value is < 0.05 for both DE and FR.",
"cite_spans": [
{
"start": 644,
"end": 678,
"text": "Mc-Nemar's Test (Dietterich, 1998;",
"ref_id": null
},
{
"start": 679,
"end": 693,
"text": "McNemar, 1947)",
"ref_id": "BIBREF13"
},
{
"start": 694,
"end": 709,
"text": "(Raschka, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 536,
"end": 544,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Ablations",
"sec_num": "4.1"
},
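{
"text": "A sketch of the significance check mentioned above, using the mlxtend implementation (Raschka, 2018): a 2x2 contingency table records on which test examples the two paired models are correct, and McNemar's test is applied to that table. Variable names are illustrative.\n\nimport numpy as np\nfrom mlxtend.evaluate import mcnemar, mcnemar_table\n\ndef compare_models(y_true, preds_curriculum, preds_baseline, alpha=0.05):\n    # Contingency table of correct/incorrect decisions for the two models on the same test set.\n    tb = mcnemar_table(y_target=np.asarray(y_true),\n                       y_model1=np.asarray(preds_curriculum),\n                       y_model2=np.asarray(preds_baseline))\n    chi2, p = mcnemar(ary=tb, corrected=True)\n    return {'chi2': float(chi2), 'p_value': float(p), 'significant': p < alpha}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablations",
"sec_num": "4.1"
},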
{
"text": "We have explored multiple ways of training a GLM and it's multilingual variant in low resource settings. When large in-domain monolingual corpora is present but labeled data is limited, sourcing weak data applied in semi and semi-weak supervision training improves model performance consistently. Curricula are useful in resource constrained settings. Multilingual training on a lexical similarity based curriculum is useful when target language is lexically closer to resource rich languages. Alternate curriculum like sample hardness is useful in low resource languages which are lexically distant to resource rich language such as EN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Lately, there has been exponential progress in generating efficient embeddings for various natural language processing(NLP) tasks using language models (Radford et al. (2019) ; Liu et al. (2019) ). BERT (Devlin et al., 2019) based embeddings achieved SOTA results in eleven NLP tasks at the time of its release. Devlin et al. (2019) also release a multilingual version of BERT(M-BERT), pre-trained using monolingual corpora of 104 different languages. M-BERT is also surprisingly good at zero shot transfer between languages as shown by Pires et al. (2019) . Prior to and in parallel to M-BERT multiple works have been done for multilingual NLP tasks (Ruder et al., 2019) . LASER described in Artetxe and Schwenk (2019) achieve language independent representation by having a single encoder and decoder which are shared by all language pairs for the translation task. Conneau and Lample (2019) propose using parallel data to train translation language model as an extension to M-BERT. release XLM-R which is pretrained using 100 languages using much larger corpus compared to M-BERT.",
"cite_spans": [
{
"start": 152,
"end": 174,
"text": "(Radford et al. (2019)",
"ref_id": "BIBREF16"
},
{
"start": 177,
"end": 194,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 203,
"end": 224,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 312,
"end": 332,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 537,
"end": 556,
"text": "Pires et al. (2019)",
"ref_id": "BIBREF15"
},
{
"start": 651,
"end": 671,
"text": "(Ruder et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 868,
"end": 893,
"text": "Conneau and Lample (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Most of the recently launched language models have millions of parameters which demands huge amount of labelled data for training robust models. However, obtaining large amount of labeled data is a laborious and expensive process. Semisupervised approaches involve efficiently incorporating huge quantity of unlabelled data along with limited labelled data. There has been a lot of work in this area in image and text domain. Yalniz et al. (2019) propose a teacher-student paradigm for incorporating both unlabelled and weakly labelled data for training a image classifier. Karamanolakis et al. (2019) also make use of teacher-student approach for leveraging weak signals for aspect detection in text. Variational auto encoders (Yang et al. (2017) ; Gururangan et al. (2019) ) and virtual adversarial training (Miyato et al., 2016) have been extensively used in semi-supervised setting. Recently interpolations in textual hidden space (Chen et al., 2020) have been used for semi-supervised learning as well.",
"cite_spans": [
{
"start": 426,
"end": 446,
"text": "Yalniz et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 574,
"end": 601,
"text": "Karamanolakis et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 728,
"end": 747,
"text": "(Yang et al. (2017)",
"ref_id": "BIBREF25"
},
{
"start": 750,
"end": 774,
"text": "Gururangan et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 810,
"end": 831,
"text": "(Miyato et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 935,
"end": 954,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Multiple prior works (Sculley et al. (2011) ; Sanzgiri et al. (2018)) detect adversarial ads in online advertising platforms. While Sculley et al. (2011) provide a holistic view of creating an adversarial ad detection system, Sanzgiri et al. (2018) look at techniques for detecting sensitive content in images.",
"cite_spans": [
{
"start": 21,
"end": 43,
"text": "(Sculley et al. (2011)",
"ref_id": "BIBREF20"
},
{
"start": 132,
"end": 153,
"text": "Sculley et al. (2011)",
"ref_id": "BIBREF20"
},
{
"start": 226,
"end": 248,
"text": "Sanzgiri et al. (2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Our work focuses on techniques we leverage to train state of the art language models for detecting adversarial advertising content in text. However, the uncommon nature of these violations pose a challenge, often compounded in low resource languages. We leverage related work in semi-weak supervision and curriculum learning to overcome these challenges. We also show how data available in multiple languages can be used for training classifiers for a given target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A simple but tough-to-beat baseline for sentence embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "597--610",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00288"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond. Transac- tions of the Association for Computational Linguis- tics, 7:597-610.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mixtext: Linguistically-informed interpolation of hidden space for semi-supervised text classification",
"authors": [
{
"first": "Jiaao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.12239"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaao Chen, Zichao Yang, and Diyi Yang. 2020. Mix- text: Linguistically-informed interpolation of hid- den space for semi-supervised text classification. arXiv preprint arXiv:2004.12239.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "7059--7069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7059-7069.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Approximate statistical tests for comparing supervised classification learning algorithms",
"authors": [
{
"first": "G",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dietterich",
"suffix": ""
}
],
"year": 1998,
"venue": "Neural Comput",
"volume": "10",
"issue": "7",
"pages": "1895--1923",
"other_ids": {
"DOI": [
"10.1162/089976698300017197"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas G. Dietterich. 1998. Approximate statistical tests for comparing supervised classification learn- ing algorithms. Neural Comput., 10(7):1895-1923.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Variational pretraining for semi-supervised text classification",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Tam",
"middle": [],
"last": "Dang",
"suffix": ""
},
{
"first": "Dallas",
"middle": [],
"last": "Card",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.02242"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Tam Dang, Dallas Card, and Noah A Smith. 2019. Variational pretraining for semi-supervised text classification. arXiv preprint arXiv:1906.02242.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "On the power of curriculum learning in training deep networks",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Hacohen",
"suffix": ""
},
{
"first": "Daphna",
"middle": [],
"last": "Weinshall",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guy Hacohen and Daphna Weinshall. 2019. On the power of curriculum learning in training deep net- works. CoRR, abs/1904.03626.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Leveraging just a few keywords for finegrained aspect detection through weakly supervised co-training",
"authors": [
{
"first": "Giannis",
"middle": [],
"last": "Karamanolakis",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gravano",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.00415"
]
},
"num": null,
"urls": [],
"raw_text": "Giannis Karamanolakis, Daniel Hsu, and Luis Gravano. 2019. Leveraging just a few keywords for fine- grained aspect detection through weakly supervised co-training. arXiv preprint arXiv:1909.00415.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Note on the sampling error of the difference between correlated proportions or percentages",
"authors": [
{
"first": "Quinn",
"middle": [],
"last": "Mcnemar",
"suffix": ""
}
],
"year": 1947,
"venue": "Psychometrika",
"volume": "12",
"issue": "2",
"pages": "153--157",
"other_ids": {
"DOI": [
"10.1007/BF02295996"
]
},
"num": null,
"urls": [],
"raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adversarial training methods for semi-supervised text classification",
"authors": [
{
"first": "Takeru",
"middle": [],
"last": "Miyato",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodfellow",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.07725"
]
},
"num": null,
"urls": [],
"raw_text": "Takeru Miyato, Andrew M Dai, and Ian Good- fellow. 2016. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1493"
]
},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4996- 5001, Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mlxtend: Providing machine learning and data science utilities and extensions to python's scientific computing stack",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Raschka",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Open Source Software",
"volume": "3",
"issue": "24",
"pages": "",
"other_ids": {
"DOI": [
"10.21105/joss.00638"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Raschka. 2018. Mlxtend: Providing ma- chine learning and data science utilities and exten- sions to python's scientific computing stack. Jour- nal of Open Source Software, 3(24):638.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A survey of cross-lingual word embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vulic",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Artificial Intelligence Research",
"volume": "65",
"issue": "",
"pages": "569--631",
"other_ids": {
"DOI": [
"10.1613/jair.1.11640"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ivan Vulic, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research, 65:569-631.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Classifying sensitive content in online advertisements with deep learning",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Sanzgiri",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Austin",
"suffix": ""
},
{
"first": "Kannan",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Woodard",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Lissack",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Seljan",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA)",
"volume": "",
"issue": "",
"pages": "434--441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashutosh Sanzgiri, Daniel Austin, Kannan Sankaran, Ryan Woodard, Amit Lissack, and Sam Seljan. 2018. Classifying sensitive content in online advertise- ments with deep learning. In 2018 IEEE 5th Inter- national Conference on Data Science and Advanced Analytics (DSAA), pages 434-441. IEEE.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Detecting adversarial advertisements in the wild",
"authors": [
{
"first": "D",
"middle": [],
"last": "Sculley",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"Eric"
],
"last": "Otey",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Pohl",
"suffix": ""
},
{
"first": "Bridget",
"middle": [],
"last": "Spitznagel",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Hainsworth",
"suffix": ""
},
{
"first": "Yunkai",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "274--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D Sculley, Matthew Eric Otey, Michael Pohl, Brid- get Spitznagel, John Hainsworth, and Yunkai Zhou. 2011. Detecting adversarial advertisements in the wild. In Proceedings of the 17th ACM SIGKDD in- ternational conference on Knowledge discovery and data mining, pages 274-282.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On the stratification of multilabel data",
"authors": [
{
"first": "Konstantinos",
"middle": [],
"last": "Sechidis",
"suffix": ""
},
{
"first": "Grigorios",
"middle": [],
"last": "Tsoumakas",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Vlahavas",
"suffix": ""
}
],
"year": 2011,
"venue": "Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "145--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konstantinos Sechidis, Grigorios Tsoumakas, and Ioan- nis Vlahavas. 2011. On the stratification of multi- label data. Machine Learning and Knowledge Dis- covery in Databases, pages 145-158.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms",
"authors": [
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Renqiang Min",
"suffix": ""
},
{
"first": "Qinliang",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "440--450",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1041"
]
},
"num": null,
"urls": [],
"raw_text": "Dinghan Shen, Guoyin Wang, Wenlin Wang, Mar- tin Renqiang Min, Qinliang Su, Yizhe Zhang, Chun- yuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple word-embedding-based models and associated pool- ing mechanisms. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440- 450, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "network perspective on stratification of multi-label data",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Szyma\u0144ski",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Kajdanowicz",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications",
"volume": "74",
"issue": "",
"pages": "22--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Szyma\u00b4 A nski and Tomasz Kajdanowicz. 2017. network perspective on stratification of multi-label data. In Proceedings of the First International Work- shop on Learning with Imbalanced Domains: The- ory and Applications, volume 74 of Proceedings of Machine Learning Research, pages 22-35, ECML- PKDD, Skopje, Macedonia. PMLR. Wikipedia contributors. 2004. Plagiarism - Wikipedia, the free encyclopedia. [Online; ac- cessed Feb-2020].",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Billion-scale semi-supervised learning for image classification",
"authors": [
{
"first": "I",
"middle": [
"Zeki"
],
"last": "Yalniz",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
},
{
"first": "Kan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Manohar",
"middle": [],
"last": "Paluri",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Mahajan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Zeki Yalniz, Herv\u00e9 J\u00e9gou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. 2019. Billion-scale semi-supervised learning for image classification. CoRR, abs/1905.00546.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improved variational autoencoders for text modeling using dilated convolutions",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.08139"
]
},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved varia- tional autoencoders for text modeling using dilated convolutions. arXiv preprint arXiv:1702.08139.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>DEFECT CATEGORY</td><td colspan=\"5\">CAT1 CAT2 CAT3 CAT4 CAT5</td></tr><tr><td>COUNT OF KEYWORDS</td><td>315</td><td>240</td><td>36</td><td>45</td><td>50</td></tr><tr><td>COUNT OF CC</td><td>26</td><td>111</td><td>27</td><td>1</td><td>20</td></tr><tr><td>SCALE OF DATA</td><td>10</td><td>100</td><td>5</td><td>2</td><td>1</td></tr></table>",
"type_str": "table",
"text": "Statistics of deny list keywords, catalogue categorisation labels (CC) and relative scale of data for each label category",
"num": null,
"html": null
},
"TABREF1": {
"content": "<table><tr><td colspan=\"6\">PRECISION IMPROVEMENT AT HIGH RECALL THRESHOLD</td></tr><tr><td/><td colspan=\"3\">OVER BASELINE</td><td/></tr><tr><td>METHOD</td><td>CAT1</td><td>CAT2</td><td colspan=\"3\">CAT3 CAT4 CAT5</td></tr><tr><td>B_SS</td><td>+40.46</td><td>+11.6</td><td colspan=\"3\">+ 2.12 +4.85 +2.93</td></tr><tr><td>B_SWS</td><td colspan=\"2\">+40.48 +10.99</td><td>+6.44</td><td>+8.02</td><td>+7.09</td></tr></table>",
"type_str": "table",
"text": "Precision over Baseline, BERT(B) trained with limited labeled data, at our high Recall threshold for all models across defects.",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table><tr><td/><td/><td/><td/><td/><td colspan=\"2\">MODEL TYPE</td><td/><td/><td/><td colspan=\"2\">ABLATION TYPE</td></tr><tr><td/><td/><td>B</td><td>B_SS</td><td colspan=\"8\">B_SW S LEX</td><td>B_M L RAN D LEX</td></tr><tr><td>DE</td><td>TNR PREC.</td><td colspan=\"2\">+14.35 +23.15 +0.76 +1.69</td><td>+24.79 +1.95</td><td>+13.84 +0.72</td><td>+23.76 +1.78</td><td>+26.40 +2.24</td><td>+29.08 +2.83</td><td>+25.7 +2.11</td><td>+21.0 +1.4</td><td>+14.17 +0.75</td><td>+26.90 +2.34</td></tr><tr><td>FR</td><td>TNR PREC.</td><td colspan=\"2\">+15.29 +19.16 +0.77 +1.10</td><td>+20.05 +1.19</td><td>+15.10 +0.75</td><td>+20.41 +1.23</td><td>+22.11 +1.43</td><td>+19.57 +1.14</td><td>+21.02 +1.30</td><td>+20.68 +1.26</td><td>+12.71 +0.59</td><td>+18.80 +1.07</td></tr></table>",
"type_str": "table",
"text": "Precision and TNR improvements at our high recall threshold for all the explored models for DE and FR languages using different training strategies and for ablation studies in Section 4.1 over BOE_LIN. Here B refers to M-BERT finetuned with limited labeled data. B_T LRS B_T LF T B_T LCL B_M LLEX B_T LACL B_T LRCL B_M L REV",
"num": null,
"html": null
},
"TABREF3": {
"content": "<table><tr><td colspan=\"3\">LANGUAGE EN DE</td><td>FR</td><td>ES</td><td>IT</td></tr><tr><td>EN</td><td>1</td><td colspan=\"2\">0.6 0.27</td><td>-</td><td>-</td></tr><tr><td>DE</td><td>0.6</td><td>1</td><td>0.29</td><td>-</td><td>-</td></tr><tr><td>FR</td><td colspan=\"2\">0.27 0.29</td><td>1</td><td colspan=\"2\">0.75 0.89</td></tr><tr><td>ES</td><td>-</td><td>-</td><td>0.75</td><td>1</td><td>0.82</td></tr><tr><td>IT</td><td>-</td><td>-</td><td colspan=\"2\">0.89 0.82</td><td>1</td></tr></table>",
"type_str": "table",
"text": "Lexical Similarity scores between languages of interest taken from Wikipedia.",
"num": null,
"html": null
}
}
}
}