{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:51.269186Z"
},
"title": "Robust Product Classification with Instance-Dependent Noise",
"authors": [
{
"first": "Huy",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon.com, Inc. Seattle",
"location": {
"settlement": "Washington",
"country": "USA"
}
},
"email": "nguynnq@amazon.com"
},
{
"first": "Devashish",
"middle": [],
"last": "Khatwani",
"suffix": "",
"affiliation": {},
"email": "khatwad@amazon.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Noisy labels in large E-commerce product data (i.e., product items are placed into incorrect categories) are a critical issue for product categorization task because they are unavoidable, nontrivial to remove and degrade prediction performance significantly. Training a product title classification model which is robust to noisy labels in the data is very important to make product classification applications more practical. In this paper, we study the impact of instancedependent noise to performance of product title classification by comparing our data denoising algorithm and different noise-resistance training algorithms which were designed to prevent a classifier model from over-fitting to noise. We develop a simple yet effective Deep Neural Network for product title classification to use as a base classifier. Along with recent methods of stimulating instance-dependent noise, we propose a novel noise stimulation algorithm based on product title similarity. Our experiments cover multiple datasets, various noise methods and different training solutions. Results uncover the limit of classification task when noise rate is not negligible and data distribution is highly skewed.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Noisy labels in large E-commerce product data (i.e., product items are placed into incorrect categories) are a critical issue for product categorization task because they are unavoidable, nontrivial to remove and degrade prediction performance significantly. Training a product title classification model which is robust to noisy labels in the data is very important to make product classification applications more practical. In this paper, we study the impact of instancedependent noise to performance of product title classification by comparing our data denoising algorithm and different noise-resistance training algorithms which were designed to prevent a classifier model from over-fitting to noise. We develop a simple yet effective Deep Neural Network for product title classification to use as a base classifier. Along with recent methods of stimulating instance-dependent noise, we propose a novel noise stimulation algorithm based on product title similarity. Our experiments cover multiple datasets, various noise methods and different training solutions. Results uncover the limit of classification task when noise rate is not negligible and data distribution is highly skewed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Product classification is a quintessential Ecommerce machine learning problem in which product items are placed into their respective categories. With recent advancements of Deep Learning, various unimodal (i.e ., text only) and multimodal (e.g., text and image) models have been developed to predict larger numbers of items and categories with better accuracy (Gao et al., 2020; Chen et al., 2021a; Brinkmann and Bizer, 2021) . However, one of the fundamental assumptions behind such models is the availability of large and highquality labeled datasets. Access to such datasets is usually costly or infeasible in some settings. Large product datasets usually suffer from annotation er-rors, i.e., products are assigned to incorrect categories, partially due to complex category structure, confusing categories and similar titles. The problem of noisy labels is even more severe when product category distribution is highly imbalanced with heavy-tail (Shen et al., 2012; Das et al., 2016) . Therefore, a text classifier which is robust to noisy labels present in training data is critical for highperforming product classification applications.",
"cite_spans": [
{
"start": 179,
"end": 210,
"text": "Learning, various unimodal (i.e",
"ref_id": null
},
{
"start": 361,
"end": 379,
"text": "(Gao et al., 2020;",
"ref_id": "BIBREF9"
},
{
"start": 380,
"end": 399,
"text": "Chen et al., 2021a;",
"ref_id": "BIBREF4"
},
{
"start": 400,
"end": 426,
"text": "Brinkmann and Bizer, 2021)",
"ref_id": "BIBREF3"
},
{
"start": 951,
"end": 970,
"text": "(Shen et al., 2012;",
"ref_id": "BIBREF27"
},
{
"start": 971,
"end": 988,
"text": "Das et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While machine learning in the presence of label noise has been studied for decades, most of prior studies experimented in computer vision domain (Gu et al., 2021; Song et al., 2022) , and only a few research was conducted in text classification (Jindal et al., 2019; Garg et al., 2021) . Without an annotated dataset with manually-identified label noise, classical approaches for label noise stimulation assume class-conditional noise (CCN) where the probability of an item having label corrupted depends on the original and noisy labels. With this assumption, all products of \"Men's Watches\" category have the sample probability to be assigned \"Women's Watches\" label. This is not generally correct. For instance, product titles having phrase \"men's watches\" are less likely mis-labeled. Recent research addresses more general label noise, i.e., instance-dependent noise (IDN) , that an item is mis-labeled with a probability depending on its original label and features.",
"cite_spans": [
{
"start": 145,
"end": 162,
"text": "(Gu et al., 2021;",
"ref_id": null
},
{
"start": 163,
"end": 181,
"text": "Song et al., 2022)",
"ref_id": "BIBREF28"
},
{
"start": 245,
"end": 266,
"text": "(Jindal et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 267,
"end": 285,
"text": "Garg et al., 2021)",
"ref_id": "BIBREF10"
},
{
"start": 872,
"end": 877,
"text": "(IDN)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
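To make the CCN/IDN distinction above concrete, the following minimal sketch contrasts the two corruption models; the transition matrix, flip probabilities, and category counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ccn_flip(y, transition):
    # Class-conditional noise: the corrupted label depends only on the true
    # label y, via a fixed row-stochastic transition matrix.
    return int(rng.choice(len(transition[y]), p=transition[y]))

def idn_flip(y, flip_prob, candidate_dist):
    # Instance-dependent noise: flip_prob and candidate_dist are derived from
    # the item's own features (e.g., its title), so two items with the same
    # true label can be corrupted differently.
    if rng.random() < flip_prob:
        return int(rng.choice(len(candidate_dist), p=candidate_dist))
    return y

# Toy example with 3 categories (hypothetical numbers).
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
print(ccn_flip(0, T))                                 # same distribution for every item of class 0
print(idn_flip(0, 0.05, np.array([0.0, 0.9, 0.1])))   # clear title: rarely flipped
print(idn_flip(0, 0.60, np.array([0.0, 0.2, 0.8])))   # confusing title: often flipped
```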
{
"text": "In this paper, we present a comprehensive study on improving product title classification in the presence of IDN. We develop a simple yet effective Deep Neural Network for text classification and show that our model performs well on different product title datasets ranging from small to medium sizes, balanced to skewed distributions, and tens to over a hundred categories. To generate noisy labels for experiments, our first contribution is an IDN stimulation algorithm which flips an item's label based on its similarity to items of other categories. Noisy label data generated by our method is com-pared with prior IDN stimulation methods for their impact to model accuracy degradation. To make the model robust to label noise, our second contribution is a data augmentation method that reduces noise rate and thus improves model's accuracy. We compare three state-of-the-art Deep Neural Network training algorithms to train a classifier on data with label noise generated by different methods. From experimental results we discuss lessons learned for product title classification in production. To the best of our knowledge, this work is the first time that noise-resistance model training is studied in E-commerce domain, which is our third contribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automatic product categorization has been well studied to address its challenges including large number of items and categories, and hierarchical categories structure (Gao et al., 2020; Chen et al., 2021a; Brinkmann and Bizer, 2021) . The largescale nature of product data leads to a critical issue of noisy labels. For example, an E-commerce website reported that 15% of product listings by sellers have incorrect labels (Shen et al., 2012) . Das et al. (2016) attempted to use a latent topic model to help manually inspect noisy categories and remove incorrect samples. Our current study focuses on fully automated methods for data denoising and noise-resistance training to prevent models from over-fitting to noisy samples.",
"cite_spans": [
{
"start": 167,
"end": 185,
"text": "(Gao et al., 2020;",
"ref_id": "BIBREF9"
},
{
"start": 186,
"end": 205,
"text": "Chen et al., 2021a;",
"ref_id": "BIBREF4"
},
{
"start": 206,
"end": 232,
"text": "Brinkmann and Bizer, 2021)",
"ref_id": "BIBREF3"
},
{
"start": 422,
"end": 441,
"text": "(Shen et al., 2012)",
"ref_id": "BIBREF27"
},
{
"start": 444,
"end": 461,
"text": "Das et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Training Deep Neural Networks (DNN) with noisy labels is challenging because DNN's large learning capacity make them highly susceptible to over-fitting to noise (Arpit et al., 2017; Zhang et al., 2021a) . Early work stacked DNN with layers to model noise-transition matrix assuming classconditional noise, i.e., noisy label\u0177 only depends on true label y but not on the input x (Jindal et al., 2016; Patrini et al., 2017) . Because noise transition matrix can be difficult to learn or not feasible in real-world settings, other directions targeted to selecting clean samples in each mini-batch and use them to update DNN's parameters (Jiang et al., 2018; Malach and Shalev-Shwartz, 2017 ). Among those, CoTeaching (Han et al., 2018) and CoTeaching + (Yu et al., 2019) showed the effectiveness of cross-training two networks simultaneously in that each network sends selective samples for the other to learn. A more realistic assumption of noisy labels is instance-dependent noise (IDN) in which probability of noisy label\u0177 depends on true label y and input x . Among stateof-the-art work on IDN, Self-Evolution Average Label -SEAL and Progressive Label Correction -PLC (Zhang et al., 2021b) are representatives of label refurbishment (Song et al., 2022) that uses softmax output to assign soft labels to training instances. We compare SEAL, PLC and CoTeaching + on training a product title classifier with label noise.",
"cite_spans": [
{
"start": 161,
"end": 181,
"text": "(Arpit et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 182,
"end": 202,
"text": "Zhang et al., 2021a)",
"ref_id": "BIBREF30"
},
{
"start": 377,
"end": 398,
"text": "(Jindal et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 399,
"end": 420,
"text": "Patrini et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 633,
"end": 653,
"text": "(Jiang et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 654,
"end": 685,
"text": "Malach and Shalev-Shwartz, 2017",
"ref_id": "BIBREF20"
},
{
"start": 713,
"end": 731,
"text": "(Han et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 749,
"end": 766,
"text": "(Yu et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 1168,
"end": 1189,
"text": "(Zhang et al., 2021b)",
"ref_id": "BIBREF31"
},
{
"start": 1233,
"end": 1252,
"text": "(Song et al., 2022)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this study, we employ 6 public datasets for product classification. While some datasets have multimodal inputs, e.g., product titles, descriptions, images, we use only product title inputs and leave other fields for a future work. This restriction may prevent us from achieving the best possible performance by incorporating other information-rich inputs (Chen et al., 2021a) . However, our main motivation is to evaluate noise-resistance training approaches. For each dataset, we filter-out category labels with less than 10 samples, then apply stratified random sampling to split 10% for testing and 90% for training. We leave a study of few-shot learning for product title classification for future work. Hyper-parameters of models and training algorithms are fine-tuned within training sets when needed. In experiments with noisy labels, only training samples have label corrupted while testing sets are unchanged. This assures a realistic evaluation that model accuracies are measured against ground-truth disregarding how the model was trained. To measure skewness of data label distribution, we calculate KL-divergence from the actual category distribution to uniform distribution. Data statistics are shown in Table 1 . \u2022 WDC dataset is WDC-25 Gold Standard for Product Categorization (Primpeli et al., 2019) . We remove items with category label \"notfound\" and keep 23,597 samples with 24 class labels. \u2022 Pricerunner, Shopmania, Skroutz datasets 2 were collected from three online electronic stores and product comparison platforms (Akritidis et al., 2018 (Akritidis et al., , 2020 .",
"cite_spans": [
{
"start": 358,
"end": 378,
"text": "(Chen et al., 2021a)",
"ref_id": "BIBREF4"
},
{
"start": 1296,
"end": 1319,
"text": "(Primpeli et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 1544,
"end": 1567,
"text": "(Akritidis et al., 2018",
"ref_id": "BIBREF0"
},
{
"start": 1568,
"end": 1593,
"text": "(Akritidis et al., , 2020",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1221,
"end": 1228,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
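A minimal sketch of the preprocessing described above: dropping categories with fewer than 10 samples, a stratified 90/10 split, and the KL-divergence from the empirical category distribution to the uniform distribution as a skewness measure. Function names and the use of scikit-learn are assumptions for illustration, not the authors' code.

```python
from collections import Counter

import numpy as np
from sklearn.model_selection import train_test_split


def preprocess(titles, labels, min_count=10, test_size=0.10, seed=0):
    # Keep only categories with at least `min_count` samples.
    counts = Counter(labels)
    keep = [i for i, y in enumerate(labels) if counts[y] >= min_count]
    titles = [titles[i] for i in keep]
    labels = [labels[i] for i in keep]
    # Stratified 90/10 train/test split.
    return train_test_split(titles, labels, test_size=test_size,
                            stratify=labels, random_state=seed)


def kl_to_uniform(labels):
    # KL divergence from the empirical category distribution to uniform,
    # used as a rough measure of label-distribution skewness.
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    u = 1.0 / len(counts)
    return float(np.sum(p * np.log(p / u)))
```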
{
"text": "As shown in Table 1 , datasets Flipkart, Shopmania and Skroutz are highly imbalanced with KLdivergence greater than 1. Each of these datasets has major classes with thousands of samples and minor classes with tens of samples. WDC dataset is moderately skewed having 24 classes with number of samples ranging from 10 to 4,753. Retail and Pricerunner sets are the most balanced with KL-divergence close to zero. Retail dataset has roughly 2,200 samples per class while Pricerunner has class samples in range (2000, 6000).",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "We develop a product title classifier based on LSTM-CNNs architecture proposed in (Ma and Hovy, 2016) . The network architecture is depicted in Figure 1 . Input encoding layer is a concatenation of word-embeddings (looking-up function against GloVe pre-trained embeddings (Pennington et al., 2014)) and character embeddings (output of a Character-CNN layer). The sequence of embedding vectors is passed to a Bidirectional Recurrent Neural Network of LSTM cells (Hochreiter and Schmidhuber, 1997) . Prediction is carried by a dense layer whose input is last hidden state of Bidirectional LSTM. The DNN is implemented To evaluate our implementation, we compare model performance with fine-tunning the pretrained BERT-base uncased language model (Devlin et al., 2019) . Results on 6 datasets with clean label are reported in Table 2 . 3 Our model performs on par with BERT-base in small datasets Flipkart, WDC, and Retail with macro F1 of less than 1 percentage point lower. For datasets Pricerunner and Skroutz, both models return great performance with BERT-base outperforming our model by 2 percentage points. Shopmania dataset observes the largest performance difference when BERT achieves F1 score 4 percentage points higher than LSTM-CNNs. Good performance of LSTM-CNNs gives us a strong base classifier which is much faster to train than BERT-base (LSTM-CNNs has approximately 6M of trainable parameters while it is 110M for BERT). We will study the impact of pre-training on noise-resistance in a future study.",
"cite_spans": [
{
"start": 82,
"end": 101,
"text": "(Ma and Hovy, 2016)",
"ref_id": "BIBREF19"
},
{
"start": 461,
"end": 495,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF13"
},
{
"start": 743,
"end": 764,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 832,
"end": 833,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 822,
"end": 829,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Base Model for Product Title Categorization",
"sec_num": "4"
},
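A rough PyTorch sketch of the base classifier described above: word embeddings concatenated with character-CNN embeddings, a bidirectional LSTM, and a dense layer over the final hidden states. All dimensions and layer sizes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class TitleClassifier(nn.Module):
    """Sketch of the LSTM-CNNs base model: word embedding + char-CNN
    embedding -> BiLSTM -> dense layer over the final hidden states."""

    def __init__(self, word_vocab, char_vocab, n_classes,
                 word_dim=100, char_dim=30, char_filters=30, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)   # initialized from GloVe in practice
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(word_dim + char_filters, hidden,
                              batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, word_len)
        b, s, w = char_ids.shape
        chars = self.char_emb(char_ids.view(b * s, w)).transpose(1, 2)   # (b*s, char_dim, w)
        chars = torch.relu(self.char_cnn(chars)).max(dim=2).values       # (b*s, char_filters)
        chars = chars.view(b, s, -1)
        x = torch.cat([self.word_emb(word_ids), chars], dim=-1)
        _, (h, _) = self.bilstm(x)                                       # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)                              # final states of both directions
        return self.out(h)                                               # class logits
```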
{
"text": "A common approach for automated IDN generation is to train one or a set of classifiers on clean label data, and use such classifiers to generate noisy labels for the whole dataset. Related studies can be different on how to maintain a pool of classifiers, e.g., different checkpoints of a single models or different model architectures, and label placement strategies, e.g., whether replacing clean label samples with noisy counterparts or allowing a sample to have multiple copies with different labels. We follow (Zhang et al., 2021b; to use replacement strategy which is considered a more difficult setting. We implement four different IDN algorithms, and adjust parameters to generate noisy label data with noise rates (i.e., ratio of noisy label samples over data size) in two levels: 0.2 (low) and 0.4 (medium). Last-epoch IDN: We train a base classifier for 10 epochs to obtain the network corresponding to last epoch checkpoint. The trained network is executed on training data to obtain prediction confidence score (i.e., output of softmax layer) for every sample. Following the formula of noise type-I described in (Zhang et al., 2021b) , we corrupt item category from the most confident label to the second confident label. This method uses a noise factor parameter to control noise rate, thus we run different trials to probe the noise factors that give us noise rates of interest. Multi-epoch IDN: The base classifier is trained for 10 epochs to obtain a sequence of networks corresponding to multiple epoch checkpoints. Each sample is assigned a score as the average of prediction probabilities assigned by network sequences following the algorithm proposed in . Potential noisy label should have the highest score among possible labels excluding the ground truth. In particular, data instances are sorted by scores of most likely corrupted labels, and r proportion of top instances will have labels flipped to obtain noise rate r.",
"cite_spans": [
{
"start": 515,
"end": 536,
"text": "(Zhang et al., 2021b;",
"ref_id": "BIBREF31"
},
{
"start": 1125,
"end": 1146,
"text": "(Zhang et al., 2021b)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Instance-Dependent Noise Stimulation",
"sec_num": "5"
},
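A minimal sketch of the last-epoch IDN step under the reading above: the trained base model's softmax scores decide, per item, whether its label is flipped from the most to the second most confident class, with a noise factor probed by trial until the realised noise rate reaches 0.2 or 0.4. Multi-epoch and multi-model IDN differ mainly in that the per-sample scores are averaged over epoch or model checkpoints. The exact flipping formula of Zhang et al. (2021b) is not reproduced here; this is an assumed simplification.

```python
import numpy as np

rng = np.random.default_rng(0)


def last_epoch_idn(probs, labels, noise_factor):
    """Flip labels from the most to the second most confident class with a
    probability scaled by `noise_factor` (a simplified stand-in for the
    type-I noise formula of Zhang et al., 2021b).

    probs:  (n, n_classes) softmax outputs of the trained base model
    labels: (n,) original labels
    """
    labels = np.asarray(labels)
    noisy = labels.copy()
    order = np.argsort(-probs, axis=1)            # classes sorted by confidence
    second_best = order[:, 1]
    flip_p = np.clip(noise_factor * probs[np.arange(len(labels)), second_best], 0.0, 1.0)
    flipped = rng.random(len(labels)) < flip_p
    noisy[flipped] = second_best[flipped]
    return noisy, float(flipped.mean())           # noisy labels and realised noise rate
```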
{
"text": "Multi-model IDN: Similarly to multi-epoch IDN, we train 5 different versions of the base classifier by varying initial weights to get a network sequence, each network corresponds to last epoch checkpoint (i.e., epoch 10) of a training. Then we apply the same algorithm as in multi-epoch IDN to calculate noisy labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance-Dependent Noise Stimulation",
"sec_num": "5"
},
{
"text": "Similarity-based IDN: From our experience in product data analyses, we hypothesize that human annotators, and thus machine learning models, may have difficulties in categorizing similar items, e.g., \"Tara Lifestyle Chhota Bheem Printed Art Plastic Pencil Boxes\" and \"Starmark BTS Star Art Polyester Pencil Box\". Our idea is to locate highly similar items across categories and flip their category labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance-Dependent Noise Stimulation",
"sec_num": "5"
},
{
"text": "To generate noisy labels, we first calculate textual similarity between items of different categories. We implement two vector-based cosine similarity computations. First, A SentenceTransformer model (Reimers and Gurevych, 2019) 4 is used to generate embeddings of product titles. Second, a Tf-Idf model is learned from training set to generate Tf-Idf vectors of input titles. For each pair of product titles, we compare two cosine similarities calculated from sentence embedding vectors and Tf-Idf vectors. The greater score of two methods is assigned as similarity score Sim of two inputs. For each item i c of category c, we record the maximal similarity score Maxsim between it and every item from another category c of category set C:",
"cite_spans": [
{
"start": 200,
"end": 228,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Instance-Dependent Noise Stimulation",
"sec_num": "5"
},
{
"text": "Maxsim c (i c ) = max j (Sim(i c , j)) j \u2208 c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance-Dependent Noise Stimulation",
"sec_num": "5"
},
{
"text": "The sequence of maximal similarity scores of the item is used as weight vector I c for a multinominal distribution from which we draw a noisy label\u0109 given the item.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance-Dependent Noise Stimulation",
"sec_num": "5"
},
{
"text": "I c = {Maxsim c (i c ) \u2200c \u2208 C, c = c}. c \u223c Multinomial(I c )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance-Dependent Noise Stimulation",
"sec_num": "5"
},
{
"text": "For all items, we assign their Maxsim\u0109 as representative scores of their corrupted labels, and we sort items by corrupted label scores from high to low. Given noise rate r, we select top r proportion of items to replace true labels by corrupted labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance-Dependent Noise Stimulation",
"sec_num": "5"
},
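Putting the similarity-based IDN pieces together, the sketch below computes Sim as the element-wise maximum of SentenceTransformer and Tf-Idf cosine similarities, records Maxsim per candidate category, draws a corrupted label from the multinomial weighted by those scores, and replaces the labels of the top r fraction of items. The all-MiniLM-L12-v2 model name comes from the footnote; the clipping of negative similarities and other small details are assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)


def similarity_based_idn(titles, labels, noise_rate):
    labels = np.asarray(labels)
    classes = np.unique(labels)

    # Sim: element-wise max of the two cosine similarities.
    emb = SentenceTransformer("all-MiniLM-L12-v2").encode(titles)
    tfidf = TfidfVectorizer().fit_transform(titles)
    sim = np.maximum(cosine_similarity(emb), cosine_similarity(tfidf))

    noisy = labels.copy()
    scores = np.zeros(len(titles))
    for i, c in enumerate(labels):
        # Maxsim_{c'}(i_c) for every other category c'.
        others = classes[classes != c]
        weights = np.array([sim[i, labels == c_other].max() for c_other in others])
        weights = np.clip(weights, 1e-9, None)        # guard against negative cosines
        drawn = rng.choice(len(others), p=weights / weights.sum())   # c_hat ~ Multinomial(I_c)
        noisy[i] = others[drawn]
        scores[i] = weights[drawn]                     # Maxsim of the drawn corrupted label

    # Replace true labels for the top `noise_rate` fraction of items.
    k = int(noise_rate * len(titles))
    top = np.argsort(-scores)[:k]
    out = labels.copy()
    out[top] = noisy[top]
    return out
```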
{
"text": "6 Experiments on Noisy Labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance-Dependent Noise Stimulation",
"sec_num": "5"
},
{
"text": "We propose a novel data denoising method that reduces noise ratio by relabeling a sample when its prediction is certain. We say an input has certain prediction when model prediction on both original and corrupted inputs are the same. Our method relies on an idea of critical information assumption, i.e., we hypothesize that there are product titles which provide too much information that model does not need to use all words to predict their labels. For such titles, if one or more words are dropped, model should still predict the same label. There have been different studies to extract part of critical information from input to explain output of prediction models (Ribeiro et al., 2016; Lundberg and Lee, 2017; Kokalj et al., 2021) . Regarding product title, leading words are considerately more important than trailing words for recognizing product category. 5 Algorithm 1 is a simple heuristic to drop words from a product title. Statement 2 makes sure some right words are dropped even when an input is less than 15 words. We propose Algorithm 2 to denoise training data. With clean data, model should achieves highly confident predictions on training samples. Thus, we reason that unconfident predictions on training samples (i.e., p \u2264 0.8) are likely due to noisy labels. We note that in case of noisy training, input label is not considered ground truth generally. 5 A common template arranges title words in order of Brand Name > Product > Key features > Size > Color > Quantity (sellerengine.com/product-title-keyword-strategiesfor-new-products-on-amazon).",
"cite_spans": [
{
"start": 717,
"end": 737,
"text": "Kokalj et al., 2021)",
"ref_id": "BIBREF17"
},
{
"start": 1377,
"end": 1378,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Denoising by Corrupting Product Titles",
"sec_num": "6.1"
},
{
"text": "Algorithm 1 Drop words from a product title 1: Drop left words until dropped words have at least 5 letters in total or less then 4 words remaining 2: Drop right words until dropped words have at least 5 letters in total or less then 4 words remaining 3: Drop right words while there are more than 15 words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Denoising by Corrupting Product Titles",
"sec_num": "6.1"
},
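A direct sketch of Algorithm 1, assuming "at least 5 letters in total" refers to the combined length of the dropped words and that "fewer than 4 words remaining" is the stopping condition for steps 1 and 2.

```python
def corrupt_title(title, min_drop_letters=5, min_words=4, max_words=15):
    """Sketch of Algorithm 1: drop words from both ends of a product title."""
    words = title.split()

    # Step 1: drop words from the left until >=5 letters are dropped
    # or fewer than `min_words` words remain.
    dropped = 0
    while dropped < min_drop_letters and len(words) >= min_words:
        dropped += len(words.pop(0))

    # Step 2: drop words from the right under the same stopping rule.
    dropped = 0
    while dropped < min_drop_letters and len(words) >= min_words:
        dropped += len(words.pop())

    # Step 3: keep dropping from the right while more than 15 words remain.
    while len(words) > max_words:
        words.pop()

    return " ".join(words)
```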
{
"text": "Steps 3 and 4 update 6 training samples while step 5 removes samples which the model is unsure. Our denoising algorithm reduces noise rate with a trade-off of smaller training data. Their impact to training data is shown in Table 3 . For each dataset and input noise rate, we average noise rate and data size reductions after denoising the data corrputed by different noise stimulations.",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data Denoising by Corrupting Product Titles",
"sec_num": "6.1"
},
{
"text": "Algorithm 2 Denoise training data 1: Run pre-trained model M on training data D:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Denoising by Corrupting Product Titles",
"sec_num": "6.1"
},
{
"text": "{L o , P o } \u2190 M (D)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Denoising by Corrupting Product Titles",
"sec_num": "6.1"
},
{
"text": "where L o are predicted label and P o are prediction probability 2: Run M on corrupted training dataD (i.e., drops words from titles): ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Denoising by Corrupting Product Titles",
"sec_num": "6.1"
},
{
"text": "{L d , P d } \u2190 M (D)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Denoising by Corrupting Product Titles",
"sec_num": "6.1"
},
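Steps 3-5 of Algorithm 2 are not preserved in this parse, so the sketch below reconstructs them from the prose in Section 6.1: a sample is relabeled when the model gives the same prediction on the original and word-dropped titles, and removed when the prediction is unconfident (p ≤ 0.8). The `model.predict` helper, the `corrupt` argument, and the exact branching are assumptions rather than the authors' pseudo-code.

```python
def denoise(model, titles, labels, corrupt, threshold=0.8):
    """Sketch of Algorithm 2. `model.predict` is a hypothetical helper that
    returns, for each title, a predicted label and its probability; `corrupt`
    is a word-dropping function such as the Algorithm 1 sketch above."""
    # Steps 1-2: run the pre-trained model M on D and on the corrupted D~.
    L_o, P_o = model.predict(titles)
    L_d, P_d = model.predict([corrupt(t) for t in titles])

    kept_titles, kept_labels = [], []
    for title, label, lo, po, ld, pd in zip(titles, labels, L_o, P_o, L_d, P_d):
        if lo == ld and min(po, pd) > threshold:
            # Certain, confident prediction: relabel with the model's output
            # (reconstruction of steps 3-4).
            kept_titles.append(title)
            kept_labels.append(lo)
        elif lo == ld:
            # Consistent but unconfident: keep the sample with its input label.
            kept_titles.append(title)
            kept_labels.append(label)
        # else: the model is unsure about this sample; drop it (step 5).
    return kept_titles, kept_labels
```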
{
"text": "In this study, we compare three training solutions that were developed for data with noisy labels: Self-Evolution Average Label -SEAL , Progressive Label Correction -PLC (Zhang et al., 2021b) and CoTeaching + -CTp (Yu et al., 2019) . The three training algorithms work independently from the underlying models. SEAL trains a model on multiple iterations. In each iteration, SEAL optimizes model's loss against soft labels which are average predictions over epochs of the previous iteration. PLC first trains noisy label data normally for a number of epochs, i.e., warm-up phase, with expectation that model can learn from clean labels before over-fits to noisy labels. Then PLC corrects input labels after each epoch for cases that it yields a confidence score above a threshold. CoTeaching + is an upgrade of CoTeaching paradigm that cross-trains two models using only small-loss samples in each mini-batch. CoTeaching + further prevents the two models from convergence by passing only samples whose predictions disagree among small-loss data to loss optimization step.",
"cite_spans": [
{
"start": 170,
"end": 191,
"text": "(Zhang et al., 2021b)",
"ref_id": "BIBREF31"
},
{
"start": 214,
"end": 231,
"text": "(Yu et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Noise-Resistance Training Algorithms",
"sec_num": "6.2"
},
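As an illustration of the CoTeaching+ selection rule described above, the sketch below performs one mini-batch step: restrict to samples on which the two networks disagree, let each network rank those by loss, and update each network on its peer's small-loss picks (Yu et al., 2019). The keep-rate schedule, optimizers, and fall-back behaviour are simplified assumptions.

```python
import torch
import torch.nn.functional as F


def coteaching_plus_step(net1, net2, opt1, opt2, x, y, keep_rate):
    """One mini-batch of CoTeaching+ (sketch): disagreement filtering,
    small-loss selection, and cross-updating of the two networks."""
    logits1, logits2 = net1(x), net2(x)
    pred1, pred2 = logits1.argmax(1), logits2.argmax(1)

    disagree = (pred1 != pred2).nonzero(as_tuple=True)[0]
    if len(disagree) == 0:
        disagree = torch.arange(len(y), device=y.device)   # fall back to the whole batch

    loss1 = F.cross_entropy(logits1[disagree], y[disagree], reduction="none")
    loss2 = F.cross_entropy(logits2[disagree], y[disagree], reduction="none")
    k = max(1, int(keep_rate * len(disagree)))
    pick1 = disagree[torch.argsort(loss1)[:k]]   # small-loss samples according to net1
    pick2 = disagree[torch.argsort(loss2)[:k]]   # small-loss samples according to net2

    # Each network learns from the samples selected by its peer.
    opt1.zero_grad()
    F.cross_entropy(net1(x[pick2]), y[pick2]).backward()
    opt1.step()
    opt2.zero_grad()
    F.cross_entropy(net2(x[pick1]), y[pick1]).backward()
    opt2.step()
```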
{
"text": "Experimental results of individual models are shown in Table 4 . We first train the base classifier directly on noisy label data and record Macro F1 score on column Base. We then denoise 7 training data before training the base classifier, and enter performance into column DeN. Next columns report F1 scores of models trained by noise-resistance algorithms on noisy label data (i.e., not desnoised).",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.3"
},
{
"text": "As expected, label noises degrade model performance significantly. Noise rate 0.2 reduces performance of base model from 5% (Skroutz) -18% (Flipkart), while the performance reduction is 17% (Skroutz) to 46% (Flipkart) given noise rate 0.4. Pricerunner and Skroutz have lowest performance degradation which is reasonable because these two datasets are the easiest (see Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 375,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.3"
},
{
"text": "Evaluating impact of different IDN methods, similarity-based IDN degrades performance of base classifier the most in comparison with other IDN methods. Comparing performance of noiseresistance training methods with base classifier, we report average performance improvement (API) over different datasets in percentage point. Noiseresistance training methods have the most diffi-culty in improving multi-epoch and multi-model IDNs. In particular, performance improvements are at most 2% and 5% when multi-epoch and multimodel IDN rates are 0.2 and 0.4 respectively. Such noise-resistance training methods achieve much higher performance improvements when noisy labels are generated by other two IDN methods. Particularly, average performance improves are at least 4% and 8% when last-epoch and similarity-based IDN rates are 0.2 and 0.4 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.3"
},
{
"text": "Denoising data before training show improvements but performance improvements are lower for multi-epoch and multi-model IDN's than for last-epoch and similarity-based IDN's. Although our data denoising implementation is basic, it helps improve performance more than PLC in many settings, e.g., higher API in last-epoch, multi-epoch and multi-model IDN's. This encourage us to explore more advanced classifiers for better noise reduction results. Table 5 summarizes the results by grouping by dataset name then averaging over different noise stimulation methods. It is shown that CoTeaching + performs better than other methods in many datasets, e.g., 5 datasets with noise rate 0.2 and 4 datasets out of 6 with noise rate 0.4. DeN performs worse than three noise-resistance training methods despite a fact that noise rate was reduced significantly as shown in Table 3 . We hypothesize that regular training cannot recover from noisy instances that denoising algorithm is unable to correct/remove.",
"cite_spans": [],
"ref_spans": [
{
"start": 446,
"end": 453,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 860,
"end": 867,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.3"
},
{
"text": "Comparing different datasets, we observe that Shopmania is the most difficult. Among denoising and noise-resistance training algorithms, the best approach could only improve performance by 4% and 7% when noise rate is 0.2 and 0.4 respectively. CoTeaching + even performed worse than base classifier on this dataset. As shown in Table 1 , Shopmania is the largest dataset, has the most num- Regarding imbalanced data, noisy labels in a minor class might be harder to address due to its small number of instances. Finally, prediction performance at high noise rate 0.6 is briefly shown in Table 6 . We only compare base classifier to CoTeaching + which is the best performing approach in this setting. While noise-resistance training algorithms do improve performance, overall performance is low. In our opinion, such a performance score is too low for an product title classification application. Thus we do not find any of the three training algorithms or our denoising algorithm can work reasonably well with high noise rate in product data.",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 335,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 587,
"end": 594,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.3"
},
{
"text": "Data denoising algorithm opens new opportunities for us to further improve product title classification with noisy labels. We plan to improve data denoising by several techniques: (1) run denoising algorithm using a base model trained with small number of epochs to prevent over-fitting to noise, (2) use more advanced base classifier, and transformerbased model is a good candidate. Stacking data denoising and noise-resistance training is another extension, and we can approach this in two ways: (1) data denoising provides less-noisy data for noiseresistance training, (2) noise-resistance training provides better base model to denoise data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "6.4"
},
{
"text": "In this paper, we evaluate a denoising algorithm and three training approaches for product title classification with category labels corrupted by instancedependent noise. We introduce a new IDN stimulation algorithm and compare with three IDN algorithms from prior studies to explore model performance on a wider range of noise type. Therefore our study can evaluate model robustness to IDN more reliably. Overall we find that CoTeaching + achieves highest average improvement and be our recommendation when applying to new product data without prior knowledge of noise cause or true distribution. SEAL can be a good method when we have clean validation data to evaluate. However, all methods studied in this paper have difficulties to address noise in large scale data with highly imbalanced class distribution, especially when noise rate is high. For such extreme setting, application of data denoising and noise-resistance training algorithms could not yield to reasonable performance for applying to production. For a future work, we plan to combine multiple techniques including transformer-based classifier as a more advanced model and stacking data denoising with noise-resistance training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "www.kaggle.com/PromptCloudHQ/flipkart-products",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.kaggle.com/lakritidis/product-classification-andcategorization",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Macro F1 score is a fair evaluation metric for imbalanced data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Pretrained model all-MiniLM-L12-v2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For efficiency, our actual implementation only update a sample when its input label is different from predicted label. This condition is ignored in pseudo code for simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We run pre-trained model reported in column Base on training data to collect prediction outputs as described in Algorithm 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Effective Products Categorization with Importance Scores and Morphological Analysis of the Titles",
"authors": [
{
"first": "Leonidas",
"middle": [],
"last": "Akritidis",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)",
"volume": "",
"issue": "",
"pages": "213--220",
"other_ids": {
"DOI": [
"10.1109/ICTAI.2018.00041"
]
},
"num": null,
"urls": [],
"raw_text": "Leonidas Akritidis, Athanasios Fevgas, and Panayiotis Bozanis. 2018. Effective Products Categorization with Importance Scores and Morphological Analysis of the Titles. In 2018 IEEE 30th International Con- ference on Tools with Artificial Intelligence (ICTAI), pages 213-220.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Panayiotis Bozanis, and Christos Makris. 2020. A self-verifying clustering approach to unsupervised matching of product titles",
"authors": [
{
"first": "Leonidas",
"middle": [],
"last": "Akritidis",
"suffix": ""
},
{
"first": "Athanasios",
"middle": [],
"last": "Fevgas",
"suffix": ""
}
],
"year": null,
"venue": "Artificial Intelligence Review",
"volume": "",
"issue": "",
"pages": "1--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonidas Akritidis, Athanasios Fevgas, Panayiotis Boza- nis, and Christos Makris. 2020. A self-verifying clus- tering approach to unsupervised matching of product titles. Artificial Intelligence Review, pages 1-44.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Closer Look at Memorization in Deep Networks",
"authors": [
{
"first": "Devansh",
"middle": [],
"last": "Arpit",
"suffix": ""
},
{
"first": "Stanis\\law",
"middle": [],
"last": "Jastrzundefinedbski",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Ballas",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Krueger",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Maxinder",
"middle": [
"S"
],
"last": "Kanwal",
"suffix": ""
},
{
"first": "Tegan",
"middle": [],
"last": "Maharaj",
"suffix": ""
},
{
"first": "Asja",
"middle": [],
"last": "Fischer",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Lacoste-Julien",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "233--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devansh Arpit, Stanis\\law Jastrzundefinedbski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxin- der S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon Lacoste-Julien. 2017. A Closer Look at Memorization in Deep Net- works. In Proceedings of the 34th International Con- ference on Machine Learning -Volume 70, ICML'17, pages 233-242. JMLR.org. Event-place: Sydney, NSW, Australia.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Improving hierarchical product classification using domain-specific language modelling",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Brinkmann",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of Workshop on Knowledge Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Brinkmann and Christian Bizer. 2021. Im- proving hierarchical product classification using domain-specific language modelling. In Proceed- ings of Workshop on Knowledge Management in e- Commerce.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multimodal Item Categorization Fully Based on Transformer",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Houwei",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Yandi",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Hirokazu",
"middle": [],
"last": "Miyake",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of The 4th Workshop on e-Commerce and NLP",
"volume": "",
"issue": "",
"pages": "111--115",
"other_ids": {
"DOI": [
"10.18653/v1/2021.ecnlp-1.13"
]
},
"num": null,
"urls": [],
"raw_text": "Lei Chen, Houwei Chou, Yandi Xia, and Hirokazu Miyake. 2021a. Multimodal Item Categorization Fully Based on Transformer. In Proceedings of The 4th Workshop on e-Commerce and NLP, pages 111- 115, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Beyond Class-Conditional Assumption: A Primary Attempt to Combat Instance-Dependent Label Noise",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Guangyong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jingwei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Pheng-Ann",
"middle": [],
"last": "Heng",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengfei Chen, Junjie Ye, Guangyong Chen, Jingwei Zhao, and Pheng-Ann Heng. 2021b. Beyond Class- Conditional Assumption: A Primary Attempt to Com- bat Instance-Dependent Label Noise. In Proceedings of the AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Large-scale taxonomy categorization for noisy product listings",
"authors": [
{
"first": "Pradipto",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Yandi",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Levine",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [
"Di"
],
"last": "Fabbrizio",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Datta",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE International Conference on Big Data (Big Data)",
"volume": "",
"issue": "",
"pages": "3885--3894",
"other_ids": {
"DOI": [
"10.1109/BigData.2016.7841063"
]
},
"num": null,
"urls": [],
"raw_text": "Pradipto Das, Yandi Xia, Aaron Levine, Giuseppe Di Fabbrizio, and Ankur Datta. 2016. Large-scale taxonomy categorization for noisy product listings. In 2016 IEEE International Conference on Big Data (Big Data), pages 3885-3894.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Retail Product Categorisation Dataset",
"authors": [
{
"first": "Febin",
"middle": [],
"last": "Sebastian Elayanithottathil",
"suffix": ""
},
{
"first": "Janis",
"middle": [],
"last": "Keuper",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Febin Sebastian Elayanithottathil and Janis Keuper. 2021. A Retail Product Categorisation Dataset. _eprint: 2103.13864.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep Hierarchical Classification for Category Prediction in E-commerce System",
"authors": [
{
"first": "Dehong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Wenjing",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Huiling",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dehong Gao, Wenjing Yang, Huiling Zhou, Yi Wei, Y. Hu, and H. Wang. 2020. Deep Hierarchical Clas- sification for Category Prediction in E-commerce System. ArXiv, abs/2005.06692.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Towards Robustness to Label Noise in Text Classification via Noise Modeling",
"authors": [
{
"first": "Siddhant",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Goutham",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Thumbe",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 30th ACM International Conference on Information & Knowledge Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddhant Garg, Goutham Ramakrishnan, and Varun Thumbe. 2021. Towards Robustness to Label Noise in Text Classification via Noise Modeling. Proceed- ings of the 30th ACM International Conference on Information & Knowledge Management.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Jack Nikodem, and Dong Yin. 2021. A Realistic Simulation Framework for Learning with Label Noise",
"authors": [
{
"first": "Keren",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Xander",
"middle": [],
"last": "Masotto",
"suffix": ""
},
{
"first": "Vandana",
"middle": [],
"last": "Bachani",
"suffix": ""
},
{
"first": "Balaji",
"middle": [],
"last": "Lakshminarayanan",
"suffix": ""
}
],
"year": null,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keren Gu, Xander Masotto, Vandana Bachani, Balaji Lakshminarayanan, Jack Nikodem, and Dong Yin. 2021. A Realistic Simulation Framework for Learn- ing with Label Noise. ArXiv, abs/2107.11413.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Quanming",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Xingrui",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Miao",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ivor",
"middle": [],
"last": "Tsang",
"suffix": ""
},
{
"first": "Masashi",
"middle": [],
"last": "Sugiyama",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. 2018. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in neural information processing systems, 31.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural Comput., 9(8):1735- 1780. Place: Cambridge, MA, USA Publisher: MIT Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Zhengyuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2018,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2018. MentorNet: Learning Data- Driven Curriculum for Very Deep Neural Networks on Corrupted Labels. In ICML.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning deep networks from noisy labels with dropout regularization",
"authors": [
{
"first": "Ishan",
"middle": [],
"last": "Jindal",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Nokleby",
"suffix": ""
},
{
"first": "Xuewen",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE 16th International Conference on",
"volume": "",
"issue": "",
"pages": "967--972",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ishan Jindal, Matthew Nokleby, and Xuewen Chen. 2016. Learning deep networks from noisy labels with dropout regularization. In Data Mining (ICDM), 2016 IEEE 16th International Conference on, pages 967-972. IEEE.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An Effective Label Noise Model for DNN Text Classification",
"authors": [
{
"first": "Ishan",
"middle": [],
"last": "Jindal",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Pressel",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Lester",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Nokleby",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3246--3256",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1328"
]
},
"num": null,
"urls": [],
"raw_text": "Ishan Jindal, Daniel Pressel, Brian Lester, and Matthew Nokleby. 2019. An Effective Label Noise Model for DNN Text Classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3246-3256, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "BERT meets Shapley: Extending SHAP Explanations to Transformer-based Classifiers",
"authors": [
{
"first": "Enja",
"middle": [],
"last": "Kokalj",
"suffix": ""
},
{
"first": "Bla\u017e",
"middle": [],
"last": "\u0160krlj",
"suffix": ""
},
{
"first": "Nada",
"middle": [],
"last": "Lavra\u010d",
"suffix": ""
},
{
"first": "Senja",
"middle": [],
"last": "Pollak",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Robnik-\u0160ikonja",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation",
"volume": "",
"issue": "",
"pages": "16--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enja Kokalj, Bla\u017e \u0160krlj, Nada Lavra\u010d, Senja Pollak, and Marko Robnik-\u0160ikonja. 2021. BERT meets Shapley: Extending SHAP Explanations to Transformer-based Classifiers. In Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation, pages 16-21, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Unified Approach to Interpreting Model Predictions",
"authors": [
{
"first": "M",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Su-In",
"middle": [],
"last": "Lundberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17",
"volume": "",
"issue": "",
"pages": "4768--4777",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M. Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Pro- ceedings of the 31st International Conference on Neu- ral Information Processing Systems, NIPS'17, pages 4768-4777, Red Hook, NY, USA. Curran Associates Inc. Event-place: Long Beach, California, USA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1064--1074",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end Se- quence Labeling via Bi-directional LSTM-CNNs- CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1064-1074, Berlin, Ger- many. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Decoupling \"when to update\" from \"how to update",
"authors": [
{
"first": "Eran",
"middle": [],
"last": "Malach",
"suffix": ""
},
{
"first": "Shai",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eran Malach and Shai Shalev-Shwartz. 2017. Decou- pling \"when to update\" from \"how to update\". In NIPS.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic differentiation in PyTorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS-W",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS- W.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Making deep neural networks robust to label noise: A loss correction approach",
"authors": [
{
"first": "Giorgio",
"middle": [],
"last": "Patrini",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Rozza",
"suffix": ""
},
{
"first": "Aditya",
"middle": [
"Krishna"
],
"last": "Menon",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Nock",
"suffix": ""
},
{
"first": "Lizhen",
"middle": [],
"last": "Qu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "1944--1952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giorgio Patrini, Alessandro Rozza, Aditya Kr- ishna Menon, Richard Nock, and Lizhen Qu. 2017. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, pages 1944-1952.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The WDC Training Dataset and Gold Standard for Large-Scale Product Matching",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Primpeli",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Peeters",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
}
],
"year": 2019,
"venue": "Companion Proceedings of The 2019 World Wide Web Conference, WWW '19",
"volume": "",
"issue": "",
"pages": "381--386",
"other_ids": {
"DOI": [
"10.1145/3308560.3316609"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Primpeli, Ralph Peeters, and Christian Bizer. 2019. The WDC Training Dataset and Gold Standard for Large-Scale Product Matching. In Companion Pro- ceedings of The 2019 World Wide Web Conference, WWW '19, pages 381-386, New York, NY, USA. Association for Computing Machinery. Event-place: San Francisco, USA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence Embeddings using Siamese BERT- Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Why Should I Trust You?\": Explaining the Predictions of Any Classifier",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Marco Tulio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why Should I Trust You?\": Ex- plaining the Predictions of Any Classifier. In Pro- ceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Min- ing, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Large-Scale Item Categorization for e-Commerce",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jean-David",
"middle": [],
"last": "Ruvini",
"suffix": ""
},
{
"first": "Badrul",
"middle": [],
"last": "Sarwar",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12",
"volume": "",
"issue": "",
"pages": "595--604",
"other_ids": {
"DOI": [
"10.1145/2396761.2396838"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Shen, Jean-David Ruvini, and Badrul Sarwar. 2012. Large-Scale Item Categorization for e-Commerce. In Proceedings of the 21st ACM International Confer- ence on Information and Knowledge Management, CIKM '12, pages 595-604, New York, NY, USA. Association for Computing Machinery. Event-place: Maui, Hawaii, USA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning from Noisy Labels with Deep Neural Networks: A Survey",
"authors": [
{
"first": "Hwanjun",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Minseok",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Dongmin",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Yooju",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Jae-Gil",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2022,
"venue": "IEEE Transactions on Neural Networks and Learning Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. 2022. Learning from Noisy Labels with Deep Neural Networks: A Survey. IEEE Transactions on Neural Networks and Learning Sys- tems.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "How does disagreement help generalization against label corruption?",
"authors": [
{
"first": "Xingrui",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jiangchao",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Ivor",
"middle": [],
"last": "Tsang",
"suffix": ""
},
{
"first": "Masashi",
"middle": [],
"last": "Sugiyama",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "7164--7173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Tsang, and Masashi Sugiyama. 2019. How does disagreement help generalization against label cor- ruption? In International Conference on Machine Learning, pages 7164-7173. PMLR.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Understanding Deep Learning (Still) Requires Rethinking Generalization",
"authors": [
{
"first": "Chiyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Moritz",
"middle": [],
"last": "Hardt",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Recht",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2021,
"venue": "Publisher: Association for Computing Machinery",
"volume": "64",
"issue": "",
"pages": "107--115",
"other_ids": {
"DOI": [
"10.1145/3446776"
]
},
"num": null,
"urls": [],
"raw_text": "Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2021a. Understanding Deep Learning (Still) Requires Rethinking General- ization. Commun. ACM, 64(3):107-115. Place: New York, NY, USA Publisher: Association for Comput- ing Machinery.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning with Feature-Dependent Label Noise: A Progressive Approach",
"authors": [
{
"first": "Yikai",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Songzhu",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Pengxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Mayank Goswami, and Chao Chen. 2021b. Learning with Feature-Dependent Label Noise: A Progressive Ap- proach. In ICLR.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "LSTM-CNNs architecture for product title classifier",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "3: Assign predicted labels to samples where predictions are confident: InputLabel \u2190 L o if P o \u2265 0.8 4: Assign predicted labels to samples where predictions are certain: InputLabel \u2190 L o if L o = L d 5: Remove samples where predictions are neither certain nor confident: L o = L d and P o \u2264 0.8 and P d \u2264 0.8",
"uris": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Dataset</td><td>#cls</td><td>#train</td><td>#test</td><td>KL</td></tr><tr><td>Flipkart</td><td>28</td><td>17,682</td><td colspan=\"2\">1,984 1.04</td></tr><tr><td>WDC</td><td>24</td><td>21,225</td><td colspan=\"2\">2,372 0.34</td></tr><tr><td>Retail</td><td>21</td><td>41,586</td><td colspan=\"2\">4,642 0.00</td></tr><tr><td>Pricerunner</td><td>10</td><td>31,773</td><td colspan=\"2\">3,538 0.03</td></tr><tr><td colspan=\"5\">Shopmania 147 282,095 31,437 1.49</td></tr><tr><td>Skroutz</td><td colspan=\"4\">12 214,346 23,824 1.10</td></tr><tr><td colspan=\"5\">\u2022 Retail dataset has 46,228 training samples</td></tr><tr><td colspan=\"5\">with item titles, descriptions, images and</td></tr><tr><td colspan=\"5\">category labels placed into 21 categories</td></tr><tr><td colspan=\"5\">(Elayanithottathil and Keuper, 2021). We do</td></tr><tr><td colspan=\"5\">not use their test data which does not have</td></tr><tr><td colspan=\"2\">category labels.</td><td/><td/><td/></tr></table>",
"text": "Summary of product title datasets"
},
"TABREF2": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">LSTM-CNNs BERT-base</td></tr><tr><td>Flipkart</td><td>0.89</td><td>0.90</td></tr><tr><td>WDC</td><td>0.92</td><td>0.92</td></tr><tr><td>Retail</td><td>0.82</td><td>0.82</td></tr><tr><td>Pricerunner</td><td>0.96</td><td>0.98</td></tr><tr><td>Shopmania</td><td>0.83</td><td>0.87</td></tr><tr><td>Skroutz</td><td>0.96</td><td>0.98</td></tr><tr><td colspan=\"3\">in PyTorch (Paszke et al., 2017) and trained us-</td></tr><tr><td colspan=\"3\">ing Adam optimizer with Cross-entropy loss. For</td></tr><tr><td colspan=\"3\">experiments with different datasets, we use the</td></tr><tr><td colspan=\"3\">same set of hyper-parameters: Glove embedding</td></tr><tr><td colspan=\"3\">42B.300d, LSTM hidden size 100, character em-</td></tr><tr><td colspan=\"3\">bedding size 25 with 3 convolution heads of filter</td></tr><tr><td colspan=\"3\">sizes 2, 3, 4, learning rate 5e-4, clip gradient norm</td></tr><tr><td colspan=\"3\">greater than 5.0. Models are trained for 10 epoch</td></tr><tr><td colspan=\"2\">with batch size 16.</td><td/></tr></table>",
"text": "Models' macro F1 scores on product title data"
},
"TABREF3": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>Noise rate 0.2</td><td/><td>Noise rate 0.4</td><td/></tr><tr><td>Dataset</td><td colspan=\"4\">Noise reduction Data reduction Noise reduction Data reduction</td></tr><tr><td>Flipkart</td><td>36%</td><td>4%</td><td>29%</td><td>11%</td></tr><tr><td>WDC</td><td>28%</td><td>3%</td><td>21%</td><td>8%</td></tr><tr><td>Retail</td><td>26%</td><td>8%</td><td>30%</td><td>17%</td></tr><tr><td>Pricerunner</td><td>48%</td><td>3%</td><td>43%</td><td>11%</td></tr><tr><td>Shopmania</td><td>50%</td><td>7%</td><td>43%</td><td>12%</td></tr><tr><td>Skroutz</td><td>44%</td><td>6%</td><td>33%</td><td>7%</td></tr></table>",
"text": "Average reduction of noise rate and data size after denoising"
},
"TABREF4": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"3\">Noise rate 0.2</td><td/><td/><td/><td colspan=\"2\">Noise rate 0.4</td></tr><tr><td colspan=\"10\">Last-epoch IDN Base DeN SEAL PLC CTp Base DeN SEAL PLC 0.74 0.82 0.81 0.78 0.81 0.55 0.67 0.62 0.69 0.86 0.86 0.88 0.87 0.88 0.68 0.71 0.71 0.73 0.72 0.76 0.78 0.78 0.78 0.59 0.66 0.70 0.66 Pricerunner 0.89 0.94 Dataset Flipkart WDC Retail 0.94 0.93 0.94 0.71 0.87 0.90 0.79 Shopmania 0.74 0.71 0.73 0.76 0.68 0.59 0.62 0.62 0.63 Skroutz 0.90 0.94 0.94 0.93 0.95 0.77 0.86 0.86 0.78</td><td>CTp 0.66 0.77 0.71 0.91 0.56 0.92</td></tr><tr><td>API</td><td>-</td><td colspan=\"4\">3.7% 4.8% 4.2% 3.8%</td><td>-</td><td colspan=\"3\">12.9% 15.3% 8.5%</td><td>16%</td></tr><tr><td colspan=\"3\">Flipkart WDC Retail Pricerunner 0.91 0.91 0.73 0.73 0.81 0.82 0.79 0.80 Shopmania 0.76 0.75 Skroutz 0.95 0.95</td><td>0.74 0.83 0.79 0.92 0.76 0.95</td><td>0.75 0.83 0.79 0.92 0.77 0.95</td><td colspan=\"3\">Multi-epoch IDN 0.75 0.61 0.59 0.82 0.65 0.66 0.80 0.73 0.73 0.92 0.80 0.82 0.67 0.63 0.65 0.95 0.88 0.90</td><td>0.64 0.66 0.76 0.84 0.65 0.90</td><td>0.62 0.65 0.74 0.82 0.62 0.88</td><td>0.63 0.68 0.76 0.85 0.57 0.90</td></tr><tr><td>API</td><td>-</td><td colspan=\"4\">0.2% 0.8% 1.3% -0.9%</td><td>-</td><td>1%</td><td>3.5%</td><td>0.6%</td><td>1.8%</td></tr><tr><td colspan=\"3\">Flipkart WDC Retail Pricerunner 0.90 0.91 0.72 0.74 0.82 0.83 0.78 0.79 Shopmania 0.76 0.75 Skroutz 0.95 0.95</td><td>0.75 0.83 0.80 0.92 0.78 0.95</td><td>0.74 0.82 0.79 0.91 0.77 0.95</td><td colspan=\"3\">Multi-model IDN 0.75 0.57 0.61 0.83 0.65 0.65 0.79 0.70 0.73 0.92 0.80 0.81 0.68 0.66 0.65 0.95 0.90 0.92</td><td>0.64 0.67 0.76 0.84 0.66 0.91</td><td>0.61 0.66 0.73 0.81 0.64 0.91</td><td>0.63 0.67 0.74 0.84 0.57 0.92</td></tr><tr><td>API</td><td>-</td><td colspan=\"2\">0.8% 2.1%</td><td colspan=\"2\">1% -0.2%</td><td>-</td><td>2.2%</td><td>5%</td><td>2%</td><td>2.1%</td></tr><tr><td colspan=\"3\">Flipkart WDC Retail Pricerunner 0.86 0.91 0.73 0.76 0.73 0.74 0.69 0.75 Shopmania 0.70 0.70 Skroutz 0.84 0.89</td><td>0.76 0.75 0.77 0.93 0.71 0.85</td><td>0.77 0.75 0.76 0.92 0.73 0.84</td><td colspan=\"3\">Similarity-based IDN 0.78 0.55 0.58 0.76 0.58 0.58 0.77 0.57 0.66 0.93 0.72 0.83 0.65 0.57 0.59 0.88 0.68 0.76</td><td>0.61 0.59 0.72 0.85 0.57 0.72</td><td>0.65 0.59 0.70 0.82 0.59 0.69</td><td>0.67 0.60 0.72 0.86 0.50 0.76</td></tr><tr><td>API</td><td>-</td><td colspan=\"4\">4.3% 4.8% 4.9% 4.7%</td><td>-</td><td colspan=\"3\">8.6% 10.4% 10.2% 11.7%</td></tr></table>",
"text": "Models' macro F1 scores on product title data with noisy labels. Highest scores are bold. API shows average performance improvement compared to base classifier."
},
"TABREF5": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Noise rate 0.2</td><td>Noise rate 0.4</td></tr><tr><td colspan=\"2\">Dataset Flipkart WDC Retail Pricerunner 0.89 Base DeN SEAL PLC CTp Base 0.73 0.762 0.765 0.76 0.772 0.57 0.805 0.812 0.822 0.817 0.822 0.64 0.745 0.775 0.785 0.78 0.785 0.6475 0.695 DeN 0.6125 0.645 SEAL PLC CTp 0.625 0.647 0.65 0.6575 0.657 0.68 0.707 0.732 0.735 0.917 0.927 0.92 0.927 0.757 0.832 0.857 0.81 0.865 Shopmania 0.74 0.727 0.745 0.757 0.67 0.612 0.625 0.62 0.55 0.627 Skroutz 0.91 0.932 0.9225 0.917 0.932 0.807 0.86 0.8475 0.815 0.875</td></tr></table>",
"text": "Models' macro F1 scores averaged over different noise stimulations. Highest scores are bold."
},
"TABREF6": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">Base CTp API (%)</td></tr><tr><td>Flipkart</td><td>0.41 0.45</td><td>10%</td></tr><tr><td>WDC</td><td>0.42 0.46</td><td>9%</td></tr><tr><td>Retail</td><td>0.52 0.60</td><td>15%</td></tr><tr><td colspan=\"2\">Pricerunner 0.51 0.54</td><td>6%</td></tr><tr><td colspan=\"2\">Shopmania 0.42 0.42</td><td>0%</td></tr><tr><td>Skroutz</td><td>0.58 0.64</td><td>10%</td></tr><tr><td colspan=\"3\">ber of classes and the most imbalanced distribution.</td></tr></table>",
"text": "Models' macro F1 scores on product title data with noise rate 0.6. Scores are averaged over IDN methods."
}
}
}
}