{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:33:24.773992Z" }, "title": "Extreme Multi-Label Classification with Label Masking for Product Attribute Value Extraction", "authors": [ { "first": "Wei-Te", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rakuten Group Inc", "location": {} }, "email": "weite.chen@rakuten.com" }, { "first": "Yandi", "middle": [], "last": "Xia", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rakuten Group Inc", "location": {} }, "email": "yandi.xia@rakuten.com" }, { "first": "Keiji", "middle": [], "last": "Shinzato", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rakuten Group Inc", "location": {} }, "email": "keiji.shinzato@rakuten.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Although most studies have treated attribute value extraction (AVE) as named entity recognition, these approaches are not practical in real-world e-commerce platforms because they perform poorly, and require canonicalization of extracted values. Furthermore, since values needed for actual services is static in many attributes, extraction of new values is not always necessary. Given the above, we formalize AVE as extreme multi-label classification (XMC). A major problem in solving AVE as XMC is that the distribution between positive and negative labels for products is heavily imbalanced. To mitigate the negative impact derived from such biased distribution, we propose label masking, a simple and effective method to reduce the number of negative labels in training. We exploit attribute taxonomy designed for ecommerce platforms to determine which labels are negative for products. Experimental results using a dataset collected from a Japanese ecommerce platform demonstrate that the label masking improves micro and macro F 1 scores by 3.38 and 23.20 points, respectively.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Although most studies have treated attribute value extraction (AVE) as named entity recognition, these approaches are not practical in real-world e-commerce platforms because they perform poorly, and require canonicalization of extracted values. Furthermore, since values needed for actual services is static in many attributes, extraction of new values is not always necessary. Given the above, we formalize AVE as extreme multi-label classification (XMC). A major problem in solving AVE as XMC is that the distribution between positive and negative labels for products is heavily imbalanced. To mitigate the negative impact derived from such biased distribution, we propose label masking, a simple and effective method to reduce the number of negative labels in training. We exploit attribute taxonomy designed for ecommerce platforms to determine which labels are negative for products. Experimental results using a dataset collected from a Japanese ecommerce platform demonstrate that the label masking improves micro and macro F 1 scores by 3.38 and 23.20 points, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Since organized product data plays a crucial role in serving better product search and recommendation to customers, attribute value extraction (AVE) has become a critical task in the e-commerce industry. 
Although many studies have treated AVE as a named entity recognition (NER) task ( \u00a7 2.1), NER-based approaches are not practical in real-world e-commerce platforms. First, NER-based methods perform poorly because the number of attributes (classes) in e-commerce domains is extremely large (Xu et al., 2019). Second, it is necessary to take a further step to normalize extracted values (e.g., coral to pink). To reflect extracted values in actual services, e-commerce platform providers need to convert the values into a canonical form by referring to their own attribute taxonomy, which covers the attributes and values for the services. Third, for many attributes (e.g., country of origin), extraction of new values is not necessary. Since it is rare for new values of attributes other than brands to be introduced to the world, it is sufficient to extract the values defined in the attribute taxonomy. Given the above reasons, we formalize AVE as extreme multi-label classification (XMC), and design a model that directly predicts possible canonical attribute-value pairs, except for brands 1 , from given product data. The main problem in solving AVE as XMC is that the number of attribute-value pairs relevant to a product is far smaller than the number of irrelevant pairs; the majority of attribute-value pairs are irrelevant (e.g., \u27e8Memory size, 512GB\u27e9 for sneakers). To tackle this problem, we propose label masking, which mitigates the negative effects of the large number of irrelevant pairs in training (Figure 1, \u00a7 4.2). We detect the irrelevant pairs by referring to an attribute taxonomy ( \u00a7 3) associated with the real-world dataset we use to train and evaluate models. Through experiments using the dataset, we confirm that our label masking method improves micro and macro F1 scores by 3.38 and 23.20 points, respectively.", "cite_spans": [ { "start": 490, "end": 507, "text": "(Xu et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions can be summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We formalize AVE as an XMC problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose label masking, a simple and effective method to alleviate the negative impact of irrelevant attribute-value pairs in training ( \u00a7 4.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show the effectiveness of label masking using a real-world dataset. It performs especially well on attribute-value pairs in the long tail ( \u00a7 5.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are many attempts based on NER techniques to extract attribute values from product descriptions (Probst et al., 2007; Wong et al., 2008; Putthividhya and Hu, 2011; Bing et al., 2012; Shinzato and Sekine, 2013; More, 2016; Zheng et al., 2018; Rezk et al., 2019; Karamanolakis et al., 2020; Zhang et al., 2020). As Xu et al. 
(2019) reported, NER-based models perform poorly on real-world datasets containing ten thousand or more attributes.", "cite_spans": [ { "start": 102, "end": 123, "text": "(Probst et al., 2007;", "ref_id": "BIBREF11" }, { "start": 124, "end": 142, "text": "Wong et al., 2008;", "ref_id": "BIBREF17" }, { "start": 143, "end": 169, "text": "Putthividhya and Hu, 2011;", "ref_id": "BIBREF12" }, { "start": 170, "end": 188, "text": "Bing et al., 2012;", "ref_id": "BIBREF0" }, { "start": 189, "end": 215, "text": "Shinzato and Sekine, 2013;", "ref_id": "BIBREF14" }, { "start": 216, "end": 227, "text": "More, 2016;", "ref_id": "BIBREF10" }, { "start": 228, "end": 247, "text": "Zheng et al., 2018;", "ref_id": "BIBREF23" }, { "start": 248, "end": 266, "text": "Rezk et al., 2019;", "ref_id": "BIBREF13" }, { "start": 267, "end": 294, "text": "Karamanolakis et al., 2020;", "ref_id": "BIBREF5" }, { "start": 295, "end": 314, "text": "Zhang et al., 2020)", "ref_id": "BIBREF21" }, { "start": 320, "end": 336, "text": "Xu et al. (2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work 2.1 Attribute Value Extraction", "sec_num": "2" }, { "text": "To deal with a large number of attributes, there is research that introduces question-answering (QA) models for the AVE task (Xu et al., 2019; Shinzato et al., 2022). These QA-based approaches take an attribute as the query and a product title as the context, and extract attribute values from the context as the answer to the query. Since these models take attributes as input, it is necessary to run the extraction repeatedly on the same product title with different attributes. Hence, QA-based approaches are more time-consuming than XMC-based approaches, which can predict values for multiple attributes at a time.", "cite_spans": [ { "start": 125, "end": 142, "text": "(Xu et al., 2019;", "ref_id": "BIBREF19" }, { "start": 143, "end": 165, "text": "Shinzato et al., 2022)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work 2.1 Attribute Value Extraction", "sec_num": "2" }, { "text": "To reduce the large output space, previous XMC studies perform label clustering as a separate stage from training classifiers (Wydmuch et al., 2018; You et al., 2019; Chang et al., 2020; Zhang et al., 2021; Jiang et al., 2021; Mittal et al., 2021a,b). For example, XR-Transformer (Zhang et al., 2021) first vectorizes each label with a combination of TF-IDF features and embeddings of the text associated with the label. Then, it applies balanced k-means (Malinen and Fr\u00e4nti, 2014) to these label vectors to generate a hierarchical label cluster tree by recursively partitioning label sets. Instead of k-means, Mittal et al. (2021a) and Mittal et al. (2021b) exploit label features and label graph correlations, respectively. These approaches then train a classifier per cluster that predicts whether a given text is relevant to the labels in the cluster.", "cite_spans": [ { "start": 126, "end": 148, "text": "(Wydmuch et al., 2018;", "ref_id": "BIBREF18" }, { "start": 149, "end": 166, "text": "You et al., 2019;", "ref_id": "BIBREF20" }, { "start": 167, "end": 186, "text": "Chang et al., 2020;", "ref_id": "BIBREF1" }, { "start": 187, "end": 206, "text": "Zhang et al., 2021;", "ref_id": "BIBREF22" }, { "start": 207, "end": 226, "text": "Jiang et al., 2021;", "ref_id": "BIBREF4" }, { "start": 227, "end": 250, "text": "Mittal et al., 2021a,b)", "ref_id": null }, { "start": 281, "end": 301, "text": "(Zhang et al., 2021)", "ref_id": "BIBREF22" }, { "start": 597, "end": 618, "text": "Mittal et al. (2021a)", "ref_id": "BIBREF8" }, { "start": 623, "end": 644, "text": "Mittal et al. (2021b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Extreme Multi-Label Classification", "sec_num": "2.2" }, { "text": "On the other hand, in real-world e-commerce platforms, an attribute taxonomy is available. This can be regarded as label clusters manually tailored by the e-commerce platform providers. Therefore, we simply leverage the existing attribute taxonomy to reduce the size of the label set in training through label masking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extreme Multi-Label Classification", "sec_num": "2.2" }, { "text": "We assume that, for each category, the attribute taxonomy defines all possible attribute-value pairs that products in the category can take. General attribute-value pairs (e.g., \u27e8Color, Red\u27e9) are defined for multiple categories. Table 1 shows an example of the attributes and values defined for the category of sneakers. By referring to the attribute taxonomy, it is possible to determine which attributes and values are relevant or irrelevant to which category of products. For example, from the table, we can see that a memory size of 512GB is irrelevant to sneakers.", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 231, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Attribute Taxonomy", "sec_num": "3" }, 
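{ "text": "To make the taxonomy lookup concrete, the following is a minimal Python sketch; the TAXONOMY dictionary and the relevant_labels function are our own illustrative stand-ins for the platform's taxonomy, populated from Table 1.
```python
# Minimal sketch of an attribute taxonomy: a mapping from category to the
# set of attribute-value pairs that products in that category can take.
# The entries are illustrative, following Table 1.
TAXONOMY = {
    'Sneakers': {('Color', 'Red'), ('Color', 'Blue'),
                 ('Material', 'Leather'), ('Material', 'Canvas'),
                 ('Shoe size', 'US 4'), ('Shoe size', 'US 4.5')},
    'Sandals': {('Color', 'Red')},
}

def relevant_labels(category):
    # All attribute-value pairs the taxonomy defines for this category.
    return TAXONOMY.get(category, set())

# A memory size of 512GB is irrelevant to sneakers.
assert ('Memory size', '512GB') not in relevant_labels('Sneakers')
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attribute Taxonomy", "sec_num": null }, 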
(2021a)", "ref_id": "BIBREF8" }, { "start": 623, "end": 644, "text": "Mittal et al. (2021b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Extreme Multi-Label Classification", "sec_num": "2.2" }, { "text": "On the other hand, in real-world e-commerce platforms, an attribute taxonomy is available. This can be regarded as label clusters manually tailored by the e-commerce platform providers. Therefore, we simply leverage the existing attribute taxonomy to reduce the size of labels in training through label masking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extreme Multi-Label Classification", "sec_num": "2.2" }, { "text": "We assume that for each category, attribute taxonomy defines all possible attribute-value pairs that products in the category can take. General attribute-value pairs (e.g., \u27e8Color, Red\u27e9) are defined for multiple categories. Table 1 shows an example of attributes and values defined for the category of sneakers. By referring to the attribute taxonomy, it is possible to determine which attributes and values are relevant or irrelevant to which category of products. For example, from the table, we can see that 512GB of memory size is irrelevant to sneakers.", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 231, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Attribute Taxonomy", "sec_num": "3" }, { "text": "This section proposes our model based on XMC with label masking for the AVE task. Given a product data x = \u27e8c, t, d\u27e9, where c denotes a category, t denotes a title consisting of n tokens ({t 1 ,t 2 ,. . . ,t n }) and d denotes a description consisting of m tokens ({d 1 ,d 2 ,. . . ,d m }), respectively, the model returns a set of attribute-value pairs that should be linked with the product data x. Figure 1 depicts the model architecture. As a backbone of the architecture, we employ a pretrained BERT-base model (Devlin et al., 2019) , and put a feed forward layer on the top of BERT. As an input to BERT, we construct a string [CLS; t; SEP; d] by concatenating t, d, CLS and SEP; CLS and SEP are special tokens to represent a classifier token and a separator, respectively. Similar with Jiang et al. 2021, we concatenate the last l hidden representations of the CLS token, and then feed the concatenated vector into a feed forward", "cite_spans": [ { "start": 516, "end": 537, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 401, "end": 409, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Proposed Method", "sec_num": "4" }, { "text": "Description Attribute-value pairs layer as the representation of the input. The size of the outputs from the feed forward layer is equal to the total number of labels (attributevalue pairs). The outputs are converted into probability through a sigmoid layer, and then pass to the label masking. To mask labels irrelevant to the given product data x, we refer an attribute taxonomy built for an e-commerce platform. 
{ "text": "In testing, among the labels relevant to the product data x, we choose those whose probability returned from the model exceeds 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Method", "sec_num": "4" }, { "text": "XMC is a special case of the multi-label classification problem. What makes XMC unique is the size of its target label set; the label size ranges from 4K to 501K in common XMC datasets (Chang et al., 2020).", "cite_spans": [ { "start": 174, "end": 194, "text": "(Chang et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminary: XMC", "sec_num": "4.1" }, { "text": "Formally, XMC can be defined as follows: given a training set $\\{(x^{(i)}, y^{(i)})\\}_{i=1}^{N}$, where $x^{(i)}$ is an instance and $y^{(i)} \\in \\{0, 1\\}^{L}$ is the label of $x^{(i)}$ represented as an $L$-dimensional multi-hot vector, $L$ is the size of the label set, and $y^{(i)}_{j} = 1$ indicates that the $j$-th label is a positive example for $x^{(i)}$. Regular XMC aims to learn a function $\\sigma_{\\theta}(x) \\in (0, 1)^{L}$ that assigns a score in the range $[0.0, 1.0]$ to every label given $x$; the $j$-th score $\\sigma^{j}_{\\theta}(x)$ should be close to 1.0 when $y_{j} = 1$. The ordinary loss function in XMC is BCE: $$\\mathrm{BCE} = -\\sum_{j=1}^{L} \\left( y_{j} \\log \\sigma^{j}_{\\theta}(x) + (1 - y_{j}) \\log (1 - \\sigma^{j}_{\\theta}(x)) \\right)$$ BCE loss sums the log loss over all $L$ labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary: XMC", "sec_num": "4.1" }, 
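{ "text": "As a concrete rendering of this loss, the snippet below computes BCE over all L labels for a toy batch; the sizes and tensors are illustrative only.
```python
import torch
import torch.nn.functional as F

batch, L = 2, 6                # toy sizes; L is the size of the label set
probs = torch.rand(batch, L)   # sigmoid outputs of the model
y = torch.zeros(batch, L)      # multi-hot gold labels
y[0, 1] = y[1, 4] = 1.0

# Sum of the per-label log losses, averaged over the batch. With only a
# few positives, the sum is dominated by the many negative labels.
loss = F.binary_cross_entropy(probs, y, reduction='sum') / batch
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary: XMC", "sec_num": null }, 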
{ "text": "In the AVE task, the number of \"hot\" labels is extremely small compared to the number of labels defined for the task ($L$). This means that the distribution between positive and negative labels is heavily imbalanced. Such a distribution has a negative impact on training classification models because the BCE sums far more loss values from the negative labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label Masking", "sec_num": "4.2" }, { "text": "To alleviate the impact derived from the negative labels, we exploit the attribute taxonomy. Since the majority of the negative labels are irrelevant to the given product data x, we introduce a function $M$ that returns only the labels relevant to x, and rewrite the BCE loss as follows: $$\\mathrm{BCE} = -\\sum_{j \\in M(x)} \\left( y_{j} \\log \\sigma^{j}_{\\theta}(x) + (1 - y_{j}) \\log (1 - \\sigma^{j}_{\\theta}(x)) \\right), \\qquad M(x) = \\{\\, j : 1 \\le j \\le L \\wedge l_{j} \\sim_{\\mathrm{rel}} x \\,\\}$$ where $l_{j} \\sim_{\\mathrm{rel}} x$ means that label $l_{j}$ is relevant to x. By matching the category of x with the categories in the attribute taxonomy, we can obtain all possible attribute-value pairs for x; we regard those pairs as the labels relevant to x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label Masking", "sec_num": "4.2" }, { "text": "By introducing the function $M$, the BCE loss discards the log loss values from the irrelevant labels. Label masking enables us to train XMC models more properly since (1) it reduces the bias in the distribution between positive and negative labels, and (2) the irrelevant labels do not affect the model parameters during back-propagation. This makes the model training more sensitive than normal to misclassification within the relevant labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label Masking", "sec_num": "4.2" }, 
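{ "text": "A minimal PyTorch sketch of the masked loss, together with the test-time decision rule from \u00a7 4, is shown below; the mask tensor (1 for the labels in M(x), 0 otherwise) is assumed to be precomputed from the attribute taxonomy, and the function names are ours.
```python
import torch
import torch.nn.functional as F

def masked_bce(probs, y, mask):
    # BCE restricted to the relevant labels M(x); `mask` is a {0, 1} tensor
    # of shape [batch, L] derived from the attribute taxonomy. Masked-out
    # labels contribute no loss and therefore no gradient.
    per_label = F.binary_cross_entropy(probs, y, reduction='none')
    return (per_label * mask).sum() / probs.size(0)

def predict(probs, mask, threshold=0.5):
    # At test time, keep relevant labels whose probability exceeds 0.5.
    return (probs > threshold) & mask.bool()
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label Masking", "sec_num": null }, 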
{ "text": "We use product data and an attribute taxonomy from Rakuten 2 , a large e-commerce platform in Japan. Each product consists of a tuple of category, title, description, and a set of attribute-value pairs. Rakuten manages the category and attribute taxonomies, and sellers assign each product a category and attribute-value pairs defined in the taxonomies. Figure 2 shows an example of the product data. For our experiments, we randomly sampled, from the product data in Rakuten, 2,000,446 products that have one or more attribute-value pairs other than brands. We halve this dataset into a 50-50 train/evaluation split. We selected the attribute-value pairs that appeared in both datasets 3 , and removed products that did not have any of the selected pairs. Moreover, from the evaluation dataset, we discarded products whose category did not appear in the training dataset. As a result, the training and evaluation datasets contain 1,000,047 and 999,128 products, respectively. Statistics of the dataset are listed in Table 2. We can see that label masking reduces the size of the label set from 7,979 to 489 on average.", "cite_spans": [], "ref_spans": [ { "start": 342, "end": 350, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 985, "end": 992, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "We use precision (P), recall (R), F1 score, and precision at k (P@k, k = 1, 3, 5), which are widely used in XMC tasks. To obtain a top-k list, we regard all prediction results as output, regardless of their scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, 
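{ "text": "As an illustration, the following is a simple P@k implementation consistent with this setup (our own sketch, not the authors' evaluation code).
```python
import torch

def precision_at_k(scores, y, k):
    # P@k: the fraction of gold labels among the k highest-scored labels,
    # averaged over the batch. All labels are ranked regardless of the
    # 0.5 threshold, as described above.
    topk = scores.topk(k, dim=1).indices   # [batch, k] label indices
    hits = y.gather(1, topk)               # 1.0 where a top-k label is gold
    return hits.sum(dim=1).div(k).mean().item()
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, 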
{ "text": "We compare the following models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "XR-Transformer: the XMC model that shows state-of-the-art performance on datasets commonly used in the XMC field (Zhang et al., 2021). We train the model using the code released by the authors 4 with default parameters other than max sequence length and batch size; we set 512 for max sequence length and 64 for batch size.", "cite_spans": [ { "start": 112, "end": 132, "text": "(Zhang et al., 2021)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "BERT: BERT (Devlin et al., 2019) without our label masking. It computes BCE loss from all labels.", "cite_spans": [ { "start": 10, "end": 31, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "BERT with multiple classifiers: a model that simply exploits the given category. We design a classifier (feed-forward) layer for each category, and put them on top of a single BERT; the BERT parameters are therefore shared across all classifiers. According to the category, we switch the classifier in training and testing, and we construct mini-batches so that they contain product data from the same category. As categories, in addition to leaf categories (e.g., Sneakers), we also adopt top categories (e.g., Shoes), because the size of the training data is not sufficient for some minor leaf categories. By taking top categories, we can expect the training data to be enlarged, although this increases the number of labels irrelevant to the leaf categories assigned to products. The total number of top categories is 38, including shoes, food, furniture, and home appliances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "BERT with label masking: our proposed model. It computes BCE loss from only the attribute-value pairs relevant to the category of the given product data. Unlike BERT with multiple classifiers, this model has a single classifier, and the classifier is trained using product data from all categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "For a fair comparison with our model, which assumes that the category of the target product is given, we discard irrelevant labels that the baseline models predict.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "We employ a pretrained Japanese BERT-base model and its tokenizer released by Tohoku University 5 , and use them in all models. We apply NFKC Unicode normalization 6 to titles and descriptions before tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "For models other than XR-Transformer, we use gradient descent with the Adam optimizer (Kingma and Ba, 2015). To avoid overfitting, we apply a dropout rate of 0.1 and stochastic weight averaging (Izmailov et al., 2018) to the models. Table 3 shows the hyper-parameters.", "cite_spans": [ { "start": 254, "end": 277, "text": "(Izmailov et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 293, "end": 300, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "As in our model, as the representation of the input to BERT and BERT with multiple classifiers, we use a vector concatenating the CLS embeddings obtained from the last five encoder layers. We implemented the models in PyTorch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "Table 4 shows the performance of each model. We can observe that our proposed model outperformed all baselines. Micro and macro F1 gains over BERT without label masking are 3.38 and 23.20 points, respectively. The significant improvement in the macro F1 score shows that label masking is effective for various kinds of attribute-value pairs. These results show that reducing the number of irrelevant labels in training is crucial for training more accurate XMC models.", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 233, "text": "Table 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "The reason why the performance of BERT with multiple classifiers trained on leaf categories is lower than ours is that, as mentioned above, the number of training examples for this model is insufficient in many leaf categories. For 5,572 categories, the number of training examples is less than 64. Since the BERT parameters in this model are shared across all categories, this result implies that the classifiers are not well trained. On the other hand, the single classifier in our model is trained successfully because (general) attribute-value pairs scattered across various leaf categories are fully used to train it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "Since the data sparseness problem is alleviated, BERT with multiple classifiers trained on top categories outperforms the model trained on leaf categories. Furthermore, its performance is close to ours. We believe that the performance gap between the model trained on top categories and ours comes from the quality of the association between categories and attributes. In the case of the model trained on top categories, attribute-value pairs defined for different leaf categories in the same top category are handled as relevant labels (e.g., heel height for sneakers). Meanwhile, our model is not affected by such attribute-value pairs. The gap implies that these erroneous relevant pairs hurt the performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "To see the effectiveness of label masking in detail, we categorize attribute-value pairs according to their frequency in the training data, and then check the performance for each frequency group. Table 5 shows the performance of our model in each group, together with the micro and macro F1 gains over BERT. The improvement in micro and macro F1 scores is greater for attribute-value pairs with fewer training examples. This means that label masking works well for attribute-value pairs in the long tail.", "cite_spans": [], "ref_spans": [ { "start": 199, "end": 206, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "In this paper, we formalized AVE as XMC, and proposed label masking, a simple and effective method that mitigates the negative impact of the imbalanced distribution between attribute-value pairs relevant and irrelevant to products. 
Experimental results using a real-world dataset show that label masking improves the performance of BERT-based XMC models; it is especially effective for attributes with less training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "As future work, we plan to examine the effectiveness of the label masking method on other tasks in e-commerce domains, such as item classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "For brands, NER-based methods are necessary to extract new values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.rakuten.co.jp/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The total number of the un-selected attribute-value pairs is 1,289. These pairs appeared 10 times or less in the sampled product data. 4 https://github.com/amzn/pecos", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "5 https://github.com/cl-tohoku/bert-japanese 6 https://unicode.org/reports/tr15/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Naoki Yoshinaga for the fruitful comments before the submission. We also thank the anonymous reviewers for their careful reading of our paper and insightful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised extraction of popular product attributes from web sites", "authors": [ { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" }, { "first": "Tak-Lam", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" } ], "year": 2012, "venue": "Information Retrieval Technology, 8th Asia Information Retrieval Societies Conference", "volume": "2012", "issue": "", "pages": "437--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lidong Bing, Tak-Lam Wong, and Wai Lam. 2012. Un- supervised extraction of popular product attributes from web sites. In Information Retrieval Technology, 8th Asia Information Retrieval Societies Conference, AIRS 2012, volume 7675 of Lecture Notes in Com- puter Science, pages 437-446, Berlin, Heidelberg. Springer.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Taming Pretrained Transformers for Extreme Multi-Label Text Classification", "authors": [ { "first": "Wei-Cheng", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Hsiang-Fu", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Inderjit", "middle": [ "S" ], "last": "Dhillon", "suffix": "" } ], "year": 2020, "venue": "Association for Computing Machinery", "volume": "20", "issue": "", "pages": "3163--3171", "other_ids": { "DOI": [ "10.1145/3394486.3403368" ] }, "num": null, "urls": [], "raw_text": "Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yiming Yang, and Inderjit S. Dhillon. 2020. Taming Pre- trained Transformers for Extreme Multi-Label Text Classification, KDD '20, page 3163-3171. 
Associa- tion for Computing Machinery, New York, NY, USA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 4171-4186, Minneapolis, Minnesota, USA. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Averaging weights leads to wider optima and better generalization", "authors": [ { "first": "Pavel", "middle": [], "last": "Izmailov", "suffix": "" }, { "first": "Dmitrii", "middle": [], "last": "Podoprikhin", "suffix": "" }, { "first": "Timur", "middle": [], "last": "Garipov", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Vetrov", "suffix": "" }, { "first": "Andrew", "middle": [ "Gordon" ], "last": "Wilson", "suffix": "" } ], "year": 2018, "venue": "34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018", "volume": "", "issue": "", "pages": "876--885", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018. Averaging weights leads to wider optima and better generalization. In 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, 34th Confer- ence on Uncertainty in Artificial Intelligence 2018, UAI 2018, pages 876-885. Association For Uncer- tainty in Artificial Intelligence (AUAI).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Lightxml: Transformer with dynamic negative sampling for high-performance extreme multi-label text classification", "authors": [ { "first": "Ting", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Deqing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Leilei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Huayi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhengyang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Fuzhen", "middle": [], "last": "Zhuang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "35", "issue": "", "pages": "7987--7994", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ting Jiang, Deqing Wang, Leilei Sun, Huayi Yang, Zhengyang Zhao, and Fuzhen Zhuang. 2021. Lightxml: Transformer with dynamic negative sam- pling for high-performance extreme multi-label text classification. 
Proceedings of the AAAI Conference on Artificial Intelligence, 35(9):7987-7994.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "TXtract: Taxonomy-aware knowledge extraction for thousands of product categories", "authors": [ { "first": "Giannis", "middle": [], "last": "Karamanolakis", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Xin Luna", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8489--8502", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.751" ] }, "num": null, "urls": [], "raw_text": "Giannis Karamanolakis, Jun Ma, and Xin Luna Dong. 2020. TXtract: Taxonomy-aware knowledge extrac- tion for thousands of product categories. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8489-8502, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the third International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the third International Conference on Learning Representations, San Diego, California, USA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Balanced kmeans for clustering", "authors": [ { "first": "I", "middle": [], "last": "Mikko", "suffix": "" }, { "first": "Pasi", "middle": [], "last": "Malinen", "suffix": "" }, { "first": "", "middle": [], "last": "Fr\u00e4nti", "suffix": "" } ], "year": 2014, "venue": "Structural, Syntactic, and Statistical Pattern Recognition", "volume": "", "issue": "", "pages": "32--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikko I. Malinen and Pasi Fr\u00e4nti. 2014. Balanced k- means for clustering. In Structural, Syntactic, and Statistical Pattern Recognition, pages 32-41, Berlin, Heidelberg. Springer Berlin Heidelberg.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Decaf: Deep extreme classification with label features", "authors": [ { "first": "A", "middle": [], "last": "Mittal", "suffix": "" }, { "first": "K", "middle": [], "last": "Dahiya", "suffix": "" }, { "first": "S", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "D", "middle": [], "last": "Saini", "suffix": "" }, { "first": "S", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "P", "middle": [], "last": "Kar", "suffix": "" }, { "first": "M", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the ACM International Conference on Web Search and Data Mining, WSDM '21", "volume": "", "issue": "", "pages": "49--57", "other_ids": { "DOI": [ "10.1145/3437963.3441807" ] }, "num": null, "urls": [], "raw_text": "A. Mittal, K. Dahiya, S. Agrawal, D. Saini, S. Agarwal, P. Kar, and M. Varma. 2021a. Decaf: Deep extreme classification with label features. In Proceedings of the ACM International Conference on Web Search and Data Mining, WSDM '21, page 49-57, New York, NY, USA. 
Association for Computing Machin- ery.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Eclare: Extreme classification with label graph correlations", "authors": [ { "first": "A", "middle": [], "last": "Mittal", "suffix": "" }, { "first": "N", "middle": [], "last": "Sachdeva", "suffix": "" }, { "first": "S", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "S", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "P", "middle": [], "last": "Kar", "suffix": "" }, { "first": "M", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2021, "venue": "Proceedings of The ACM International World Wide Web Conference, WWW '21", "volume": "", "issue": "", "pages": "3721--3732", "other_ids": { "DOI": [ "10.1145/3442381.3449815" ] }, "num": null, "urls": [], "raw_text": "A. Mittal, N. Sachdeva, S. Agrawal, S. Agarwal, P. Kar, and M. Varma. 2021b. Eclare: Extreme classifica- tion with label graph correlations. In Proceedings of The ACM International World Wide Web Conference, WWW '21, page 3721-3732, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Attribute extraction from product titles in ecommerce", "authors": [ { "first": "Ajinkya", "middle": [], "last": "More", "suffix": "" } ], "year": 2016, "venue": "KDD 2016 Workshop on Enterprise Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ajinkya More. 2016. Attribute extraction from product titles in ecommerce. In KDD 2016 Workshop on Enterprise Intelligence, San Francisco, California, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Semi-supervised learning of attribute-value pairs from product descriptions", "authors": [ { "first": "Katharina", "middle": [], "last": "Probst", "suffix": "" }, { "first": "Rayid", "middle": [], "last": "Ghani", "suffix": "" }, { "first": "Marko", "middle": [], "last": "Krema", "suffix": "" }, { "first": "Andrew", "middle": [ "E" ], "last": "Fano", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI'07", "volume": "", "issue": "", "pages": "2838--2843", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katharina Probst, Rayid Ghani, Marko Krema, An- drew E. Fano, and Yan Liu. 2007. Semi-supervised learning of attribute-value pairs from product de- scriptions. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI'07, pages 2838-2843, Hyderabad, India. Morgan Kauf- mann Publishers Inc.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bootstrapped named entity recognition for product attribute extraction", "authors": [ { "first": "Duangmanee", "middle": [], "last": "Putthividhya", "suffix": "" }, { "first": "Junling", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1557--1567", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duangmanee Putthividhya and Junling Hu. 2011. Boot- strapped named entity recognition for product at- tribute extraction. In Proceedings of the 2011 Con- ference on Empirical Methods in Natural Language Processing, pages 1557-1567, Edinburgh, Scotland, UK. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Accurate product attribute extraction on the field", "authors": [ { "first": "Martin", "middle": [], "last": "Rezk", "suffix": "" }, { "first": "Laura", "middle": [ "Alonso" ], "last": "Alemany", "suffix": "" }, { "first": "Lasguido", "middle": [], "last": "Nio", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 35th IEEE International Conference on Data Engineering", "volume": "", "issue": "", "pages": "1862--1873", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Rezk, Laura Alonso Alemany, Lasguido Nio, and Ted Zhang. 2019. Accurate product attribute extraction on the field. In Proceedings of the 35th IEEE International Conference on Data Engineering, pages 1862-1873, Macau SAR, China. IEEE.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised extraction of attributes and their values from product description", "authors": [ { "first": "Keiji", "middle": [], "last": "Shinzato", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1339--1347", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keiji Shinzato and Satoshi Sekine. 2013. Unsupervised extraction of attributes and their values from product description. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1339-1347, Nagoya, Japan. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Simple and effective knowledgedriven query expansion for QA-based product attribute extraction", "authors": [ { "first": "Keiji", "middle": [], "last": "Shinzato", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Yoshinaga", "suffix": "" }, { "first": "Yandi", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Wei-Te", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2022, "venue": "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keiji Shinzato, Naoki Yoshinaga, Yandi Xia, and Wei- Te Chen. 2022. Simple and effective knowledge- driven query expansion for QA-based product at- tribute extraction. In Proceedings of the 60th An- nual Meeting of the Association for Computational Linguistics. 
(to appear).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning to extract attribute value from product via question answering: A multi-task approach", "authors": [ { "first": "Qifan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Bhargav", "middle": [], "last": "Kanagal", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Sanghai", "suffix": "" }, { "first": "D", "middle": [], "last": "Sivakumar", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Zac", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Elsas", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '20", "volume": "", "issue": "", "pages": "47--55", "other_ids": { "DOI": [ "10.1145/3394486.3403047" ] }, "num": null, "urls": [], "raw_text": "Qifan Wang, Li Yang, Bhargav Kanagal, Sumit Sanghai, D. Sivakumar, Bin Shu, Zac Yu, and Jon Elsas. 2020. Learning to extract attribute value from product via question answering: A multi-task approach. In Pro- ceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Min- ing, KDD '20, pages 47-55, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An unsupervised framework for extracting and normalizing product attributes from multiple web sites", "authors": [ { "first": "Tak-Lam", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" }, { "first": "Tik-Shun", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '08", "volume": "", "issue": "", "pages": "35--42", "other_ids": { "DOI": [ "10.1145/1390334.1390343" ] }, "num": null, "urls": [], "raw_text": "Tak-Lam Wong, Wai Lam, and Tik-Shun Wong. 2008. An unsupervised framework for extracting and nor- malizing product attributes from multiple web sites. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '08, pages 35-42, New York, NY, USA. Association for Computing Machin- ery.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A no-regret generalization of hierarchical softmax to extreme multi-label classification", "authors": [ { "first": "Marek", "middle": [], "last": "Wydmuch", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Jasinska", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Kuznetsov", "suffix": "" }, { "first": "R\u00f3bert", "middle": [], "last": "Busa-Fekete", "suffix": "" }, { "first": "Krzysztof", "middle": [], "last": "Dembczy\u0144ski", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18", "volume": "", "issue": "", "pages": "6358--6368", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marek Wydmuch, Kalina Jasinska, Mikhail Kuznetsov, R\u00f3bert Busa-Fekete, and Krzysztof Dembczy\u0144ski. 2018. A no-regret generalization of hierarchical soft- max to extreme multi-label classification. In Pro- ceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, page 6358-6368, Red Hook, NY, USA. 
Curran Asso- ciates Inc.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title", "authors": [ { "first": "Huimin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenting", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Xinyu", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Man", "middle": [], "last": "Lan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5214--5223", "other_ids": { "DOI": [ "10.18653/v1/P19-1514" ] }, "num": null, "urls": [], "raw_text": "Huimin Xu, Wenting Wang, Xin Mao, Xinyu Jiang, and Man Lan. 2019. Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5214-5223, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Attentionxml: Label tree-based attention-aware deep model for high-performance extreme multi-label text classification", "authors": [ { "first": "Ronghui", "middle": [], "last": "You", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ziye", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Suyang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Mamitsuka", "suffix": "" }, { "first": "Shanfeng", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronghui You, Zihan Zhang, Ziye Wang, Suyang Dai, Hiroshi Mamitsuka, and Shanfeng Zhu. 2019. At- tentionxml: Label tree-based attention-aware deep model for high-performance extreme multi-label text classification. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Bootstrapping named entity recognition in Ecommerce with positive unlabeled learning", "authors": [ { "first": "Hanchu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Leonhard", "middle": [], "last": "Hennig", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Alt", "suffix": "" }, { "first": "Changjian", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 3rd Workshop on e-Commerce and NLP", "volume": "", "issue": "", "pages": "1--6", "other_ids": { "DOI": [ "10.18653/v1/2020.ecnlp-1.1" ] }, "num": null, "urls": [], "raw_text": "Hanchu Zhang, Leonhard Hennig, Christoph Alt, Changjian Hu, Yao Meng, and Chao Wang. 2020. Bootstrapping named entity recognition in E- commerce with positive unlabeled learning. In Pro- ceedings of The 3rd Workshop on e-Commerce and NLP, pages 1-6, Seattle, WA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Fast multi-resolution transformer fine-tuning for extreme multi-label text classification", "authors": [ { "first": "Jiong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wei-Cheng", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Hsiang-Fu", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Inderjit", "middle": [], "last": "Dhillon", "suffix": "" } ], "year": 2021, "venue": "Advances in Neural Information Processing Systems", "volume": "34", "issue": "", "pages": "7267--7280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiong Zhang, Wei-Cheng Chang, Hsiang-Fu Yu, and Inderjit Dhillon. 2021. Fast multi-resolution trans- former fine-tuning for extreme multi-label text classi- fication. In Advances in Neural Information Process- ing Systems, volume 34, pages 7267-7280. Curran Associates, Inc.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "OpenTag: Open attribute value extraction from product profiles", "authors": [ { "first": "Guineng", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Subhabrata", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Xin", "middle": [ "Luna" ], "last": "Dong", "suffix": "" }, { "first": "Feifei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '18", "volume": "", "issue": "", "pages": "1049--1058", "other_ids": { "DOI": [ "10.1145/3219819.3219839" ] }, "num": null, "urls": [], "raw_text": "Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. OpenTag: Open attribute value extraction from product profiles. In Proceed- ings of the 24th ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining, KDD '18, pages 1049-1058, New York, NY, USA. Association for Computing Machinery.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "text": "Extreme multi-label classification model with label masking for attribute value extraction.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Example of product data. The top shows the original data and the bottom shows its translation.", "uris": null, "num": null }, "TABREF0": { "type_str": "table", "num": null, "text": "CLS Northwave Espresso Original\u2026 SEP Perfect sneakers for\u2026", "content": "
Relevant attributes and values for all categories
Cat.     | Attr.     | Val.
Sneakers | Color     | Red
         |           | Blue
         |           | …
         | Material  | Leather
         |           | Canvas
         |           | …
         | Shoe size | US 4
         |           | US 4.5
         | …         | …
Sandals  | Color     | Red
…        | …         | …
", "html": null }, "TABREF2": { "type_str": "table", "num": null, "text": "Example of attribute taxonomy.", "content": "", "html": null }, "TABREF3": { "type_str": "table", "num": null, "text": "\u9774\u30b5\u30a4\u30ba(cm), 25.0 \u27e9, \u27e8 \u9774\u30b5\u30a4\u30ba(cm), 26.0 \u27e9, \u27e8 \u9774\u30b5\u30a4\u30ba(cm), 27.0 \u27e9, \u27e8 \u30ab\u30e9\u30fc, \u30ec\u30c3\u30c9 \u27e9", "content": "
Original (Japanese):
Category: 靴 > メンズ靴 > スニーカー
Title: ノースウエーブ 【northwave】ESPRESSO ORIGINAL RED 男性用メンズ / 女性用レディース / スニーカー
Description: 製品説明 落ち着いたレッドが象徴的な、足元のアクセントとして最適な1足。軽量ラバーでソールも軽量化された人気カラーのモデル。

Translation (English):
Category: Shoes > Men's shoes > Sneakers
Title: Northwave [northwave] ESPRESSO ORIGINAL RED Men's / Women's / Sneakers
Description: Product description. These sneakers are the perfect accent for your feet and come in a soft red color. The sole is made of lightweight rubber to reduce weight. It is a popular color.
Attribute-value pairs: ⟨ Shoe size (cm), 25.0 ⟩, ⟨ Shoe size (cm), 26.0 ⟩, ⟨ Shoe size (cm), 27.0 ⟩, ⟨ Color, Red ⟩
", "html": null }, "TABREF5": { "type_str": "table", "num": null, "text": "Data statistics. These numbers are calculated from both training and test data.", "content": "", "html": null }, "TABREF7": { "type_str": "table", "num": null, "text": "XR-Transformer 92.01 73.80 81.90 45.43 19.93 27.71 90.30 65.68 53.61 BERT 88.77 74.64 81.09 26.57 15.42 19.51 87.59 63.97 52.36 BERT w/ multiple classifiers -leaf 87.79 74.90 80.83 47.24 30.89 37.36 87.79 55.63 44.60 BERT w/ multiple classifiers -top 89.04 79.88 84.21 52.12 34.99 41.87 91.10 65.95 53.75 BERT w/ label masking (ours) 88.90 80.46 84.47 52.82 35.85 42.71 91.57 66.31 54.08", "content": "
", "html": null }, "TABREF8": { "type_str": "table", "num": null, "text": "Performance of each model.", "content": "
Model                               | Micro P (%) | Micro R (%) | Micro F1 | Macro P (%) | Macro R (%) | Macro F1 | P@1 (%) | P@3 (%) | P@5 (%)
XR-Transformer                      | 92.01 | 73.80 | 81.90 | 45.43 | 19.93 | 27.71 | 90.30 | 65.68 | 53.61
BERT                                | 88.77 | 74.64 | 81.09 | 26.57 | 15.42 | 19.51 | 87.59 | 63.97 | 52.36
BERT w/ multiple classifiers - leaf | 87.79 | 74.90 | 80.83 | 47.24 | 30.89 | 37.36 | 87.79 | 55.63 | 44.60
BERT w/ multiple classifiers - top  | 89.04 | 79.88 | 84.21 | 52.12 | 34.99 | 41.87 | 91.10 | 65.95 | 53.75
BERT w/ label masking (ours)        | 88.90 | 80.46 | 84.47 | 52.82 | 35.85 | 42.71 | 91.57 | 66.31 | 54.08
", "html": null }, "TABREF9": { "type_str": "table", "num": null, "text": "Micro and macro F 1 scores of our model for each group of attribute-value pairs. Gains over BERT without label masking are enclosed in parentheses.", "content": "", "html": null } } } }