{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:33:36.575683Z" }, "title": "Attribute Value Generation from Product Title using Language Models", "authors": [ { "first": "Kalyani", "middle": [], "last": "Roy", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIT Kharagpur", "location": { "country": "India" } }, "email": "kroy@iitkgp.ac.in" }, { "first": "Pawan", "middle": [], "last": "Goyal", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIT Kharagpur", "location": { "country": "India" } }, "email": "pawang@cse.iitkgp.ac.in" }, { "first": "Manish", "middle": [], "last": "Pandey", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "manish.pandey@west.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Identifying the values of product attributes is essential for many e-commerce functions such as product search and product recommendations. Therefore, identifying attribute values from unstructured product descriptions is a critical undertaking for any e-commerce retailer. What makes this problem challenging is the diversity of product types and their attributes and values. Existing methods have typically employed multiple types of machine learning models, each of which handles specific product types or attribute classes. This has limited their scalability and generalization for large-scale, real-world e-commerce applications. Previous approaches have formulated attribute value extraction as a Named Entity Recognition (NER) task or a Question Answering (QA) task. In this paper, we present a generative approach to the attribute value extraction problem using language models. We leverage the large-scale pretraining of GPT-2 and the T5 text-to-text transformer to create fine-tuned models that can effectively perform this task. 
We show that a single general model is very effective for this task over a broad set of product attribute values under the open-world assumption. Our approach achieves state-of-the-art performance for different attribute classes, which previously required a diverse set of models.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Identifying the values of product attributes is essential for many e-commerce functions such as product search and product recommendations. Therefore, identifying attribute values from unstructured product descriptions is a critical undertaking for any e-commerce retailer. What makes this problem challenging is the diversity of product types and their attributes and values. Existing methods have typically employed multiple types of machine learning models, each of which handles specific product types or attribute classes. This has limited their scalability and generalization for large-scale, real-world e-commerce applications. Previous approaches have formulated attribute value extraction as a Named Entity Recognition (NER) task or a Question Answering (QA) task. In this paper, we present a generative approach to the attribute value extraction problem using language models. We leverage the large-scale pretraining of GPT-2 and the T5 text-to-text transformer to create fine-tuned models that can effectively perform this task. We show that a single general model is very effective for this task over a broad set of product attribute values under the open-world assumption. Our approach achieves state-of-the-art performance for different attribute classes, which previously required a diverse set of models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Product attributes and their values play an important role in e-commerce platforms. 
There are hundreds of thousands of products sold online, and each type of product has a different set of attributes. These attributes help customers search for products, compare relevant items, and purchase the product of their choice. While details of a product can be found in both its title and its description, the title commonly includes the product's important attributes. Every day, many new products are added to the product catalogue, often with new attribute Figure 1 : An example of a product with its title, attributes and values. There is no value for the attribute 'Fingerboard Material' and it is represented as NULL.", "cite_spans": [], "ref_spans": [ { "start": 555, "end": 563, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "types and values. However, attribute information is often sparse, noisy, and incomplete, with missing values. For example, Figure 1 shows a product with its description and attribute-value pairs available on the website. It contains attribute values for Brand Name, Type, etc., but there are missing attributes, such as \"Dual-coil\" for Pickup Type and \"6\" for Strings. Given the wide diversity of products and the new products constantly emerging, it is important that attribute value extraction works under the open-world assumption, i.e., with values for attributes not seen before.", "cite_spans": [], "ref_spans": [ { "start": 130, "end": 138, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Earlier work (Ghani et al., 2006; Chiticariu et al., 2010; Gopalakrishnan et al., 2012) on attribute value extraction uses a rule-based approach with the help of a domain-specific seed dictionary to identify key phrases. Other work has formulated this as a named entity recognition (NER) problem (Putthividhya and Hu, 2011; More, 2016). However, these approaches do not work under the open-world assumption. 
More recently, various neural network-based approaches have been proposed and applied as sequence tagging models for attribute value extraction. Huang et al. (2015) were the first to apply the BiLSTM-CRF model to sequence tagging. Zheng et al. (2018) propose an end-to-end tagging model using BiLSTM, CRF, and attention, without any dictionary or hand-crafted features. Most of these approaches create separate models for different attributes. Also, for each attribute a, they have one set of tags to denote the beginning (B_a) and inside (I_a) of that attribute. Hence, these methods are not scalable to a large set of attributes, and these models cannot identify emerging values for unseen attributes. Recent works (Xu et al., 2019; Wang et al., 2020) have set up this task as a question answering (QA) task. Question answering in machine reading comprehension (MRC) selects a span of text from the given context to answer the question. Xu et al. (2019) consider the product title as context and the attribute as query, and propose to find the attribute value using only a global set of BIO tags. Although the sequence tagging models (Zheng et al., 2018; Xu et al., 2019) achieve promising results, they do not work well for discovering new attribute values.", "cite_spans": [ { "start": 13, "end": 33, "text": "(Ghani et al., 2006;", "ref_id": "BIBREF2" }, { "start": 34, "end": 58, "text": "Chiticariu et al., 2010;", "ref_id": "BIBREF0" }, { "start": 59, "end": 87, "text": "Gopalakrishnan et al., 2012)", "ref_id": "BIBREF3" }, { "start": 555, "end": 574, "text": "Huang et al. (2015)", "ref_id": "BIBREF4" }, { "start": 640, "end": 659, "text": "Zheng et al. (2018)", "ref_id": "BIBREF13" }, { "start": 1122, "end": 1139, "text": "(Xu et al., 2019;", "ref_id": "BIBREF12" }, { "start": 1140, "end": 1158, "text": "Wang et al., 2020)", "ref_id": "BIBREF11" }, { "start": 1342, "end": 1358, "text": "Xu et al. 
(2019)", "ref_id": "BIBREF12" }, { "start": 1528, "end": 1548, "text": "(Zheng et al., 2018;", "ref_id": "BIBREF13" }, { "start": 1549, "end": 1565, "text": "Xu et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast to past extractive or classification-based approaches, we take a generative approach to identify attribute values. Text generation using language models has several applications in real-world tasks such as text editing, article writing, sentence completion, etc. Text infilling aims to fill in the missing part of a given sentence. Motivated by their success, as well as to leverage the large-scale pretraining of language models, we formulate attribute value extraction both as an instance of a text infilling task and as an answer generation task. We utilize Infilling by Language Modeling (ILM) (Donahue et al., 2020) for the infilling approach, and we fine-tune the Text-to-Text Transfer Transformer (T5) (Raffel et al., 2020) for the answer generation task. We summarize the main contributions of this work as follows:", "cite_spans": [ { "start": 617, "end": 639, "text": "(Donahue et al., 2020)", "ref_id": "BIBREF1" }, { "start": 723, "end": 743, "text": "(Raffel et al., 2020", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a language modeling approach for attribute value extraction. \u2022 We empirically demonstrate that this approach achieves state-of-the-art results on discovering new attribute values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we formally define the problem of attribute value generation. 
Given a product context T = (w^t_1, w^t_2, ..., w^t_m) and its attribute A = (w^a_1, w^a_2, ..., w^a_n), our goal is to generate the value", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "V = (w^v_1, w^v_2, ..., w^v_e).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "For example, the context of the product in Figure 1 is its title, \"ammoon Electric ...\", and we consider the attributes Type and Fingerboard Material. We want to generate the value \"Electric Guitar\" for the attribute Type and NULL for the attribute Fingerboard Material, as this attribute is not present in the context. In this work, we formulate this problem first as (i) a text infilling task and then as (ii) an answer generation task. For text infilling, we combine the context T, the attribute A, and the value V in a sentence \"T. A is V.\", where the attribute value V is masked as a blank. Our objective is to generate this missing span to predict the value. Let the incomplete sentence be S = (w^s_1, w^s_2, ..., w^s_p). Our model outputs the best attribute value sequence \u1e7c by learning the distribution \u1e7c = P(V|S). In the answer generation approach, our aim is to generate V as the answer, considering T as the context and A as the question.", "cite_spans": [], "ref_spans": [ { "start": 43, "end": 51, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "We have used the publicly available dataset 1 collected from the Sports & Entertainment category of AliExpress (Xu et al., 2019). This dataset contains 110,484 examples. Each example is a triple: the context (product title), an attribute, and its value. We preprocessed the dataset to handle noisy data, and removed triples with empty values and triples with '-' or '/' as the value. 
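The cleaning step just described can be sketched as below (a minimal illustration; the (title, attribute, value) tuple format and the helper name are our assumptions, not the authors' code):

```python
def clean_triples(triples):
    """Drop triples whose value is empty, '-', or '/', as in the preprocessing above."""
    noisy_values = {"", "-", "/"}
    return [
        (title, attribute, value)
        for (title, attribute, value) in triples
        if value.strip() not in noisy_values
    ]

# Hypothetical raw triples for illustration only.
raw = [
    ("ammoon Electric Guitar", "Type", "Electric Guitar"),
    ("ammoon Electric Guitar", "Fingerboard Material", "-"),
    ("Golf Gloves Microfiber Cloth", "Material", ""),
]
print(clean_triples(raw))  # only the first triple survives
```

Note that triples whose value is NULL (attribute absent from the context) are kept, since the models are expected to predict NULL for them.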
This led to a dataset comprising 109,957 triples, which we refer to as AV-109K. There are 2,157 unique attributes and 11,847 unique values in this dataset. Also, not all the attributes have a value in the context; these are represented as NULL. There are 21,461 such triples in AV-109K. We randomly split the data in a 7:1:2 ratio, i.e., we randomly select 76,970 triples as the training set, 10,996 triples as the validation set, and the remaining 21,991 triples as the test set. To further examine the model's ability to generate values for unseen attributes, we select five attributes with relatively low frequency (< 0.1%) in the dataset: Frame Color, Lenses Color, Shell Material, Wheel Material, and Product Type; the number of triples for these attributes is 108, 62, 36, 23, and 523, respectively. All the triples with these attributes are included in the test set. From the remainder of the dataset, we pick 10% as the validation set and the rest as the training set. We refer to this dataset as AV-zero.", "cite_spans": [ { "start": 112, "end": 129, "text": "(Xu et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3.1" }, { "text": "To evaluate the models, we use the Exact Match (EM) metric on the generated values, where the whole sequence of the value must match. Since values can contain more than one token and models may generate tokens in any order, we have also computed the average bag-of-words precision, recall, and F1 score as evaluation measures, denoted as P, R, and F1, respectively. Let N be the size of the dataset, V = {v_1, v_2, ..., v_N} be the gold standard values, G = {g_1, g_2, ..., g_N} be the generated values, and |v_i \u2229 g_i| denote the bag-of-words overlap between the gold standard and the generated values corresponding to the i-th triple. 
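In code, these bag-of-words averages can be sketched as follows (an illustrative implementation, not the authors' evaluation script; treating the overlap as a multiset intersection via collections.Counter is our assumption):

```python
from collections import Counter

def bow_overlap(gold_tokens, pred_tokens):
    """Bag-of-words overlap |v_i ∩ g_i| between two token lists (multiset intersection)."""
    return sum((Counter(gold_tokens) & Counter(pred_tokens)).values())

def bow_precision_recall(gold_values, generated_values):
    """Average bag-of-words precision P and recall R over N (gold, generated) value pairs."""
    n = len(gold_values)
    precision = sum(
        bow_overlap(v.split(), g.split()) / max(len(g.split()), 1)
        for v, g in zip(gold_values, generated_values)
    ) / n
    recall = sum(
        bow_overlap(v.split(), g.split()) / max(len(v.split()), 1)
        for v, g in zip(gold_values, generated_values)
    ) / n
    return precision, recall

p, r = bow_precision_recall(["Electric Guitar"], ["Guitar"])
print(p, r)  # every generated token is correct (P = 1.0), but one gold token is missed (R = 0.5)
```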
The computation of P and R is shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.2" }, { "text": "P = (1/N) \u2211_{i=1}^{N} |v_i \u2229 g_i| / |g_i|,  R = (1/N) \u2211_{i=1}^{N} |v_i \u2229 g_i| / |v_i|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.2" }, { "text": "We compare our models with BiLSTM-CRF (Huang et al., 2015) and SUOTag (Scaling Up Open Tag) (Xu et al., 2019) 2 .", "cite_spans": [ { "start": 38, "end": 58, "text": "(Huang et al., 2015)", "ref_id": "BIBREF4" }, { "start": 92, "end": 109, "text": "(Xu et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.3" }, { "text": "\u2022 BiLSTM-CRF (Huang et al., 2015) is considered the state-of-the-art sequence tagging model for NER tasks. It uses word embeddings from a pretrained BERT model and applies a BiLSTM layer over them to obtain the contextual representation. Finally, a Conditional Random Fields (CRF) (Lafferty et al., 2001) layer is applied over the BiLSTM outputs.", "cite_spans": [ { "start": 13, "end": 33, "text": "(Huang et al., 2015)", "ref_id": "BIBREF4" }, { "start": 278, "end": 301, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.3" }, { "text": "\u2022 SUOTag (Xu et al., 2019) uses two separate BiLSTMs over the BERT-based pretrained word embeddings to represent the context and the attribute. Then, it applies cross attention between these two representations, followed by a CRF layer.", "cite_spans": [ { "start": 9, "end": 26, "text": "(Xu et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.3" }, { "text": "All the models are implemented with PyTorch (Paszke et al., 2019). We train each model for 5 epochs. The model that performs the best on the validation set is used for evaluating the test set. 
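For concreteness, the two model inputs described in Section 2 can be constructed as below (a sketch only; the exact serialization, i.e. the "question:"/"context:" prefixes for T5 and the blank placeholder token for ILM, are our assumptions, not the paper's exact format):

```python
BLANK = "[BLANK]"  # hypothetical mask placeholder; the actual ILM special token may differ

def infilling_input(title, attribute):
    """'T. A is V.' with the value V masked, for the ILM (text infilling) formulation."""
    return f"{title}. {attribute} is {BLANK}."

def qa_input(title, attribute):
    """Attribute as question, title as context, for the T5 answer generation formulation."""
    return f"question: {attribute} context: {title}"

print(infilling_input("ammoon Electric Guitar", "Type"))
print(qa_input("ammoon Electric Guitar", "Type"))
```

In both formulations the target output is simply the value string, or the literal token NULL when the attribute has no value in the context.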
The minibatch size is fixed to 32. We use the AdamW optimizer with a learning rate of 5e-5. We use the pretrained GPT-2 small (Radford et al., 2019) model to train ILM, and we use the validation set perplexity of the model on the masked tokens for model selection. We fine-tune T5-Base for the answer generation framework.", "cite_spans": [ { "start": 45, "end": 66, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF7" }, { "start": 312, "end": 334, "text": "(Radford et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "3.4" }, { "text": "We conduct experiments in different settings to (1) explore the scalability to a large attribute set, (2) compare the performance on four frequent attributes, and (3) examine the model's ability to discover new attributes. Table 2 reports the performance on the AV-109K dataset. Since BiLSTM-CRF requires tagging each attribute a with separate B_a and I_a tags, it is not suitable for a large attribute set, so we did not consider this model. The overall results show that both ILM and T5 have the capability to handle a large number of attributes. Next, we examine the models in various interesting cases, such as (a) when the values are NULL, (b) when the attributes appear in the context vs. 
when the attributes do not appear in the context, (c) when the values contain multiple words, and (d) when the value contains numerical", "cite_spans": [], "ref_spans": [ { "start": 221, "end": 228, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "3.5" }, { "text": "https://raw.githubusercontent.com/lanmanok/ACL19_Scaling_Up_Open_Tagging/master/publish_data.txt", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "AVEQA (Attribute Value Extraction via Question Answering) (Wang et al., 2020) is also a recent work that could potentially be a baseline, but we could not obtain its numbers as the code was not publicly available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We would like to note that in Table 5, for some of the attributes, all the evaluation metrics are identical. This occurs because, for those attributes, the predicted value is a single token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "data. The details are summarized in Table 3 . ILM performs better than the other models in identifying triples having NULL values. Specifically, the language models give a much better precision in this case. There are 19.26% NULL values in AV-109K, but SUOTag predicts 43.83% of the data as NULL; hence its high recall. There are very few triples where the attribute appears in the context - only 1.50% in the train set and 1.59% in the test set. So, when the attribute appears in the context, the performance of all the models is poor in comparison with when the attribute does not appear in the context. In the AV-109K dataset, there are 4,058 triples whose values consist of multiple words. T5 performs the best in finding the values having more than one word. 
There are 8.5% numerical data in the test set, and T5 gives much better results than the other models in identifying them. The second experiment is conducted on the four most frequent attributes of the AV-109K dataset. Table 4 shows the results. T5 performs better than the other models on Brand Name and Color. For Material and Category, ILM has the best performance. We have looked into the predicted values in these two categories and found that T5 is not correctly identifying the NULL values. On a closer look at the dataset, we find that most of those NULL values are incorrectly annotated, e.g., \"new 1pcs Golf Sports Mens Right Left Hand Golf Gloves Sweat Absorbent Microfiber Cloth Soft Breathable Abrasion Gloves\" - the material of this product is microfiber, but it is annotated as NULL. T5 has predicted \"Microfiber\", but the annotation is NULL. Although T5 has identified the correct value of the attribute, it is marked as incorrect due to the faulty annotation. The last experiment is performed on the AV-zero dataset. Table 5 shows the results of discovering the values of five new attributes. ILM is the best at identifying \"Product Type\". The value of most \"Product Type\" triples is Fishing Float, but T5 either predicted the product type to be NULL or the type of the float, e.g., Luminous Fishing Float, Ice Fishing Float, etc. For the remaining three attributes, T5 outperforms the other models. 3 Both T5 and ILM perform better than SUOTag in discovering unseen attribute values.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 84, "text": "Table 3", "ref_id": null }, { "start": 1011, "end": 1018, "text": "Table 4", "ref_id": null }, { "start": 1842, "end": 1849, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "In this work, we present a formulation to generate product attribute values as (i) a text infilling task and (ii) an answer generation task. 
We show that we can leverage GPT-2-based and T5 text-to-text transformer models for this task. The models achieve strong results over a broad set of attributes. T5 performs better at multi-word values, and ILM is better at predicting NULL values. Additionally, our approach outperforms the state-of-the-art models at discovering new attribute values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Domain adaptation of rule-based annotators for named-entity recognition tasks", "authors": [ { "first": "Laura", "middle": [], "last": "Chiticariu", "suffix": "" }, { "first": "Rajasekar", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "Yunyao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Frederick", "middle": [], "last": "Reiss", "suffix": "" }, { "first": "Shivakumar", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1002--1012", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Chiticariu, Rajasekar Krishnamurthy, Yunyao Li, Frederick Reiss, and Shivakumar Vaithyanathan. 2010. Domain adaptation of rule-based annotators for named-entity recognition tasks. In Proceed- ings of the 2010 Conference on Empirical Meth- ods in Natural Language Processing, pages 1002- 1012, Cambridge, MA. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Enabling language models to fill in the blanks", "authors": [ { "first": "Chris", "middle": [], "last": "Donahue", "suffix": "" }, { "first": "Mina", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2492--2501", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.225" ] }, "num": null, "urls": [], "raw_text": "Chris Donahue, Mina Lee, and Percy Liang. 2020. En- abling language models to fill in the blanks. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2492- 2501, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Text mining for product attribute extraction", "authors": [ { "first": "Rayid", "middle": [], "last": "Ghani", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Probst", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Marko", "middle": [], "last": "Krema", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Fano", "suffix": "" } ], "year": 2006, "venue": "SIGKDD Explor. Newsl", "volume": "8", "issue": "1", "pages": "41--48", "other_ids": { "DOI": [ "10.1145/1147234.1147241" ] }, "num": null, "urls": [], "raw_text": "Rayid Ghani, Katharina Probst, Yan Liu, Marko Krema, and Andrew Fano. 2006. Text mining for prod- uct attribute extraction. SIGKDD Explor. 
Newsl., 8(1):41-48.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Matching product titles using web-based enrichment", "authors": [ { "first": "", "middle": [], "last": "Vishrawas Gopalakrishnan", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Suresh Parthasarathy Iyengar", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Madaan", "suffix": "" }, { "first": "Srinivasan", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "", "middle": [], "last": "Sengamedu", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12", "volume": "", "issue": "", "pages": "605--614", "other_ids": { "DOI": [ "10.1145/2396761.2396839" ] }, "num": null, "urls": [], "raw_text": "Vishrawas Gopalakrishnan, Suresh Parthasarathy Iyen- gar, Amit Madaan, Rajeev Rastogi, and Srinivasan Sengamedu. 2012. Matching product titles using web-based enrichment. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12, page 605-614, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bidirectional lstm-crf models for sequence tagging", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01991" ] }, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. 
arXiv preprint arXiv:1508.01991.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning, ICML '01, page 282-289, San Francisco, CA, USA. Mor- gan Kaufmann Publishers Inc.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Attribute extraction from product titles in ecommerce", "authors": [ { "first": "Ajinkya", "middle": [], "last": "More", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1608.04670" ] }, "num": null, "urls": [], "raw_text": "Ajinkya More. 2016. Attribute extraction from product titles in ecommerce. 
arXiv preprint arXiv:1608.04670.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "8026--8037", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Ad- vances in neural information processing systems, pages 8026-8037.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bootstrapped named entity recognition for product attribute extraction", "authors": [ { "first": "Duangmanee", "middle": [], "last": "Putthividhya", "suffix": "" }, { "first": "Junling", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1557--1567", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duangmanee Putthividhya and Junling Hu. 2011. Boot- strapped named entity recognition for product at- tribute extraction. 
In Proceedings of the 2011 Con- ference on Empirical Methods in Natural Language Processing, pages 1557-1567, Edinburgh, Scotland, UK. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine 
Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the lim- its of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning to extract attribute value from product via question answering: A multi-task approach", "authors": [ { "first": "Qifan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Bhargav", "middle": [], "last": "Kanagal", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Sanghai", "suffix": "" }, { "first": "D", "middle": [], "last": "Sivakumar", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Zac", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Elsas", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20", "volume": "", "issue": "", "pages": "47--55", "other_ids": { "DOI": [ "10.1145/3394486.3403047" ] }, "num": null, "urls": [], "raw_text": "Qifan Wang, Li Yang, Bhargav Kanagal, Sumit Sang- hai, D. Sivakumar, Bin Shu, Zac Yu, and Jon Elsas. 2020. Learning to extract attribute value from prod- uct via question answering: A multi-task approach. In Proceedings of the 26th ACM SIGKDD Interna- tional Conference on Knowledge Discovery & Data Mining, KDD '20, page 47-55, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title", "authors": [ { "first": "Huimin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenting", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Xinyu", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Man", "middle": [], "last": "Lan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5214--5223", "other_ids": { "DOI": [ "10.18653/v1/P19-1514" ] }, "num": null, "urls": [], "raw_text": "Huimin Xu, Wenting Wang, Xin Mao, Xinyu Jiang, and Man Lan. 2019. Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5214-5223, Flo- rence, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Opentag: Open attribute value extraction from product profiles", "authors": [ { "first": "Guineng", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Subhabrata", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Xin", "middle": [ "Luna" ], "last": "Dong", "suffix": "" }, { "first": "Feifei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '18", "volume": "", "issue": "", "pages": "1049--1058", "other_ids": { "DOI": [ "10.1145/3219819.3219839" ] }, "num": null, "urls": [], "raw_text": "Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. 
Opentag: Open attribute value extraction from product profiles. In Proceed- ings of the 24th ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining, KDD '18, page 1049-1058, New York, NY, USA. Association for Computing Machinery.", "links": null } }, "ref_entries": { "TABREF3": { "content": "", "num": null, "text": "Performance comparison on the AV-109K dataset", "type_str": "table", "html": null }, "TABREF5": { "content": "
", "num": null, "text": "Performance of models on AV-109K dataset in different scenarios.", "type_str": "table", "html": null } } } }