{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:33:53.931386Z" }, "title": "OpenBrand: Open Brand Value Extraction from Product Descriptions", "authors": [ { "first": "Kassem", "middle": [], "last": "Sabeh", "suffix": "", "affiliation": {}, "email": "ksabeh@unibz.it" }, { "first": "Mouna", "middle": [], "last": "Kacimi", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Johann", "middle": [], "last": "Gamper", "suffix": "", "affiliation": {}, "email": "jgamper@unibz.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Extracting attribute-value information from unstructured product descriptions continue to be of a vital importance in e-commerce applications. One of the most important product attributes is the brand which highly influences customers' purchasing behaviour. Thus, it is crucial to accurately extract brand information dealing with the main challenge of discovering new brand names. Under the open world assumption, several approaches have adopted deep learning models to extract attribute-values using sequence tagging paradigm. However, they did not employ finer grained data representations such as character level embeddings which improve generalizability. In this paper, we introduce OpenBrand, a novel approach for discovering brand names. OpenBrand is a BiLSTM-CRF-Attention model with embeddings at different granularities. Such embeddings are learned using CNN and LSTM architectures to provide more accurate representations. We further propose a new dataset for brand value extraction, with a very challenging task on zero-shot extraction. We have tested our approach, through extensive experiments, and shown that it outperforms state-of-the-art models in brand name discovery.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Extracting attribute-value information from unstructured product descriptions continue to be of a vital importance in e-commerce applications. One of the most important product attributes is the brand which highly influences customers' purchasing behaviour. Thus, it is crucial to accurately extract brand information dealing with the main challenge of discovering new brand names. Under the open world assumption, several approaches have adopted deep learning models to extract attribute-values using sequence tagging paradigm. However, they did not employ finer grained data representations such as character level embeddings which improve generalizability. In this paper, we introduce OpenBrand, a novel approach for discovering brand names. OpenBrand is a BiLSTM-CRF-Attention model with embeddings at different granularities. Such embeddings are learned using CNN and LSTM architectures to provide more accurate representations. We further propose a new dataset for brand value extraction, with a very challenging task on zero-shot extraction. We have tested our approach, through extensive experiments, and shown that it outperforms state-of-the-art models in brand name discovery.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Brand name plays a very important role in influencing customers' behaviour (Chovanov\u00e1 et al., 2015; Shahzad et al., 2014) . Typically, as customers are aware of the brand, they can deduce knowledge about other product attributes. Let us take the example of the toy shown in Figure 1 . The brand of this product is \"Gentle Monster\". 
By knowing the brand, customers would have some kind of associations, like this toy would be of \"a soft and smooth wood\", have \"bright colors\", and contain \"small pieces which is suitable for older kids\". So, when shopping for toys, they would pick a particular brand based on the attributes they find important. Such correlations between brands and product attributes make it crucial for e-commerce applications to accurately extract brand names from product descriptions.", "cite_spans": [ { "start": 75, "end": 99, "text": "(Chovanov\u00e1 et al., 2015;", "ref_id": "BIBREF5" }, { "start": 100, "end": 121, "text": "Shahzad et al., 2014)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 274, "end": 282, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Retrieving brand names is addressed in the literature within the general problem of attribute-value extraction from product descriptions (Kovelamudi et al., 2011; Vandic et al., 2012; Ghani et al., 2006; Kozareva et al., 2016; Zheng et al., 2018; Xu et al., 2019) . Early approaches rely on rule-based techniques which use domain-specific knowledge to identify attributes and values (Kovelamudi et al., 2011; Vandic et al., 2012; Ghani et al., 2006) . Such approaches adopt a closed world assumption requiring the possible set of values to be known beforehand by mean of dictionaries or hand-crafted rules. Consequently, they are not suitable for discovering unseen values such as newly emerging brands. To tackle this problem, most recent approaches model the extraction task as sequence tagging (Kozareva et al., 2016; Zheng et al., 2018; Xu et al., 2019) and solve it using deep learning models such as BiLSTM enhanced by Conditional Random Field (CRF) and Attention layers. These new approaches achieve promising results, however, they limit the representation of their data to word embeddings which can capture context but penalizes generalizability to new brands.", "cite_spans": [ { "start": 137, "end": 162, "text": "(Kovelamudi et al., 2011;", "ref_id": "BIBREF11" }, { "start": 163, "end": 183, "text": "Vandic et al., 2012;", "ref_id": "BIBREF24" }, { "start": 184, "end": 203, "text": "Ghani et al., 2006;", "ref_id": "BIBREF6" }, { "start": 204, "end": 226, "text": "Kozareva et al., 2016;", "ref_id": "BIBREF12" }, { "start": 227, "end": 246, "text": "Zheng et al., 2018;", "ref_id": "BIBREF28" }, { "start": 247, "end": 263, "text": "Xu et al., 2019)", "ref_id": "BIBREF27" }, { "start": 383, "end": 408, "text": "(Kovelamudi et al., 2011;", "ref_id": "BIBREF11" }, { "start": 409, "end": 429, "text": "Vandic et al., 2012;", "ref_id": "BIBREF24" }, { "start": 430, "end": 449, "text": "Ghani et al., 2006)", "ref_id": "BIBREF6" }, { "start": 797, "end": 820, "text": "(Kozareva et al., 2016;", "ref_id": "BIBREF12" }, { "start": 821, "end": 840, "text": "Zheng et al., 2018;", "ref_id": "BIBREF28" }, { "start": 841, "end": 857, "text": "Xu et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose to use character level embeddings in sequence tagging models for discovering brand names. In addition to word embeddings, character level embeddings were employed in Named Entity Recognition (NER) tasks (Lample et al., 2016) to handle out-of-vocabulary words. The problem of unseen words is particularly emphasized in brands because of sub-branding, brand fragmentation, or simply emerging businesses. 
Unseen brand names can be completely new, like in brand fragmentation where new brands share the same parent brand maintaining minimal links between the new and the existing identities. For example, \"Audi\" and \"Porsche\" do not have any similarity although they have the same parent brand \"Volkswagen\". By contrast, sub-branding would maintain stronger links between existing brands and the new generated ones, which can be reflected by similarities in brand names. Examples include \"Uber\" and \"UberPool\", \"McDonalds\" and \"Mc-Cafe\", or \"Samsung\" and \"Samsung Evo\". Thus, the use of character level embedding is crucial for capturing variations in brand names and the occurrence of unseen brands.", "cite_spans": [ { "start": 229, "end": 250, "text": "(Lample et al., 2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We summarize the main contributions of this work as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We propose OpenBrand, a BiLSTM-CRF-Attention model that combines word embeddings with character level word embeddings. In contrast to previous approaches, we learn character level embeddings based on CNN and LSTM architectures to obtain specific representations of our data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. We provide a large real world dataset 1 focusing on brand names to have a thorough analysis of the impact of character level embeddings. We experimentally show that our dataset is challenging on brand name extraction, especially those zero-shot brand values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. We empirically demonstrate significant improvements in F1 score over several stateof-the-art baselines on brand name extraction. Additionally, we show that OpenBrand guarantees a better generalizability over new brands and deals more effectively with compound brand names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we formally define the problem of open brand value extraction. Given a product title, represented as an unstructured text data, and a Table 1: Example of an input/output {B,I,O} tag sequence for the brand of a product description. target attribute (eg. brand), our goal is to extract the appropriate values for the corresponding attribute from the product title. In this context, we want to discover new values that have not been encountered before. We formalize the attribute-value extraction as per the following definition:", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 247, "text": "Table 1: Example of an input/output {B,I,O} tag sequence for the brand of a product description.", "ref_id": null } ], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "Definition Given a product title X. The title X is represented as a sequence of tokens X t = {x 1 , ..., x T }, where T is the sequence length. Consider a target attribute A. 
Attribute-value extraction automatically identifies a sub-sequence of tokens from X t as applicable attribute-value pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "A v = {x i , x i+1 , ..., x k }, for 1 \u2264 i \u2264 k \u2264 T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "For example, consider the title for the product given in the example of The tokenization of X yields: X t = {x 1 , x 2 , ..., x 25 } = {\"Wooden\", \"Stacking\", \"Board\", .., \"Dice\"}, where T = 25. For the target attribute: A = {\"Brand\"}. We want to extract: Brand = {x 12 , x 13 } = {\"Gentle\", \"Monster\"}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "In order to identify these sub-sequences, the sequence of tokens X t need to be tagged to capture sequential and positional information. For this purpose, we adopt the sequence tagging model and associate a tag from a given tag-set to the sequence of input tokens X t . We experimented with different tagging strategies and, inline with previous work in the literature (Xu et al., 2019) , we found that the {B,I,O} tagging scheme produced the best results, where \"B\", \"I\", and \"O\" represent the beginning, inside, and outside of an attribute, respectively. (A sequence of \"O\" tags corresponds to the absence of an attribute). Table 1 shows an input/output example of the {B,I,O} tagging strategy. our OpenBrand model architecture, which is composed of three main layers: an embedding layer that encodes the input sequence, a contextual layer that captures complex relationships among the input sequence, and an output layer that produces the output labels.", "cite_spans": [ { "start": 369, "end": 386, "text": "(Xu et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 626, "end": 633, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "In the embedding layer, we map every word in the product description into a d-dimensional embedding vector. The embeddings of the words are obtained by concatenating the word embeddings and character level embeddings. Word embeddings are obtained from the pre-trained GloVe (Pennington et al., 2014) word representations, which are trained over large unlabeled corpus. Pre-trained word embeddings, such as GloVe and Word2Vec (Mikolov et al., 2013) , offer a single representation for each word, which is not useful in the case where words have different meanings depending on the context. To allow our model to learn different representations of embeddings depending on the context, we learn and generate different representations of tokens in the input sequence. For this reason, the weights of our embedding layer are considered to be learnable parameters and not fixed. An important distinction of our approach, compared to previous work on attribute-value extraction, is that we learn character level features in our model. For character level embeddings, we use two different architectures: CNN-based and LSTMbased character level representations. Learning character level embeddings has the advantage of learning task-specific representations. Convolutional Neural Networks (CNN) are designed to discover position-invariant features and they are highly effective in extracting morphological infor-mation (ex. prefix or suffix of words) (Chiu and Nichols, 2016). 
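As a concrete illustration, the following minimal Keras sketch shows how such a CNN-based character-level word representation can be assembled. The character vocabulary size, word length, embedding dimension, filter count, and kernel width below are illustrative placeholders rather than the exact hyper-parameters used in our experiments (those are reported in the appendix tables).

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative sizes only; the values actually used are given in the appendix tables.
NUM_CHARS = 100       # size of the character vocabulary (hypothetical)
MAX_WORD_LEN = 20     # characters per word after padding/truncation (hypothetical)
CHAR_EMB_DIM = 30     # character embedding dimension (hypothetical)
NUM_FILTERS = 30      # number of CNN filters = dimension of the char-level word vector
KERNEL_SIZE = 3       # character n-gram width captured by each filter

# Input: one word, given as a padded sequence of character ids.
char_ids = layers.Input(shape=(MAX_WORD_LEN,), dtype="int32", name="char_ids")

# Embed each character, convolve over the character sequence, and max-pool over
# positions to obtain a fixed-size, position-invariant word representation c_t.
x = layers.Embedding(NUM_CHARS, CHAR_EMB_DIM)(char_ids)
x = layers.Conv1D(NUM_FILTERS, KERNEL_SIZE, padding="same", activation="tanh")(x)
c_t = layers.GlobalMaxPooling1D(name="char_cnn_vector")(x)

char_cnn_encoder = tf.keras.Model(char_ids, c_t, name="char_cnn_encoder")
```

In the full model, this encoder would be wrapped in a TimeDistributed layer so that it yields one character-level vector per token, which is then concatenated with that token's GloVe word vector.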
On the other hand, LSTMs are capable of encoding long sequences, and are thus capable of extracting position dependent character features. These features are crucial to model the relationships between words and their characters. Given a token of our input sequence x t , the embedding layer maps x t in to the vector:", "cite_spans": [ { "start": 425, "end": 447, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Embedding Layer", "sec_num": "3.1" }, { "text": "e t = [w t ; c t ],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Layer", "sec_num": "3.1" }, { "text": "where w t and c t are the word and character level representations of x t , respectively. The embedding representation of the whole input sequence X t would be {e 1 , e 2 , ..., e T }. Figure 3 illustrates the two architectures used to encode the character representations. These character representations are then concatenated with the word embeddings and fed as input to our contextual layer.", "cite_spans": [], "ref_spans": [ { "start": 185, "end": 193, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Embedding Layer", "sec_num": "3.1" }, { "text": "The contextual layer captures contextualized representations for every word in the input sequence. In our model, the input sequence to the contextual layer is the concatenation of the character level representations and word embeddings, both mapped by the underlying embedding layer. In this stage, we employ a BiLSTM contextual layer followed by a self-attention layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "Long Short Term Memory Networks (Hochreiter and Schmidhuber, 1997) address the vanishing gradient problems of Recurrent Neural Networks and are thus capable of modeling long-term dependencies between tokens in a sequence. Bidirectional LSTM (BiLSTM) can capture both past and future time steps jointly by using two LSTM layers to produce both forward and backwards states, respectively. Given the input e t (embedding of a token x t ), the hidden vector representations from the backward and forward LSTMs ( \u2212 \u2192 h t and \u2190 \u2212 h t ) is:", "cite_spans": [ { "start": 32, "end": 66, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "h t = \u2206([ \u2212 \u2192 h t ; \u2190 \u2212 h t ])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "where \u2206 denotes a non-linear transformation. The hidden representation of the whole input sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "X t is H t = {h 1 , h 2 , ...., h T }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "In reality, not all hidden states generated by the BiLSTM layer are equally important for the labeling decisions. A mechanism that allows the output layer to be aware of the important features of the sequence can improve the prediction model. This is exactly what attention does. Attention mechanisms have achieved great success in Natural Language Processing (NLP) and were first introduced in the Neural Machine Translation task (Bahdanau et al., 2015) . 
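Before turning to the attention computation, the sketch below illustrates how the pieces introduced so far could be wired together in Keras: the GloVe word embeddings w_t, the character-level vectors c_t (reusing the char_cnn_encoder sketched earlier), their concatenation e_t = [w_t; c_t], and the BiLSTM that produces the hidden states h_t. Apart from the 100-dimensional GloVe vectors mentioned in the appendix, all dimensions are illustrative assumptions, not our exact configuration.

```python
from tensorflow.keras import layers

MAX_TITLE_LEN = 40    # tokens per product title (hypothetical)
MAX_WORD_LEN = 20     # characters per token (hypothetical)
VOCAB_SIZE = 30000    # word vocabulary size (hypothetical)
WORD_EMB_DIM = 100    # 100-dimensional GloVe vectors, as in the appendix
LSTM_UNITS = 100      # hidden size of each directional LSTM (hypothetical)

word_ids = layers.Input(shape=(MAX_TITLE_LEN,), dtype="int32", name="word_ids")
char_ids = layers.Input(shape=(MAX_TITLE_LEN, MAX_WORD_LEN), dtype="int32", name="char_ids")

# w_t: word embeddings initialised from GloVe; the weights stay trainable (Section 3.1).
w = layers.Embedding(VOCAB_SIZE, WORD_EMB_DIM, name="glove_embeddings")(word_ids)

# c_t: one character-level vector per token, produced by the encoder sketched above.
c = layers.TimeDistributed(char_cnn_encoder, name="char_level")(char_ids)

# e_t = [w_t ; c_t]: concatenation fed to the contextual layer.
e = layers.Concatenate(name="embedding_concat")([w, c])

# Bidirectional LSTM producing the hidden states h_t for every token in the title.
h = layers.Bidirectional(layers.LSTM(LSTM_UNITS, return_sequences=True), name="bilstm")(e)
```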
In the contextual layer, we use a self-attention mechanism to highlight important concepts in the sequence rather than focusing on everything. The model learns to attend to the important parts of the input states based on the output produced so far. We first compute the similarity between all hidden states representations to obtain an attention matrix A \u2208 R T \u00d7T where", "cite_spans": [ { "start": 431, "end": 454, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "\u03b1 t,t \u2032 = \u03c3(w \u03b1 g t,t \u2032 + b \u03b1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "is the element of matrix A representing the mutual interaction between hidden states h t and h t \u2032 . \u03c3 is the element-wise sigmoid function, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "g t,t \u2032 = tanh(W 1 h t + W 2 h t \u2032 + b g )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "where W 1 , W 2 , w \u03b1 are trainable attention matrices, and b g , b \u03b1 are trainable biases. The contextualized hidden states can be computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "h t = T t \u2032 =1 \u03b1 t,t \u2032 \u2022 h t \u2032", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "The contextualized hidden state of the whole input sequence X t is H", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "t = { h 1 , h 2 , ... h T }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Layer", "sec_num": "3.2" }, { "text": "In sequence labeling tasks, it is important to consider the dependencies between output tags in a neighborhood. Conditional Random Fields (CRF) allow us to capture the correlation between labels and model their sequence jointly. For example, if we already know the tag of a token is I, then this increases the probability of the next token to be I or O, rather than being B. We feed the contextualized hidden states H t = { h 1 , h 2 , ... h T } to our output CRF layer to get the sequence of labels with highest probabilities. The joint probability distribution of a tag y given the hidden state h t and previous tag y t\u22121 is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "3.3" }, { "text": "P r(y|x; \u03c8) \u221d T t=1 exp K k=1 \u03c8 k f k (y t\u22121 , y t , h t )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "3.3" }, { "text": "where \u03c8 k is the corresponding learnable weight, f k is the feature function, and K is the number of features. 
The final output label is the label with the highest conditional probability, given as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "3.3" }, { "text": "y * = argmax y P r(y i |x i ; \u03c8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "3.3" }, { "text": "where y * \u2208 {B, I, O} is the output tag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "3.3" }, { "text": "In Section 5.2, we will study in detail the effect of the attention and CRF layers on the discovery of brands in comparison with the embeddings layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "3.3" }, { "text": "This section presents the experimental settings of our empirical approach for comparing state-of-theart models on the task of brand value extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "To evaluate the effectiveness of OpenBrand, we have collected a dataset that contains information about products from Amazon. Our dataset is derived from a public product collection -the Amazon Review Dataset (Ni et al., 2019) 2 . The categories of the collected dataset contained a large amount of overlapping brands, which might bias the results of the experiments. Thus, we have selected a subset to have a diverse set of brands with minimal overlapping across categories. We also processed the dataset to handle noise, and removed samples with empty values. This led to a dataset comprising over 250k product titles with more than 50k unique values, which we refer to as AZ-base dataset in our experiments. The AZ-base dataset contains information about products in five main categories:", "cite_spans": [ { "start": 209, "end": 226, "text": "(Ni et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Grocery & Gourmet Food, Toys & Games, Sports & Outdoors, Electronics and Automotive. We randomly sample 70% of the data for training, 10% for validation, and 20% for testing. Table 2 shows the statistical details of the AZ-base dataset.", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 182, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "To further examine the generalization ability of our model, we divide the AZ-base dataset into another training and test split with no overlapping brand values. In other words, none of the values in the test set are encountered during training. We refer to this data split as AZ-zero-shot, as it is designed for evaluating zero-shot extraction. The test set of AZ-zero-shot contains more than 8k new and unique brand values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "In addition, we have also chosen another subset of products from our collected data with another set of categories. The purpose of this dataset is to test the models capabilities in detecting brand values across different category domains. The dataset contains information about products in three new categories as shown in Table 3 . 
We refer to this dataset as AZ-new-cat, as it is designed to evaluate the model on a new set of product categories.", "cite_spans": [], "ref_spans": [ { "start": 324, "end": 331, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "We implemented and compared three state-of-theart baseline models on attribute-value extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Under Comparison", "sec_num": "4.2" }, { "text": "BiLSTM (Hochreiter and Schmidhuber, 1997) which uses word embeddings from pretrained GloVe (Pennington et al., 2014) for word level representation, then applies BiLSTM to produce the contextual embeddings.", "cite_spans": [ { "start": 7, "end": 41, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF7" }, { "start": 91, "end": 116, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Models Under Comparison", "sec_num": "4.2" }, { "text": "BiLSTM-CRF (Huang et al., 2015) on top to model the tagging decisions jointly. This model is considered state-of-the-art sequence tagging model for NER.", "cite_spans": [ { "start": 11, "end": 31, "text": "(Huang et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Models Under Comparison", "sec_num": "4.2" }, { "text": "OpenTag (Zheng et al., 2018) which adds a self attention mechanism between the contextual BiL-STM layer and the CRF decoding layer. OpenTag is considered the pioneer sequence tagging model for attribute-value extraction. We compare the above baseline models with the OpenBrand models we proposed in Section 3.", "cite_spans": [ { "start": 8, "end": 28, "text": "(Zheng et al., 2018)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Models Under Comparison", "sec_num": "4.2" }, { "text": "OpenBrand-LSTM In this approach, character level information is obtained by applying a BiL-STM encoder on the sequence of characters in each word. This character level information is used in combination with word-level embeddings as input to the BiLSTM-CRF-Attention model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Under Comparison", "sec_num": "4.2" }, { "text": "OpenBrand-CNN This approach is similar to the above model, but CNNs are used instead of LSTMs to encode character level information in the word sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Under Comparison", "sec_num": "4.2" }, { "text": "We use precision P , recall R and F 1 score as evaluation metrics based on the number of true positives (TP), false positives (FP), and false negatives (FN). We use Exact Match criteria (Rajpurkar et al., 2016) , in our evaluation, with either full or no credit. 
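The sketch below shows one way the exact-match counts and the metrics defined next could be computed. The function name and the convention of counting a wrong extraction as both a false positive and a false negative are illustrative assumptions, not a description of our evaluation code.

```python
def exact_match_scores(gold_values, predicted_values):
    """Compute precision, recall and F1 under the Exact Match criterion.

    gold_values / predicted_values: lists with one brand string per product
    title (None when no value is present / predicted). A prediction only
    counts as a true positive if it matches the gold value exactly.
    """
    tp = fp = fn = 0
    for gold, pred in zip(gold_values, predicted_values):
        if pred is not None and gold is not None and pred == gold:
            tp += 1
        elif pred is not None and pred != gold:
            fp += 1          # spurious or wrong extraction
            if gold is not None:
                fn += 1      # the gold value was also missed
        elif pred is None and gold is not None:
            fn += 1          # missed extraction
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```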
The implementation details are provided in the Appendix.", "cite_spans": [ { "start": 186, "end": 210, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Models Under Comparison", "sec_num": "4.2" }, { "text": "P = T P T P + F P R = T P T P + F N F1 = 2 \u00d7 P \u00d7 R P + R", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Under Comparison", "sec_num": "4.2" }, { "text": "We conducted a series of experiments on AZ-base, AZ-zero-shot, and AZ-new-cat datasets under various settings to evaluate the performance of Open-Brand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "In the first experiment, we compare the performance of OpenBrand with the three state-of-the-art baselines mentioned in Section 4.2 for identifying brand values from product descriptions. 4 reports the comparison results of our two models (OpenBrand-LSTM and OpenBrand-CNN) and three baselines across all categories in the AZ-base dataset. From these evaluation results, we can observe that our models substantially outperform the other compared models in all categories. Open-Brand with LSTM character level and CNN character level embeddings are consistently ranked the best over all competing baselines. The overall improvement in F 1 score is up to 6.1% as compared to OpenTag. The main reason for this result is that our model learns both character and word embeddings during training, thus allowing to learn more effective contextual embeddings that are more suitable for the task of extracting brand values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Performance Comparison", "sec_num": "5.1" }, { "text": "To understand the effect of character level representations on brand-value extraction, we extend all baseline models with character level embeddings and test them on the AZ-base dataset. Table 5 shows the average F 1 score of baseline models on the AZ-base dataset after adding character level representations. The results show that character level embeddings significantly improve the overall performances of all models. An interesting observation is that character level embeddings improve the model much more effectively than CRF or attention layers. For example, and as shown in the last two rows of Table 5 , adding a CNN-representation to a BiLSTM-CRF model improves the model by 1.15%, while adding an attention layer only improves the model by 0.14%. The experiments also show that using either CNN-char or LSTM-char both lead to an improvement with comparable overall F 1 score. However, CNNs have less training complexity as compared to LSTM models under similar experimental settings. In our experiments, the average training time of models with LSTM-char increased by 59% relative to the baseline BiLSTM-CRF-Att model, while it only increased by 22% with CNN-char, as detailed in Table 6 . CNN-char also produces better performances than LSTM-char as shown in Table 5 . 
We conclude that CNN character representations are preferable to LSTM based representations for brand-value extraction.", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 194, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 604, "end": 611, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 1192, "end": 1199, "text": "Table 6", "ref_id": null }, { "start": 1272, "end": 1279, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Impact of Character level Representations", "sec_num": "5.2" }, { "text": "Average Training Time per Epoch (seconds) Difference (\u2206%)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "BiLSTM-CRF-Att 63 0 +LSTM-char 100 +59% +CNN-char 77 +22% Table 6 : Average training time of our BiLSTM-CRF-Att models computed on a TPU.", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 65, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We conduct zero-shot extraction experiment to evaluate the generalization ability of our models on unseen brand values. Table 7 reports the zero-shot extraction results. It can be seen that our model achieves better performance than OpenTag on unseen data. This is because our model can leverage the sub-sequence level similarities in brand names between the train set and test set, through the character level embeddings. However, it is clear that the overall performance of all models is worse as compared to the results in with our expectations as there are no training samples for the zero-shot brand values. This indicates that it is truly a difficult zero-shot extraction task.", "cite_spans": [], "ref_spans": [ { "start": 120, "end": 127, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Discovering New Brand Values", "sec_num": "5.3" }, { "text": "To further examine the ability of OpenBrand in discovering brand values in new categories, we train the models on the AZ-base dataset, and test them on the AZ-new-cat dataset introduced in Section 4.1. Table 8 reports the results across three different categories in the AZ-new-cat dataset. It is clear that OpenBrand achieves much better performance with gains up to 2.7% in F 1 score as compared to OpenTag. This indicates that our model has good generalization and is able to transfer to other domains. Also, the results are much better than zero-shot extractions. This is because some data in the training set are semantically related to the brand values in AZ-new-cat and thus they provide hints that guide the extraction. For example, many of the brands in Cell Phones & Accessories category (eg. Samsung Galaxy) are sub-brands of products in Electronics category (eg. Samsung). Table 8 : Performance comparison between models on the AZ-new-cat dataset.", "cite_spans": [], "ref_spans": [ { "start": 202, "end": 209, "text": "Table 8", "ref_id": null }, { "start": 885, "end": 892, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Discovering New Brand Values", "sec_num": "5.3" }, { "text": "We also conducted experiments to explore the relationship between the number of entities that consti- tute the brand and the performance of the models. Since we use Exact Match criteria in our evaluations, detecting brand values with more than one entity becomes very challenging in general. We divide the test set of our AZ-base dataset into four subsets according to the number of entities inside a brand (see Figure 4 ). 
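A small sketch of how such subsets could be formed, assuming each test sample carries its gold brand string in a "brand" field; the field name and the bucketing convention are illustrative assumptions:

```python
from collections import defaultdict

def bucket_by_entity_count(test_samples, max_bucket=4):
    """Group test samples by the number of tokens in their gold brand value.

    test_samples: iterable of dicts with a "brand" field holding the gold
    value (field name assumed for illustration). Brands with max_bucket or
    more tokens fall into the last bucket.
    """
    buckets = defaultdict(list)
    for sample in test_samples:
        n_tokens = len(sample["brand"].split())
        buckets[min(n_tokens, max_bucket)].append(sample)
    return buckets
```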
While OpenTag achieves good overall F 1 performance with brand values consisting of single entities (88%), it is much worse on brand values with three or more entities (67% and 61% respectively). OpenBrand, on the other hand, still performs well even on brands with two or more entities (71% and 65% respectively).", "cite_spans": [], "ref_spans": [ { "start": 412, "end": 420, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Impact of Brand Entities", "sec_num": "5.4" }, { "text": "Our experimental results show that, for the task of extracting brand values, OpenBrand outperforms baseline approaches by a significant margin. Besides the general F1 score, the gains can be seen in both precision and recall which go up to 2.2% and 11.5%, respectively. This means that character embeddings do not only help discover more brand values but they also improve the accuracy of the extracted information. Furthermore, the gains in recall are also high for the AZ-new-cat and AZzero-shot datasets, reaching 3.3% and 1.46% of improvement respectively. Thus, OpenBrand performs particularly well for unseen data which confirms our initial claim that character embeddings enhance model generalizability. Another important finding of our study is that the performance of OpenBrand depends on the product category. We can observe that, for the Automotive category, the gain in precision is 0.2% while it goes up to 2.2% for the Toys & Games category. This is mainly due to an ambiguity problem in the product descriptions of the Automotive category. Some product descriptions might contain values of other brands other than the one that needs to be detected. Let us take the following product description: \"Honda Shadow 750 Aero Cobra Saddlebag Guards Supports\". This is about a \"Saddlebag Guards Supports\" that is compatible for \"Honda\" cars. The brand of this product is \"Cobra\" but the presence of \"Honda\" in the description can be confusing for the model leading to wrong extractions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.5" }, { "text": "We additionally observe that compound brand values are best handled by OpenBrand. This is due to the fact that the combination of character and word embeddings contributes to more meaningful representations. The results also show that OpenBrand-LSTM tends to perform worse, as compared to OpenBrand-CNN. This is inline with prior observations (Bradbury et al., 2017) that LSTM can be difficult to apply on long sequences of input.", "cite_spans": [ { "start": 343, "end": 366, "text": "(Bradbury et al., 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.5" }, { "text": "There has been significant research on the task of attribute-value extraction from product descriptions (Wong et al., 2009) . Initial approaches (Vandic et al., 2012) formulated the problem as a classification task relying on supervised learning techniques. (Ghani et al., 2006 ) use a Naive Bayes classifier to extract values that correspond to a predefined set of product attributes. (Putthividhya and Hu, 2011) focus on annotating brands in product listings of apparel products on eBay. (Kovelamudi et al., 2011) propose a domain independent supervised system that can automatically discover product attributes from user reviews using Wikipedia. Similarly, (Ling and Weld, 2012) propose an automatic labeling process of entities by making use of anchor links from Wikipedia text. 
Other approaches exploited unsupervised learning techniques like (Shinzato and Sekine, 2013) in their task of extracting attribute-values from e-commerce product pages. Following a similar line, (Charron et al., 2016) use consumer patterns to create annotations for datadriven products. (Bing et al., 2016) focus on the discovery of hidden patterns in costumer reviews to improve attribute-value extraction. The above approaches provide promising results, however they poorly handle the discovery of new values due to their closed world assumption.", "cite_spans": [ { "start": 104, "end": 123, "text": "(Wong et al., 2009)", "ref_id": "BIBREF26" }, { "start": 145, "end": 166, "text": "(Vandic et al., 2012)", "ref_id": "BIBREF24" }, { "start": 258, "end": 277, "text": "(Ghani et al., 2006", "ref_id": "BIBREF6" }, { "start": 490, "end": 515, "text": "(Kovelamudi et al., 2011)", "ref_id": "BIBREF11" }, { "start": 660, "end": 681, "text": "(Ling and Weld, 2012)", "ref_id": "BIBREF14" }, { "start": 978, "end": 1000, "text": "(Charron et al., 2016)", "ref_id": "BIBREF3" }, { "start": 1070, "end": 1089, "text": "(Bing et al., 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "The most recent approaches (Kozareva et al., 2016; Zheng et al., 2018; Xu et al., 2019) make instead an open world assumption using sequence tagging models, similarly to NER tasks (Ma and Hovy, 2016; Huang et al., 2015) . (Kozareva et al., 2016) use a BiLSTM-CRF model to tag several product attributes for brands and models with handcrafted features. (Zheng et al., 2018) develop an end-to-end tagging model utilizing BiLSTM and CRF without using any dictionary or hand-crafted features. After that, (Xu et al., 2019) adopted only one global set of BIO tags for any attributes to scale up the semantic representation models of product titles. In this context, (Karamanolakis et al., 2020) proposed a taxonomy aware knowledge extraction model that takes advantage of the hierarchical relationships between product categories. The latest approaches extend the open world assumption also to attributes and use question answering (QA) models (Wang et al., 2020) to scale to a larger number of attributes. Sequence tagging approaches are the most relevant to our work since extracting brand names does not require scalability. However, these models did not exploit character level embeddings which are crucial for improving generalizability. 
In our work, we enhance such models using different granularities of embeddings.", "cite_spans": [ { "start": 27, "end": 50, "text": "(Kozareva et al., 2016;", "ref_id": "BIBREF12" }, { "start": 51, "end": 70, "text": "Zheng et al., 2018;", "ref_id": "BIBREF28" }, { "start": 71, "end": 87, "text": "Xu et al., 2019)", "ref_id": "BIBREF27" }, { "start": 180, "end": 199, "text": "(Ma and Hovy, 2016;", "ref_id": "BIBREF15" }, { "start": 200, "end": 219, "text": "Huang et al., 2015)", "ref_id": "BIBREF8" }, { "start": 222, "end": 245, "text": "(Kozareva et al., 2016)", "ref_id": "BIBREF12" }, { "start": 352, "end": 372, "text": "(Zheng et al., 2018)", "ref_id": "BIBREF28" }, { "start": 501, "end": 518, "text": "(Xu et al., 2019)", "ref_id": "BIBREF27" }, { "start": 661, "end": 689, "text": "(Karamanolakis et al., 2020)", "ref_id": "BIBREF9" }, { "start": 939, "end": 958, "text": "(Wang et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper we have addressed the problem of extracting brand values from product descriptions. Previous state-of-the-art sequence tagging methods faced the challenge of discovering new values that have not been encountered before. To tackle this issue we proposed OpenBrand, a novel attribute-value extraction model with the integration of character level representations to improve generalizability. We presented experiments on realworld datasets in different categories which show that OpenBrand outperforms state-of-the-art approaches and baselines. By exploiting character level embeddings, OpenBrand is capable of learning accurate representations to discover new brand values. Our experiments also show that CNN based representations outperform LSTM based representations in both performance and computation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "A natural extension of this work is to deal with the problem of disambiguation discussed in Section 5.5. To this end, we need to have more training data which helps understating the patterns in a better way. Moreover, we need to extend the tagging model to capture ambiguous product descriptions. This extension can be very important when brand values need to be extracted from other data sources other than concise product descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Data is available at https://github.com/ kassemsabeh/open-brand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "OpenBrand ModelTo address the open brand value extraction problem, we propose a BiLSTM-CRF-Attention model with character level embeddings.Figure 2 shows", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://nijianmo.github.io/amazon/ index.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "A.1 Implementation DetailsOur models are implemented with Tensorflow 3 and Keras 4 , and they are trained using TPUs on the 3 https://www.tensorflow.org/. 4 https://keras.io/. cloud. We used the validation set of AZ-base to select the optimal hyper-parameters of our model, while the test set was used to report the final results.During training, optimization is performed with Adam optimizer (Kingma and Ba, 2015) using a 1e \u22123 initial learning rate. 
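As an illustration of these settings, the sketch below compiles and fits a Keras sequence-tagging model with the optimizer, learning rate, batch size, number of epochs, and early-stopping criterion reported in this appendix. The model and data arguments, as well as the stand-in loss, are placeholders rather than our exact implementation.

```python
import tensorflow as tf

def train_openbrand(model, train_inputs, train_tags, val_inputs, val_tags):
    """Compile and fit a sequence-tagging model with the settings in this appendix.

    `model` is assumed to map the (word ids, char ids) inputs to per-token tag
    scores; the inputs/tags arguments hold the encoded titles and their
    {B, I, O} tag sequences (all placeholders for illustration).
    """
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss="sparse_categorical_crossentropy",  # stand-in; a CRF output layer supplies its own loss
        metrics=["accuracy"])

    early_stopping = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss",   # improvement is assessed on the validation loss
        patience=10)          # stop after 10 epochs without improvement

    return model.fit(
        train_inputs, train_tags,
        validation_data=(val_inputs, val_tags),
        batch_size=128, epochs=100,
        callbacks=[early_stopping])
```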
For all models, we employed pre-trained 100-dimensional word vectors from GloVe (Pennington et al., 2014) . All models use a dropout layer (Srivastava et al., 2014) of size 0.3 both before and after the BiLSTM layer. The minibatch size is fixed to 128. The BIO tagging scheme is adopted. In the training process, we used the loss score on the validation set to assess model improvement. The models were trained for a total of 100 epochs, and early stopping was applied if there was no improvement for a period of 10 epochs. The average training time for each epoch was also recorded. Tables 9 and 10 show the selected hyperparameters in the CNN-based and LSTM-based models respectively, based on the performance on the validation set. These include the character embeddings dimension. The tables also show the total number of trainable parameters for each model. The difference in number of trainable parameters shows that CNNs have less training complexity as compared to LSTM models under similar experimental settings.", "cite_spans": [ { "start": 532, "end": 557, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF18" }, { "start": 591, "end": 616, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 1036, "end": 1051, "text": "Tables 9 and 10", "ref_id": null } ], "eq_spans": [], "section": "A Appendix", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unsupervised extraction of popular product attributes from e-commerce web sites by considering customer reviews", "authors": [ { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" }, { "first": "Tak-Lam", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" } ], "year": 2016, "venue": "ACM Trans. Internet Technol", "volume": "16", "issue": "2", "pages": "", "other_ids": { "DOI": [ "10.1145/2857054" ] }, "num": null, "urls": [], "raw_text": "Lidong Bing, Tak-Lam Wong, and Wai Lam. 2016. Un- supervised extraction of popular product attributes from e-commerce web sites by considering customer reviews. ACM Trans. 
Internet Technol., 16(2).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Quasi-recurrent neural networks", "authors": [ { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Merity", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2017. Quasi-recurrent neural net- works. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Open- Review.net.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Extracting semantic information for ecommerce", "authors": [ { "first": "Bruno", "middle": [], "last": "Charron", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Hirate", "suffix": "" }, { "first": "David", "middle": [], "last": "Purcell", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Rezk", "suffix": "" } ], "year": 2016, "venue": "The Semantic Web -ISWC 2016", "volume": "", "issue": "", "pages": "273--290", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bruno Charron, Yu Hirate, David Purcell, and Martin Rezk. 2016. Extracting semantic information for e- commerce. In The Semantic Web -ISWC 2016, pages 273-290, Cham. Springer International Publishing.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4", "authors": [ { "first": "P", "middle": [ "C" ], "last": "Jason", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "", "middle": [], "last": "Nichols", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Trans- actions of the Association for Computational Linguis- tics, 4.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "International Scientific Conference: Business Economics and Management (BEM2015)", "authors": [ { "first": "Henrieta", "middle": [], "last": "Hrablik Chovanov\u00e1", "suffix": "" }, { "first": "Aleksander", "middle": [], "last": "Ivanovich Korshunov", "suffix": "" }, { "first": "Dagmar", "middle": [], "last": "Bab\u010danov\u00e1", "suffix": "" } ], "year": 2015, "venue": "", "volume": "34", "issue": "", "pages": "615--621", "other_ids": { "DOI": [ "10.1016/S2212-5671(15)01676-7" ] }, "num": null, "urls": [], "raw_text": "Henrieta Hrablik Chovanov\u00e1, Aleksander Ivanovich Ko- rshunov, and Dagmar Bab\u010danov\u00e1. 2015. Impact of brand on consumer behavior. Procedia Economics and Finance, 34:615-621. 
International Scientific Conference: Business Economics and Management (BEM2015).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Text mining for product attribute extraction", "authors": [ { "first": "Rayid", "middle": [], "last": "Ghani", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Probst", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Marko", "middle": [], "last": "Krema", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Fano", "suffix": "" } ], "year": 2006, "venue": "SIGKDD Explor. Newsl", "volume": "8", "issue": "1", "pages": "41--48", "other_ids": { "DOI": [ "10.1145/1147234.1147241" ] }, "num": null, "urls": [], "raw_text": "Rayid Ghani, Katharina Probst, Yan Liu, Marko Krema, and Andrew Fano. 2006. Text mining for prod- uct attribute extraction. SIGKDD Explor. Newsl., 8(1):41-48.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Comput", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": { "DOI": [ "10.1162/neco.1997.9.8.1735" ] }, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bidirectional lstm-crf models for sequence tagging", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "TXtract: Taxonomy-aware knowledge extraction for thousands of product categories", "authors": [ { "first": "Giannis", "middle": [], "last": "Karamanolakis", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Xin Luna", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8489--8502", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.751" ] }, "num": null, "urls": [], "raw_text": "Giannis Karamanolakis, Jun Ma, and Xin Luna Dong. 2020. TXtract: Taxonomy-aware knowledge extrac- tion for thousands of product categories. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8489-8502, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Domain independent model for product attribute extraction from user reviews using Wikipedia", "authors": [ { "first": "Sudheer", "middle": [], "last": "Kovelamudi", "suffix": "" }, { "first": "Sethu", "middle": [], "last": "Ramalingam", "suffix": "" }, { "first": "Arpit", "middle": [], "last": "Sood", "suffix": "" }, { "first": "Vasudeva", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2011, "venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1408--1412", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sudheer Kovelamudi, Sethu Ramalingam, Arpit Sood, and Vasudeva Varma. 2011. Domain independent model for product attribute extraction from user re- views using Wikipedia. In Proceedings of 5th In- ternational Joint Conference on Natural Language Processing, pages 1408-1412, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Recognizing salient entities in shopping queries", "authors": [ { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zornitsa Kozareva, Qi Li, Ke Zhai, and Weiwei Guo. 2016. Recognizing salient entities in shopping queries. In ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "260--270", "other_ids": { "DOI": [ "10.18653/v1/N16-1030" ] }, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Fine-grained entity recognition", "authors": [ { "first": "Xiao", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, AAAI'12", "volume": "", "issue": "", "pages": "94--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiao Ling and Daniel S. Weld. 2012. Fine-grained en- tity recognition. 
In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, AAAI'12, page 94-100. AAAI Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1064--1074", "other_ids": { "DOI": [ "10.18653/v1/P16-1101" ] }, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed representa- tions of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems -Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Justifying recommendations using distantly-labeled reviews and fine-grained aspects", "authors": [ { "first": "Jianmo", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Jiacheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Mcauley", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "188--197", "other_ids": { "DOI": [ "10.18653/v1/D19-1018" ] }, "num": null, "urls": [], "raw_text": "Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 188-197, Hong Kong, China. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bootstrapped named entity recognition for product attribute extraction", "authors": [ { "first": ";", "middle": [], "last": "Duangmanee", "suffix": "" }, { "first": "Junling", "middle": [], "last": "Putthividhya", "suffix": "" }, { "first": "", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1557--1567", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duangmanee (Pew) Putthividhya and Junling Hu. 2011. Bootstrapped named entity recognition for product attribute extraction. In Proceedings of the Confer- ence on Empirical Methods in Natural Language Processing, EMNLP '11, page 1557-1567, USA. As- sociation for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": { "DOI": [ "10.18653/v1/D16-1264" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Influence of brand name on consumer choice & decision", "authors": [ { "first": "Umer", "middle": [], "last": "Shahzad", "suffix": "" }, { "first": "Salman", "middle": [], "last": "Ahmad", "suffix": "" }, { "first": "Kashif", "middle": [], "last": "Iqbal", "suffix": "" } ], "year": 2014, "venue": "IOSR Journal of Business and Management", "volume": "16", "issue": "", "pages": "72--76", "other_ids": { "DOI": [ "10.9790/487X-16637276" ] }, "num": null, "urls": [], "raw_text": "Umer Shahzad, Salman Ahmad, Kashif Iqbal, Muham- mad Nawaz, and Saqib Usman. 2014. Influence of brand name on consumer choice & decision. 
IOSR Journal of Business and Management, 16:72-76.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Unsupervised extraction of attributes and their values from product description", "authors": [ { "first": "Keiji", "middle": [], "last": "Shinzato", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1339--1347", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keiji Shinzato and Satoshi Sekine. 2013. Unsupervised extraction of attributes and their values from product description. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1339-1347, Nagoya, Japan. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "J. Mach. Learn. Res", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural net- works from overfitting. J. Mach. Learn. Res., 15(1):1929-1958.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Faceted product search powered by the semantic web", "authors": [ { "first": "Damir", "middle": [], "last": "Vandic", "suffix": "" }, { "first": "Jan-Willem", "middle": [], "last": "Dam", "suffix": "" }, { "first": "Flavius", "middle": [], "last": "Frasincar", "suffix": "" } ], "year": 2012, "venue": "Decision Support Systems", "volume": "53", "issue": "", "pages": "425--437", "other_ids": { "DOI": [ "10.1016/j.dss.2012.02.010" ] }, "num": null, "urls": [], "raw_text": "Damir Vandic, Jan-Willem Dam, and Flavius Frasincar. 2012. Faceted product search powered by the seman- tic web. Decision Support Systems, 53:425-437.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning to extract attribute value from product via question answering: A multi-task approach", "authors": [ { "first": "Qifan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Bhargav", "middle": [], "last": "Kanagal", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Sanghai", "suffix": "" }, { "first": "D", "middle": [], "last": "Sivakumar", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Zac", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Elsas", "suffix": "" } ], "year": 2020, "venue": "KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "47--55", "other_ids": { "DOI": [ "10.1145/3394486.3403047" ] }, "num": null, "urls": [], "raw_text": "Qifan Wang, Li Yang, Bhargav Kanagal, Sumit Sanghai, D. Sivakumar, Bin Shu, Zac Yu, and Jon Elsas. 2020. 
Learning to extract attribute value from product via question answering: A multi-task approach. In KDD '20: The 26th ACM SIGKDD Conference on Knowl- edge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 47-55. ACM.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Scalable attribute-value extraction from semi-structured text", "authors": [ { "first": "Yuk", "middle": [ "Wah" ], "last": "Wong", "suffix": "" }, { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Lokovic", "suffix": "" }, { "first": "Kamal", "middle": [], "last": "Nigam", "suffix": "" } ], "year": 2009, "venue": "ICDM Workshop on Large-scale Data Mining: Theory and Applications", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "http://www.computer.org/portal/web/csdl/doi/10.1109/ICDMW.2009.81" ] }, "num": null, "urls": [], "raw_text": "Yuk Wah Wong, Dominic Widdows, Tom Lokovic, and Kamal Nigam. 2009. Scalable attribute-value extrac- tion from semi-structured text. In ICDM Workshop on Large-scale Data Mining: Theory and Applica- tions.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title", "authors": [ { "first": "Huimin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenting", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Xinyu", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Man", "middle": [], "last": "Lan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5214--5223", "other_ids": { "DOI": [ "10.18653/v1/P19-1514" ] }, "num": null, "urls": [], "raw_text": "Huimin Xu, Wenting Wang, Xin Mao, Xinyu Jiang, and Man Lan. 2019. Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5214-5223, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Opentag: Open attribute value extraction from product profiles", "authors": [ { "first": "Guineng", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Subhabrata", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Xin", "middle": [ "Luna" ], "last": "Dong", "suffix": "" }, { "first": "Feifei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18", "volume": "", "issue": "", "pages": "1049--1058", "other_ids": { "DOI": [ "10.1145/3219819.3219839" ] }, "num": null, "urls": [], "raw_text": "Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In Proceed- ings of the 24th ACM SIGKDD International Con- ference on Knowledge Discovery & Data Mining, KDD '18, page 1049-1058, New York, NY, USA. 
Association for Computing Machinery.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "An example of a product description.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Figure 1: X = \"Wooden Stacking Board Games 54 Pieces for Kids Adult and Families, Gentle Monster Wooden Blocks Toys for Toddlers, Colored Building Blocks -6 Colors 2 Dice.\"", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "OpenBrand Architecture: BiLSTM-CRF-Attention with character level representations.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "LSTM-based character level representation.", "num": null }, "FIGREF4": { "uris": null, "type_str": "figure", "text": "Architecture of character level encoders.", "num": null }, "FIGREF5": { "uris": null, "type_str": "figure", "text": "Impact of number of entities on the model performance.", "num": null }, "TABREF0": { "type_str": "table", "html": null, "content": "
Output O O O B-Brand I-Brand O O O
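For illustration only, a minimal sketch (hypothetical helper, not code from the paper) of how the brand span "Gentle Monster" in the example title yields the BIO tag sequence shown above:

# Hypothetical illustration: derive the BIO tags shown above from a known brand span.
def bio_tags(tokens, brand_tokens):
    tags = ["O"] * len(tokens)          # default: outside any brand mention
    n = len(brand_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == brand_tokens:
            tags[i] = "B-Brand"                         # first token of the brand
            tags[i + 1:i + n] = ["I-Brand"] * (n - 1)   # remaining brand tokens
    return tags

tokens = "Kids Adult Families Gentle Monster Wooden Blocks Toys".split()
print(bio_tags(tokens, ["Gentle", "Monster"]))
# -> ['O', 'O', 'O', 'B-Brand', 'I-Brand', 'O', 'O', 'O']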
", "num": null, "text": "Input Kids Adult Families Gentle Monster Wooden Blocks Toys" }, "TABREF2": { "type_str": "table", "html": null, "content": "", "num": null, "text": "Statistics of AZ-base dataset with five categories." }, "TABREF4": { "type_str": "table", "html": null, "content": "
", "num": null, "text": "Number of samples in AZ-new-cat dataset." }, "TABREF5": { "type_str": "table", "html": null, "content": "
P R F1
Grocery & Gourmet Food
BiLSTM 70.4 65.9 68.1
BiLSTM-CRF 74.9 66.0 70.2
OpenTag 76.0 65.4 70.3
OpenBrand-LSTM 75.9 77.5 71.8
OpenBrand-CNN 77.5 75.4 76.4
Toys & Games
BiLSTM 73.7 69.1 71.3
BiLSTM-CRF 78.9 70.5 74.5
OpenTag 79.1 70.3 74.5
OpenBrand-LSTM 80.2 72.4 76.1
OpenBrand-CNN 81.3 72.0 76.4
Sports & Outdoors
BiLSTM 80.3 75.8 78.0
BiLSTM-CRF 84.1 75.4 79.5
OpenTag 84.9 75.0 79.6
OpenBrand-LSTM 85.7 76.8 81.0
OpenBrand-CNN 86.1 77.3 81.5
Electronics
BiLSTM 86.2 80.4 83.2
BiLSTM-CRF 87.8 81.5 84.5
OpenTag 89.2 79.6 84.2
OpenBrand-LSTM 89.1 80.8 84.8
OpenBrand-CNN 89.7 80.5 84.9
Automotive
BiLSTM 88.5 84.3 86.4
BiLSTM-CRF 90.9 85.0 87.9
OpenTag 91.6 84.6 87.9
OpenBrand-LSTM 91.7 85.0 88.2
OpenBrand-CNN 91.8 85.4 88.5
Table 4: Performance comparison between different models on AZ-base dataset.
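As a quick consistency check on the rows above, a minimal sketch assuming the F1 column is the harmonic mean of the P and R columns (the standard definition):

# Worked check: recompute F1 from P and R for the Automotive / OpenBrand-CNN row.
def f1(p, r):
    return 2 * p * r / (p + r)

print(round(f1(91.8, 85.4), 1))  # 88.5, matching the value reported in the table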
", "num": null, "text": "OpenBrand-LSTM 75.9 77.5 71.8 OpenBrand-CNN 77.5 75.4 76.4" }, "TABREF7": { "type_str": "table", "html": null, "content": "", "num": null, "text": "Effect of character embeddings on the performance of the models (F 1 score)." }, "TABREF8": { "type_str": "table", "html": null, "content": "
, which is in line
", "num": null, "text": "" }, "TABREF9": { "type_str": "table", "html": null, "content": "", "num": null, "text": "Zero-shot extraction results on AZ-zero-shot dataset." } } } }