{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:33:33.223973Z" }, "title": "Learning Cross-Task Attribute -Attribute Similarity for Multi-task Attribute-Value Extraction", "authors": [ { "first": "Mayank", "middle": [], "last": "Jain", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIT Kharagpur", "location": { "country": "India" } }, "email": "" }, { "first": "Sourangshu", "middle": [], "last": "Bhattacharya", "suffix": "", "affiliation": { "laboratory": "", "institution": "IIT Kharagpur", "location": { "country": "India" } }, "email": "sourangshu@iitkgp.ac.in" }, { "first": "Harshit", "middle": [], "last": "Jain", "suffix": "", "affiliation": {}, "email": "harshit.jain@flipkart.com" }, { "first": "Karimulla", "middle": [], "last": "Shaik", "suffix": "", "affiliation": {}, "email": "karimulla.shaik@flipkart.com" }, { "first": "Muthusamy", "middle": [], "last": "Chelliah", "suffix": "", "affiliation": {}, "email": "muthusamy.c@flipkart.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automatic extraction of product attributevalue pairs from unstructured text like product descriptions is an important problem for ecommerce companies. The attribute schema typically varies from one category of products (which will be referred as vertical) to another. This leads to extreme annotation efforts for training of supervised deep sequence labeling models such as LSTM-CRF, and consequently not enough labeled data for some verticalattribute pairs. In this work, we propose a technique for alleviating this problem by using annotated data from related verticals in a multitask learning framework. Our approach relies on availability of similar attributes (labels) in another related vertical. Our model jointly learns the similarity between attributes of the two verticals along with the model parameters for the sequence tagging model. The main advantage of our approach is that it does not need any prior annotation of attribute similarity. Our system has been tested with datasets of size more than 10000 from a large e-commerce company in India. We perform detailed experiments to show that our method indeed increases the macro-F1 scores for attribute value extraction in general, and for labels with low training data in particular. We also report top labels from other verticals that contribute towards learning of particular labels.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Automatic extraction of product attributevalue pairs from unstructured text like product descriptions is an important problem for ecommerce companies. The attribute schema typically varies from one category of products (which will be referred as vertical) to another. This leads to extreme annotation efforts for training of supervised deep sequence labeling models such as LSTM-CRF, and consequently not enough labeled data for some verticalattribute pairs. In this work, we propose a technique for alleviating this problem by using annotated data from related verticals in a multitask learning framework. Our approach relies on availability of similar attributes (labels) in another related vertical. Our model jointly learns the similarity between attributes of the two verticals along with the model parameters for the sequence tagging model. The main advantage of our approach is that it does not need any prior annotation of attribute similarity. 
Our system has been tested on datasets of size more than 10,000 from a large e-commerce company in India. We perform detailed experiments to show that our method indeed increases the macro-F1 scores for attribute value extraction in general, and for labels with low training data in particular. We also report top labels from other verticals that contribute towards learning of particular labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Online e-commerce marketplaces (e.g., Flipkart) operate by efficiently matching customer queries and browsing habits to appropriate seller inventory. Inventory is stored in a catalog which consists of images, structured attributes (key-value pairs) and unstructured textual descriptions, as shown in figure 1. Products of the same kind (e.g., digital camera) are thus described using a unique set of attributes (e.g., zoom, resolution), helping faceted navigation, merchandizing, search ranking and comparative summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Onboarding products in a catalog requires populating the structured as well as unstructured parts. The time a seller has to spend on a product addition request is proportional to the amount of information that he/she has to provide. On the other hand, correctness and completeness of the catalog result in better product discovery, leading to a trade-off with onboarding time. A good amount of attribute information is present in the product description as well. This motivates us to extract the information from unstructured text instead of explicitly asking sellers for attributes. Additional information in the description (e.g., precise features, relations between products), as shown in figure 1, helps to enrich the catalog as well. The extracted attributes can be used to check consistency between the unstructured and structured data provided by the seller, and thus for quality control of the addition request.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We design supervised deep learning techniques for the problem of attribute value extraction. Figure 2 shows a typical input sentence and the corresponding B, I, O tags. The task of our model is to predict the tags given an input sentence. This is related to the supervised sequence labelling problem (Zheng et al., 2018; Lample et al., 2016) . However, this technique needs a lot of training data points (sentence-label pairs) to perform effectively, which in turn requires massive annotation effort on the part of e-commerce companies, the reduction of which is an ongoing challenge; Open-Tag (Zheng et al., 2018) uses active learning to annotate only the most informative examples. E-commerce companies, however, have their products categorized into different verticals, e.g. dress, jeans, etc. Each of these verticals has a different set of attributes, and hence needs to be annotated using a different model. Many of the attributes among these verticals are, however, common or related. Hence, it should be possible to borrow information from annotations given in different verticals, to improve the prediction performance of a given vertical. 
The only challenge is that correspondences between similar labels of different verticals are not readily available.", "cite_spans": [ { "start": 302, "end": 322, "text": "(Zheng et al., 2018;", "ref_id": "BIBREF15" }, { "start": 323, "end": 343, "text": "Lample et al., 2016)", "ref_id": "BIBREF6" }, { "start": 592, "end": 612, "text": "(Zheng et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 93, "end": 99, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main contribution here is thus to develop a multi-task learning (MTL) model (Ruder, 2017) which can simultaneously learn attribute extraction and attribute-attribute similarity for multiple verticals (here we report results with only two verticals at a time). We do so by using a soft coupling loss function across pairs of similar (context,label) combinations between the two tasks, where the similarity is learned using an attention mechanism. The naive version of such an objective would be prohibitively large to optimize. We propose to use a cosine-similarity-based shortlist, which makes the solution feasible.", "cite_spans": [ { "start": 80, "end": 93, "text": "(Ruder, 2017)", "ref_id": "BIBREF11" }, { "start": 328, "end": 343, "text": "(context,label)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We validate our method using a large corpus (more than 10000 product descriptions, across 6 verticals) collected from the e-commerce company Flipkart. Extensive experimentation shows that our method improves prediction performance on almost all the verticals, and in particular shows up to 50% improvement for many labels which have a low number of training examples. This is especially interesting since we find that the number of instances with an attribute is highly skewed across the attributes. Detailed analysis also confirms that the attention mechanism indeed discovers similar attributes from other verticals to borrow information from.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Attribute extraction: Various tokens (e.g., Apple) in an offer title are classified into attribute names (e.g., brand) relevant to the product (e.g., smartphone) (Joshi et al., 2015) . For recognizing attributes (e.g., product family) in a short text segment, missing KB entries are leveraged through word embeddings learned on an unlabeled corpus (Kozareva et al., 2016) . (Joshi et al., 2015) investigates whether distributed word vectors benefit NER in the e-commerce domain, where entities are item properties (e.g., brand name, color, material, clothing size). (Xu et al., 2019) regards each attribute as a query and adopts only one global set of BIO tags for any attribute, to reduce the burden of attribute tag or model explosion. Open-Tag (Zheng et al., 2018) uses active learning along with a deep tagging model to update a product catalog with missing values for many attributes of interest from the seller-provided title/description. To create the initial labeled data set, (Rezk et al., 2019) proposes bootstrapping of seed data by extracting new values from unstructured text in a domain/language-independent fashion. Attribute prediction and value extraction tasks are jointly modelled (Zhu et al., 2020) from multiple aspects, capturing interactions between attributes and values. 
Contrastive entity linkage (Embar et al., 2020) helps identify grocery product attribute pairs that share the same value (e.g., brand, manufacturer, product line) and differ from each other (e.g., package size, color). Retailers do not always provide clean data in the textual descriptions of a product catalog (e.g., non-distinctive names (cotton, black t-shirt), blurred distinctions (Amazon as a product vs. a brand), homonyms (Apple)). (Alonso et al., 2019) discovers such attribute relationships towards a brand-product knowledge graph from diverse input data sources.", "cite_spans": [ { "start": 162, "end": 182, "text": "(Joshi et al., 2015)", "ref_id": "BIBREF3" }, { "start": 348, "end": 371, "text": "(Kozareva et al., 2016)", "ref_id": "BIBREF5" }, { "start": 374, "end": 394, "text": "(Joshi et al., 2015)", "ref_id": "BIBREF3" }, { "start": 565, "end": 582, "text": "(Xu et al., 2019)", "ref_id": "BIBREF13" }, { "start": 745, "end": 765, "text": "(Zheng et al., 2018)", "ref_id": "BIBREF15" }, { "start": 979, "end": 998, "text": "(Rezk et al., 2019)", "ref_id": "BIBREF9" }, { "start": 1291, "end": 1309, "text": "(Zhu et al., 2020)", "ref_id": "BIBREF16" }, { "start": 1411, "end": 1431, "text": "(Embar et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Multi-task Learning (MTL): Significant theoretical interest exists in MTL since it offers excellent generalization performance in domains where training data is scarce (Maurer et al., 2016) . In NLP, (Collobert and Weston, 2008) proposed a unified deep learning architecture for many common tasks, e.g. POS tagging, chunking, etc. (Yang and Hospedales, 2017) presented a new representation MTL framework that learns cross-task sharing structure at every layer in a deep network. (Kar et al., 2018) proposed a task-sensitive representation learning framework that learns mention-dependent representations for NED, departing from the norm of sharing parameters in the final layer. (Wang et al., 2020) treats each attribute as a question and finds the best answer span corresponding to its value in the product context, modelled by a BERT encoder shared across all attributes for scalability. A distilled masked language model improving generalizability is then integrated with the encoder into a unified MTL framework. Through category-conditional self-attention and multi-task learning, the knowledge extraction model of (Karamanolakis et al., 2020) applies to thousands of product categories organized in a hierarchical taxonomy. However, existing methods do not automatically discover attribute-attribute similarity from data without taking an attribute hierarchy as input.", "cite_spans": [ { "start": 169, "end": 190, "text": "(Maurer et al., 2016)", "ref_id": "BIBREF7" }, { "start": 201, "end": 229, "text": "(Collobert and Weston, 2008)", "ref_id": "BIBREF1" }, { "start": 331, "end": 358, "text": "(Yang and Hospedales, 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we describe a novel multi-task approach to improving the accuracy of a supervised attribute value extraction system. We start with the attribute-value extraction system, based on the deep bidirectional LSTM model described in OpenTag (Zheng et al., 2018) . Our main idea here is to leverage the information contained in instances of related tasks, e.g., in our case, related domains/verticals of products. The key challenge in our case is that the set of labels across verticals need not be the same, or even aligned. 
For example, the label PROCESSOR TYPE is a valid label for the LAPTOP vertical but does not make sense for the DRESS vertical. On the other hand, the set of values for the common label BRAND will be very different for the vertical DRESS compared to the vertical LAPTOP. Hence, our core challenge here is to determine the similarities between labels automatically, in the context of each vertical, in order to leverage the information from a related vertical. The proposed architecture is described in figure 3.", "cite_spans": [ { "start": 248, "end": 268, "text": "(Zheng et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "3" }, { "text": "Each instance of the (single-task) attribute-value extraction problem comes with an input sentence denoted by a sequence of words w = {w_1, . . . , w_n} and a corresponding set of labels y = {y_1, . . . , y_n}. The task is to design a supervised ML algorithm which, given the input sentence w, predicts the output labels y. Here, the labels correspond to the attributes, e.g. COLOR, and the words correspond to the predicted values. Following common practice, we use 3 types of labels (also called tags): B, I, O. Here B and I are prepended to the label to indicate the beginning and the continuation of a multi-word tag, respectively, while O refers to no tag for the word. For example, the multi-word color \"light green\" may be tagged as B COLOR and I COLOR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem setup", "sec_num": "3.1" }, { "text": "This is an instance of the sequence labeling problem (Lample et al., 2016) , and the LSTM-CRF model proposed by Lample et al. (Lample et al., 2016) is a state-of-the-art model for this task. For each word w_i, we obtain the corresponding word embedding x_i using a concatenation of its GloVe embedding (Pennington et al., 2014) and its character-based embedding. The word embeddings of a sentence x = {x_1, . . . , x_n} are passed through a Bidirectional LSTM (BiLSTM) layer to produce the context-sensitive word embeddings h:", "cite_spans": [ { "start": 49, "end": 70, "text": "(Lample et al., 2016)", "ref_id": "BIBREF6" }, { "start": 108, "end": 143, "text": "Lample et al. (Lample et al., 2016)", "ref_id": "BIBREF6" }, { "start": 303, "end": 327, "text": "(Pennington et al., 2014", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Problem setup", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h = BiLSTM(x)", "eq_num": "(1)" } ], "section": "Problem setup", "sec_num": "3.1" }, { "text": "We call this the embedding layer for our input, which is common to both the single-task and multi-task models. Figure 3 (a) describes the architecture in detail. For the multi-task attribute-value extraction problem, the input is a sentence w^t_j, j = 1, . . . , n, and the output of the model is a sequence of labels y^t_j, j = 1, . . . , n, where t = 1, . . . , T . In this paper we only consider the setting of T = 2, i.e., we learn from 2 tasks at a time, for scalability reasons. However, in theory our method can be extended to learning from more than 2 tasks. 
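As an illustration only, a minimal TensorFlow/Keras sketch of this shared embedding layer is given below; the function and variable names are hypothetical, the shapes are indicative, and a pooled character embedding stands in for the character-level BiLSTM that produces the character-based embedding.

import tensorflow as tf

def build_embedding_layer(glove_matrix, char_vocab_size, lstm_units=700):
    # word_ids: (batch, n) word indices; char_ids: (batch, n, max_word_len) character indices
    word_ids = tf.keras.Input(shape=(None,), dtype=tf.int32)
    char_ids = tf.keras.Input(shape=(None, None), dtype=tf.int32)
    # x_i = [GloVe embedding ; character-based embedding]
    word_emb = tf.keras.layers.Embedding(
        glove_matrix.shape[0], glove_matrix.shape[1],
        embeddings_initializer=tf.keras.initializers.Constant(glove_matrix),
        trainable=False)(word_ids)
    char_emb = tf.keras.layers.Embedding(char_vocab_size, 25)(char_ids)
    char_emb = tf.reduce_max(char_emb, axis=2)   # pooled stand-in for a character-level BiLSTM
    x = tf.keras.layers.Concatenate()([word_emb, char_emb])
    # h = BiLSTM(x), the context-sensitive embeddings shared by both tasks
    h = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(lstm_units, return_sequences=True))(x)
    return tf.keras.Model([word_ids, char_ids], h)

The task-specific scoring and CRF layers described in the following subsections sit on top of the returned h. 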
We compute the word embeddings x and the context-dependent word embeddings h in a similar manner as described above.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 113, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Problem setup", "sec_num": "3.1" }, { "text": "We use the LSTM-CRF model with character embeddings (Lample et al., 2016; Zheng et al., 2018) as our baseline single-task model. For a given input sentence, the word embeddings x and the context-sensitive word embeddings h are computed as described above. Each context-sensitive embedding h_i, i = 1, . . . , n, is then passed through a fully connected layer to produce the score s(y) for every possible label y. This is parameterized by the matrix W \u2208 R^{d\u00d7k} and b \u2208 R^k, where d is the dimension of h_i and k is the total number of possible labels. Hence the score vector for every label is computed as:", "cite_spans": [ { "start": 52, "end": 73, "text": "(Lample et al., 2016;", "ref_id": "BIBREF6" }, { "start": 74, "end": 93, "text": "Zheng et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Single-task attribute-value extraction", "sec_num": "3.2" }, { "text": "s_i(.|x) = W \u00d7 h_i + b, \u2200 i = 1, . . . , n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Single-task attribute-value extraction", "sec_num": "3.2" }, { "text": "where n is the length of the sentence. We can interpret the k-th component of s_i, denoted as s_i(y = k|h_i), as the score of class k for word w_i. Now, given a sequence of word vectors x, a sequence of score vectors {s_1(y|x), . . . , s_n(y|x)}, and a sequence of labels y, a linear-chain CRF defines a global score C \u2208 R as,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Single-task attribute-value extraction", "sec_num": "3.2" }, { "text": "C(x, y) = \u2211_{i=1}^{n} s_i(y_i|x) + \u2211_{i=1}^{n\u22121} T(y_i, y_{i+1}|x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Single-task attribute-value extraction", "sec_num": "3.2" }, { "text": "Here, s_i(y|x) is the y-th component of the vector s_i, and T(y, y') is the transition score from label y to y', which is used to capture label dependency. A softmax over all possible tag sequences yields a probability for the sequence y: P(y|x) = e^{C(x,y)} / \u2211_{y' \u2208 Y} e^{C(x,y')}. During training, we maximize the log-probability of the correct label sequence: log(P(y|x)) = C(x, y) \u2212 log(\u2211_{y' \u2208 Y} e^{C(x,y')}). Here Y is the set of all possible labellings for the sequence x. Given a dataset of sequences and labels D = {(x_j, y_j), j = 1, . . . , m}, we can define the CRF loss as the negative log-likelihood: ", "cite_spans": [ { "start": 247, "end": 253, "text": "C(x,y)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Single-task attribute-value extraction", "sec_num": "3.2" }, { "text": "L_CRF(W, b) = \u2211_{j=1}^{m} \u2212log(P(y_j|x_j))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Single-task attribute-value extraction", "sec_num": "3.2" }, { "text": "As mentioned above, for multi-task attribute-value extraction, we have sequence and label combinations (x^t, y^t) for two tasks, t \u2208 {1, 2}. We also note that we have a common set of embedding layers (both the word representation and the BiLSTM) for the two tasks. However, the feedforward layers used for scoring the labels are specific to the tasks. Hence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-task attribute-value extraction", "sec_num": "3.3" }, { "text": "s^t_i(.|x) = W^t \u00d7 h_i + b^t, \u2200 i = 1, . . . , n; \u2200 t \u2208 {1, 2}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-task attribute-value extraction", "sec_num": "3.3" }, { "text": "The score and loss functions can be defined analogously to the single-task model as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-task attribute-value extraction", "sec_num": "3.3" }, { "text": "C^t(x, y) = \u2211_{i=1}^{n} s^t_i(y_i|x) + \u2211_{i=1}^{n\u22121} T^t(y_i, y_{i+1}|x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-task attribute-value extraction", "sec_num": "3.3" }, { "text": ", and log(P^t(y|x)) = C^t(x, y) \u2212 log(\u2211_{y' \u2208 Y} e^{C^t(x,y')}). Given the multi-task dataset D^t = {(x^t_j, y^t_j), j = 1, . . . , m_t}, t \u2208 {1, 2}, our loss function can be written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-task attribute-value extraction", "sec_num": "3.3" }, { "text": "L_CRF(W, b) = \u2211_{t=1}^{2} \u2211_{j=1}^{m_t} \u2212log(P^t(y^t_j|x^t_j))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-task attribute-value extraction", "sec_num": "3.3" }, { "text": "Hence, only the parameters of the embedding layers get affected by the multi-task paradigm here, since those are the only shared layers between the tasks. However, these parameters are independent of the labels and are thus relatively robustly learned by just using a reasonably large corpus of input sentences. Another mechanism for borrowing information between tasks is through \"soft coupling\" (Ruder, 2017) of various scores or parameters which are not explicitly shared. In the next section, we devise a soft coupling loss between instances of the two tasks which achieves transfer of information at the granularity of labels.", "cite_spans": [ { "start": 393, "end": 406, "text": "(Ruder, 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-task attribute-value extraction", "sec_num": "3.3" }, { "text": "The principle we use for coupling of the scores s^t_i(y|x) is: similar labels in similar contexts should have similar scores. Recall that the dataset for multi-task attribute value extraction consists of two sets of instances, D_1 and D_2, one for each of the two tasks. Since we are attempting to compare the model predictions for the two tasks, the coupling loss depends on two contexts, one from each task: (x_j, y_j, i) and (x_{j'}, y_{j'}, i'). Here, j and j' denote indices of instances for the two tasks, and i and i' denote positions within the corresponding sentence instances of the two tasks. We note that since there are \u223c1000 instances for each task, and sentences of length \u223c10 for each instance, the total number of terms for this loss would be \u223c10^8 ((10 \u00d7 1000)^2). 
This is prohibitively large for our training purposes, and is also wasteful, since not all contexts (combinations of instance j and position i) are related to each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "Hence, as a first step we create a shortlist of pairs of contexts ((i, j), (i', j')) which can borrow information from each other, by thresholding on the cosine similarity between windows around the contexts, u(i, j) and u(i', j'):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "L = {((i, j), (i', j')) | cosine_sim(u(i, j), u(i', j')) > thresh}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "Here, note that u(i, j) is the word embedding of a window around the context (i, j).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "Context coupling error: Our next challenge is to design a mechanism to figure out similar contexts and similar labels. We use a softmax attention mechanism to automatically learn the similar label-context combinations, simultaneously as we also learn the scoring function. For parameter efficiency, we use Luong attention. Hence the attention score for context (i, j) from task 1 over context (i', j') from task 2 is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "A(j, i, j', i') = e^{\u03b1(j,i,j',i')} / \u2211_{(\u0135,\u00ee) \u2208 L(j,i)} e^{\u03b1(j,i,\u0135,\u00ee)}, where \u03b1(j, i, j', i') = u(i, j)^T diag(a) u(i', j')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "Here, a = (a_1, . . . , a_d) are learnable parameters of the same dimension as the word embeddings, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "L(j, i) = {(j', i') | ((i, j), (i', j')) \u2208 L}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "The context-coupling error is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "CCE(L, a) = \u2211_{((j,i),(j',i')) \u2208 L} A(j, i, j', i') \u00d7 |s^1_i(y_{j,i}|x_j) \u2212 s^2_{i'}(y_{j',i'}|x_{j'})|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "We note that this score selects the similar contexts from the second task, since it normalizes the attention score over the contexts of the second task. 
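For concreteness, a minimal NumPy sketch of the shortlist construction and of this attention-weighted context coupling is given below; it is an illustration only, with hypothetical names, whereas in the actual model these quantities are computed inside the network so that the attention parameters and the scores are learned jointly.

import numpy as np

def cosine_sim(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def build_shortlist(U1, U2, thresh=0.8):
    # U1[c1], U2[c2]: window embeddings u(i, j) for the contexts of task 1 and task 2
    return [(c1, c2) for c1 in range(len(U1)) for c2 in range(len(U2))
            if cosine_sim(U1[c1], U2[c2]) > thresh]

def context_coupling_error(U1, U2, s1, s2, shortlist, a):
    # a: learnable diagonal of the Luong attention; s1[c1], s2[c2]: scores of the gold labels
    cce = 0.0
    for c1 in sorted({p[0] for p in shortlist}):
        partners = [c2 for (cc1, c2) in shortlist if cc1 == c1]
        alpha = np.array([U1[c1] @ (a * U2[c2]) for c2 in partners])   # u^T diag(a) u'
        att = np.exp(alpha - alpha.max())
        att = att / att.sum()                                          # softmax over task-2 contexts
        cce += float(np.sum(att * np.abs(s1[c1] - np.asarray(s2)[partners])))
    return cce
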
Symmetrically, we can define the attention score of context (i', j') from task 2 over (i, j) from task 1 as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "A'(j, i, j', i') = e^{\u03b1'(j,i,j',i')} / \u2211_{(\u0135,\u00ee) \u2208 L'(j',i')} e^{\u03b1'(\u0135,\u00ee,j',i')}, where L'(j', i') = {(j, i) | ((i, j), (i', j')) \u2208 L} and \u03b1'(j, i, j', i') = u(i, j)^T diag(a') u(i', j')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "Hence the context coupling error in the reverse direction is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "CCE'(L, a') = \u2211_{((j,i),(j',i')) \u2208 L} A'(j, i, j', i') \u00d7 |s^1_i(y_{j,i}|x_j) \u2212 s^2_{i'}(y_{j',i'}|x_{j'})|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "Label coupling error: In addition to the context coupling error defined above, we also take into account the explicit similarity between the labels themselves, using a character k-gram based embedding v_{i,j} of the label at context (i, j). Hence, the label coupling error is given as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "LCE((i, j), (i', j')) = SoftMax(v_{i,j} \u2022 v_{i',j'}) \u00d7 |s^1_i(y_{j,i}|x_j) \u2212 s^2_{i'}(y_{j',i'}|x_{j'})|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "LCE' is defined analogously. The label embeddings, v_{i,j}, are learned jointly with the model. The total coupling error between contexts (i, j) and (i', j') from the two tasks, respectively, is the sum of the context coupling errors and the label coupling errors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "TCE(L, a, a', v) = \u2211_{((i,j),(i',j')) \u2208 L} [CCE((i, j), (i', j')) + CCE'((i, j), (i', j')) + LCE((i, j), (i', j')) + LCE'((i, j), (i', j'))]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "We optimize the sum of all the CRF losses and the total coupling error in order to obtain the model parameters. We use stochastic gradient descent, where minibatches are constructed from three lists: D_1, D_2, and L. Samples from the first two lists are used to calculate the CRF losses, while samples from L are used to calculate the total coupling error, and the corresponding updates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling loss", "sec_num": "3.4" }, { "text": "In this section, we report results from our proposed method for multi-task attribute extraction, against single-task attribute extraction. We implemented our model using TensorFlow on an 8-core CentOS machine. We used 300-dimensional pre-trained GloVe vectors. We have also experimented with other customized word embeddings, e.g. fastText, but did not achieve significantly better results. For this work, we use a single-layer BiLSTM as the embedding layer. The hidden layer size for the BiLSTM layer was set to 700. We have experimented with other embedding layer architectures, e.g. hidden layer sizes ranging from 300 to 900, and also two-layer BiLSTMs with hidden layer sizes (500, 700). However, the performance of the single-layer BiLSTM with hidden layer size 700 was found to be similar to or better than the others. For training, the batch size was chosen to be 30 for both the CRF loss batches and the coupling loss batches sampled from the shortlist L. 
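As a concrete illustration of this alternating minibatch scheme, one possible realisation of a single training step is sketched below; crf_loss and total_coupling_error are hypothetical model methods standing for the CRF losses and TCE, and here all three minibatches contribute to one joint update.

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam()

def train_step(model, batch_d1, batch_d2, batch_l):
    # batch_d1, batch_d2: minibatches from D_1 and D_2; batch_l: a minibatch of context pairs from L
    with tf.GradientTape() as tape:
        loss = (model.crf_loss(batch_d1, task=0)
                + model.crf_loss(batch_d2, task=1)
                + model.total_coupling_error(batch_l))   # sum of CRF losses and the total coupling error
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
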
Adam was used as the optimizer and we trained the model for a maximum of 30 epochs. Evaluation Metric: As reported below, the datasets for this problem show extreme skew in terms of occurrence of labels. Hence, we use the standard metrics of macro precision, macro recall, and macro F1 score. We also report the micro-accuracy. While computing the macro-metrics (precision, recall and F1), we ignore the 'O' label. It is clear that the macro-F1 score without the 'O' label is the most representative metric here, from an application point of view.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4" }, { "text": "The datasets used here are taken from actual systems used for product delivery at Flipkart. We performed our experiments using data (both product descriptions and ground-truth annotations) from six verticals: Dress, Jean, Mangalsutra 1 , Chain, Trouser and Jewellery, available on Flipkart. These verticals are chosen based on three factors: (1) GMV (Gross Merchandise Value), (2) volume of data available, and (3) richness of product descriptions. The number of labels in each vertical and the number of tagged descriptions in the train and test data for each vertical are shown in table 1. The words in the product descriptions for each vertical are tagged using the B, I, O (short for beginning, inside, and outside) format, where the B prefix before a tag indicates that the token is the beginning of a tag, an I prefix indicates that the token is inside a tag, and an O tag indicates that a token belongs to no tag. Table 2 shows the pairs of similar tasks (verticals) which were trained together for MTL. The pairs were chosen manually based on the probability of occurrence of similar labels in these tasks. The results for each of the verticals are the best achieved over these pairs of tasks. Note that, while we have to manually provide a similar pair of tasks, the similarity between labels is automatically deciphered. Table 3: Comparison of macro-F1 scores between single-task and multi-task models for various verticals. Our analysis of the learned attention weights shows that the attention mechanism is indeed choosing similar labels between the pairs of tasks, irrespective of whether there is an improvement in accuracy for the pair of labels.", "cite_spans": [], "ref_spans": [ { "start": 917, "end": 924, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1322, "end": 1329, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "In figure 4-(b), we report the top pairs of labels with the highest attention scores, along with the corresponding increase in accuracy. The left column shows borrower labels (Chain) and the right column shows lender labels (Mangalsutra) which got the highest average attention weights across all context pairs in the list L. The value in brackets shows the attention value. The bold entries appear among the top-5 attributes with the highest F1-scores in table 4. One can also see non-obvious correspondences, e.g. Necklace type from Chain can borrow all the information from Gemstone in the lender vertical Mangalsutra. We can also see that in most of the cases, the labels from task 1 borrow information from the corresponding labels of task 2, even though this information was not explicitly furnished. 
This observation provides us further confidence that the attention mechanism used for the discovery of similar labels and similar contexts indeed works effectively.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 15, "text": "figure 4-(b)", "ref_id": null } ], "eq_spans": [], "section": "Performance Comparison", "sec_num": "4.2" }, { "text": "This observation further validates the effectiveness of our attention model in extracting similar pairs of labels between two tasks using the coupling loss. We believe this mechanism can be applied in many more situations to shortlist important and similar attributes in other contexts, while jointly learning a prediction model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Comparison", "sec_num": "4.2" }, { "text": "In this paper, we study attribute-value extraction from product descriptions in the e-commerce domain. Many of the attributes occur in very few descriptions. Hence the amount of supervised training data available for these attributes is very low, which leads to low prediction performance. We thus propose a novel multi-task learning based algorithm which borrows information from related domains (i.e., categories/verticals) in order to improve the prediction performance of infrequently occurring attributes. We validate the proposed method with extensive experimental evaluation on a large dataset of six verticals from a prominent e-commerce company. The proposed technique not only achieves higher accuracy on verticals with similar labels, but can also be used for discovering attribute similarities across verticals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "A type of Necklace", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "In this section, we illustrate the effectiveness of our multi-task learning method. Table 3 reports the best performances of the single-task and multi-task models for all the six verticals studied here. We can see that, except for Jewellery, the multi-task model improves performance in terms of F1 score for all other verticals. For some verticals, e.g. Chain, the improvement is more than 5 percent, while for other verticals the improvement lies in the 2 percent range. We note that the improvement depends on two main factors: whether we can find a close enough vertical to borrow from, and the number of examples already present in the current vertical. For example, we can see that the vertical \"Jewellery\" has about 5000 examples, and also does not have a very close other vertical to borrow information from. Hence, in its case, MTL is not able to improve the performance. In table 4, we report the fine-grained improvements of the top 5 labels for the verticals Trouser, Jean, Mangalsutra, and Chain. We note that the top improvements for these verticals are in the range of 51%, 46%, 29% and 22% respectively. We also note that the number of examples for these labels in the training dataset (#ex column) are respectively 6, 15, 6, and 7. Hence this table further corroborates our claim that MTL improves the performance for labels with a lower amount of information in the single-task training set. 
", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 109, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Vertical", "sec_num": null }, { "text": "In this section, we validate the learned attributeattribute similarity, by studying the attribute-wise F1-scores for the similar attribute pairs. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validation of Attribute Similarity", "sec_num": "4.3" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised Construction of a Product Knowledge Graph", "authors": [ { "first": "Omar", "middle": [], "last": "Alonso", "suffix": "" }, { "first": "Vasileios", "middle": [], "last": "Kandylas", "suffix": "" }, { "first": "Rukmini", "middle": [], "last": "Iyer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the SIGIR 2019 Workshop on eCommerce, co-located with the 42st International ACM SIGIR Conference on Research and Development in Information Retrieval, eCom@SIGIR 2019", "volume": "2410", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omar Alonso, Vasileios Kandylas, and Rukmini Iyer. 2019. Unsupervised Construction of a Product Knowledge Graph. In Proceedings of the SIGIR 2019 Workshop on eCommerce, co-located with the 42st International ACM SIGIR Conference on Re- search and Development in Information Retrieval, eCom@SIGIR 2019, Paris, France, July 25, 2019 (CEUR Workshop Proceedings), Jon Degenhardt, Surya Kallumadi, Utkarsh Porwal, and Andrew Trotman (Eds.), Vol. 2410. CEUR-WS.org. http: //ceur-ws.org/Vol-2410/paper23.pdf", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A unified architecture for natural language processing: deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "ICML '08", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML '08.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Contrastive Entity Linkage: Mining Variational Attributes from Large Catalogs for Entity Linkage", "authors": [ { "first": "Varun", "middle": [], "last": "Embar", "suffix": "" }, { "first": "Bunyamin", "middle": [], "last": "Sisman", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Xin", "middle": [ "Luna" ], "last": "Dong", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Faloutsos", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Getoor", "suffix": "" } ], "year": 2020, "venue": "Automated Knowledge Base Construction", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Varun Embar, Bunyamin Sisman, Hao Wei, Xin Luna Dong, Christos Faloutsos, and Lise Getoor. 2020. Contrastive Entity Linkage: Mining Variational Attributes from Large Catalogs for Entity Link- age. In Automated Knowledge Base Construc- tion. https://openreview.net/forum? 
id=fR44nF03Rb", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Distributed Word Representations Improve NER for e-Commerce", "authors": [ { "first": "Mahesh", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Hart", "suffix": "" }, { "first": "Mirko", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Jean-David", "middle": [], "last": "Ruvini", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", "volume": "", "issue": "", "pages": "160--167", "other_ids": { "DOI": [ "10.3115/v1/W15-1522" ] }, "num": null, "urls": [], "raw_text": "Mahesh Joshi, Ethan Hart, Mirko Vogel, and Jean- David Ruvini. 2015. Distributed Word Represen- tations Improve NER for e-Commerce. In Proceed- ings of the 1st Workshop on Vector Space Model- ing for Natural Language Processing. Association for Computational Linguistics, Denver, Colorado, 160-167. https://doi.org/10.3115/v1/ W15-1522", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories", "authors": [ { "first": "Giannis", "middle": [], "last": "Karamanolakis", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giannis Karamanolakis, Jun Ma, and Xin Dong. 2020. TXtract: Taxonomy-Aware Knowledge Ex- traction for Thousands of Product Categories. ArXiv abs/2004.13852 (2020).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Recognizing Salient Entities in Shopping Queries", "authors": [ { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "107--111", "other_ids": { "DOI": [ "10.18653/v1/P16-2018" ] }, "num": null, "urls": [], "raw_text": "Zornitsa Kozareva, Qi Li, Ke Zhai, and Weiwei Guo. 2016. Recognizing Salient Entities in Shopping Queries. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, 107- 111. https://doi.org/10.18653/v1/ P16-2018", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Neural Architectures for Named Entity Recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Association for Computational Linguistics", "volume": "", "issue": "", "pages": "260--270", "other_ids": { "DOI": [ "10.18653/v1/N16-1030" ] }, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recogni- tion. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies. Association for Computational Linguis- tics, San Diego, California, 260-270. https: //doi.org/10.18653/v1/N16-1030", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Benefit of Multitask Representation Learning", "authors": [ { "first": "Andreas", "middle": [], "last": "Maurer", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Pontil", "suffix": "" }, { "first": "Bernardino", "middle": [], "last": "Romera-Paredes", "suffix": "" } ], "year": 2016, "venue": "Journal of Machine Learning Research", "volume": "17", "issue": "", "pages": "1--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. 2016. The Benefit of Multi- task Representation Learning. Journal of Machine Learning Research 17, 81 (2016), 1-32. http: //jmlr.org/papers/v17/15-242.html", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP). 1532-1543.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Accurate Product Attribute Extraction on the Field", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Rezk", "suffix": "" }, { "first": "Laura", "middle": [ "Alonso" ], "last": "Alemany", "suffix": "" }, { "first": "Lasguido", "middle": [], "last": "Nio", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "IEEE 35th International Conference on Data Engineering (ICDE)", "volume": "", "issue": "", "pages": "1862--1873", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mart\u00edn Rezk, Laura Alonso Alemany, Lasguido Nio, and Ted Zhang. 2019. Accurate Product Attribute Extraction on the Field. 2019 IEEE 35th Inter- national Conference on Data Engineering (ICDE) (2019), 1862-1873.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Task-Specific Representation Learning for Web-scale Entity Disambiguation", "authors": [], "year": 2018, "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sourangshu Bhattacharya Anirban Dasgupta Soumen Chakrabarti Rijula Kar, Susmija Reddy. 2018. Task-Specific Representation Learning for Web-scale Entity Disambiguation. 
In Association for the Advancement of Artificial Intelligence (AAAI).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An Overview of Multi-Task Learning in Deep Neural Networks. ArXiv abs/1706", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2017. An Overview of Multi- Task Learning in Deep Neural Networks. ArXiv abs/1706.05098 (2017).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning to Extract Attribute Value from Product via Question Answering: A Multi-task Approach", "authors": [ { "first": "Qifan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Bhargav", "middle": [], "last": "Kanagal", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Sanghai", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Sivakumar", "suffix": "" }, { "first": "Zac", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Yu", "suffix": "" }, { "first": "", "middle": [], "last": "Elsas", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "47--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qifan Wang, Li Yang, Bhargav Kanagal, Sumit Sang- hai, D Sivakumar, Bin Shu, Zac Yu, and Jon El- sas. 2020. Learning to Extract Attribute Value from Product via Question Answering: A Multi-task Ap- proach. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 47-55.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Scaling up Open Tagging from Tens to Thousands: Comprehension Empowered Attribute Value Extraction from Product Title", "authors": [ { "first": "Huimin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenting", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Xinyu", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Man", "middle": [], "last": "Lan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "19--1514", "other_ids": { "DOI": [ "10.18653/v1/P19-1514" ] }, "num": null, "urls": [], "raw_text": "Huimin Xu, Wenting Wang, Xin Mao, Xinyu Jiang, and Man Lan. 2019. Scaling up Open Tagging from Tens to Thousands: Comprehension Empow- ered Attribute Value Extraction from Product Title. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Associ- ation for Computational Linguistics, Florence, Italy, 5214-5223. https://doi.org/10.18653/ v1/P19-1514", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Deep Multi-task Representation Learning: A Tensor Factorisation Approach", "authors": [ { "first": "Yongxin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Timothy", "middle": [ "M" ], "last": "Hospedales", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yongxin Yang and Timothy M. Hospedales. 2017. Deep Multi-task Representation Learning: A Ten- sor Factorisation Approach. 
ArXiv abs/1605.06391 (2017).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "OpenTag: Open Attribute Value Extraction from Product Profiles", "authors": [ { "first": "Guineng", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Subhabrata", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Xin", "middle": [ "Luna" ], "last": "Dong", "suffix": "" }, { "first": "Feifei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "1049--1058", "other_ids": { "DOI": [ "10.1145/3219819.3219839" ] }, "num": null, "urls": [], "raw_text": "Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. OpenTag: Open At- tribute Value Extraction from Product Profiles. In Proceedings of the 24th ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining (London, United Kingdom) (KDD '18). Association for Computing Machinery, New York, NY, USA, 1049-1058. https://doi. org/10.1145/3219819.3219839", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multimodal Joint Attribute Prediction and Value Extraction for E", "authors": [ { "first": "Tiangang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Youzheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2009.07162" ] }, "num": null, "urls": [], "raw_text": "Tiangang Zhu, Yue Wang, Haoran Li, Youzheng Wu, Xiaodong He, and Bowen Zhou. 2020. Multi- modal Joint Attribute Prediction and Value Ex- traction for E-commerce Product. arXiv preprint arXiv:2009.07162 (2020).", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "A snapshot of structured attributes and product description -underlined words wherein is important, additional information not provided by seller in attributes." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Sample tagged data from Jean Vertical." }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "(a) Architecture of single task model showing the sentence embedding layers (b) High-level architecture of the multi-task attribute-value extraction model text sensitive word embeddings h are computed as described above. The context sensitive word embeddings h i , i = 1, ." }, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "P (y j |x j )) (Lample et al., 2016) describes a method for learning the model parameters and inferring the partition function and scores by minimizing the above objective w.r.t. W and b." }, "TABREF0": { "content": "
Vertical | # labels | # Examples (Train, Test) | # Examples / label (max, min)
Jean | 37 | 2206, 948 | 1387, 1
Trouser | 38 | 1993, 856 | 1350, 1
Dress | 30 | 4088, 1753 | 2241, 1
Mangalsutra | 38 | 363, 157 | 333, 1
Chain | 76 | 2068, 888 | 1195, 1
Jewellery | 68 | 4863, 2085 | 4518, 1
", "html": null, "type_str": "table", "num": null, "text": "Dataset Characterstics." }, "TABREF1": { "content": "
: Similar Task Pairs for MTL
(Dress, Jean), (Mangalsutra, Jewellery)
(Trouser, Jean), (Chain, Jewellery),
(Mangalsutra, Chain)
", "html": null, "type_str": "table", "num": null, "text": "" } } } }