{ "paper_id": "S16-1046", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:25:04.969804Z" }, "title": "bunji at SemEval-2016 Task 5: Neural and Syntactic Models of Entity-Attribute Relationship for Aspect-based Sentiment Analysis", "authors": [ { "first": "Toshihiko", "middle": [], "last": "Yanase", "suffix": "", "affiliation": {}, "email": "toshihiko.yanase.gm@hitachi.com" }, { "first": "Kohsuke", "middle": [], "last": "Yanai", "suffix": "", "affiliation": {}, "email": "kohsuke.yanai.cs@hitachi.com" }, { "first": "Misa", "middle": [], "last": "Sato", "suffix": "", "affiliation": {}, "email": "misa.sato.mw@hitachi.com" }, { "first": "Toshinori", "middle": [], "last": "Miyoshi", "suffix": "", "affiliation": {}, "email": "toshinori.miyoshi.pd@hitachi.com" }, { "first": "Yoshiki", "middle": [], "last": "Niwa", "suffix": "", "affiliation": {}, "email": "yoshiki.niwa.tx@hitachi.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes a sentiment analysis system developed by the bunji team in SemEval-2016 Task 5. In this task, we estimate the sentimental polarity of a given entity-attribute (E#A) pair in a sentence. Our approach is to estimate the relationship between target entities and sentimental expressions. We use two different methods to estimate the relationship. The first one is based on a neural attention model that learns relations between tokens and E#A pairs through backpropagation. The second one is based on a rule-based system that examines several verb-centric relations related to E#A pairs. We confirmed the effectiveness of the proposed methods in a target estimation task and a polarity estimation task in the restaurant domain, while our overall ranks were modest.", "pdf_parse": { "paper_id": "S16-1046", "_pdf_hash": "", "abstract": [ { "text": "This paper describes a sentiment analysis system developed by the bunji team in SemEval-2016 Task 5. In this task, we estimate the sentimental polarity of a given entity-attribute (E#A) pair in a sentence. Our approach is to estimate the relationship between target entities and sentimental expressions. We use two different methods to estimate the relationship. The first one is based on a neural attention model that learns relations between tokens and E#A pairs through backpropagation. The second one is based on a rule-based system that examines several verb-centric relations related to E#A pairs. We confirmed the effectiveness of the proposed methods in a target estimation task and a polarity estimation task in the restaurant domain, while our overall ranks were modest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sentiment analysis is an important technology for understanding users' intentions from review texts. Such technologies are also useful for argumentation mining because it is necessary for readers to capture targets of interest and their polarities (Sato et al., 2015) . 
Shared tasks of aspect-based sentiment analysis (ABSA) in SemEval provide a test bed for fine-grained analysis of sentiment polarities (Pontiki et al., 2014; Pontiki et al., 2015; Pontiki et al., 2016) .", "cite_spans": [ { "start": 248, "end": 267, "text": "(Sato et al., 2015)", "ref_id": "BIBREF12" }, { "start": 405, "end": 427, "text": "(Pontiki et al., 2014;", "ref_id": "BIBREF8" }, { "start": 428, "end": 449, "text": "Pontiki et al., 2015;", "ref_id": "BIBREF9" }, { "start": 450, "end": 471, "text": "Pontiki et al., 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We participate in all four slots (Slot 1, 2, 1 & 2, and 3) of the restaurant domain and laptop domain in English. We focus on two types of models to capture the entity-attribute relationship, especially in Slot 2 and Slot 3. The first one is a neural network based model. The second one is a rule-based approach. Now, we explain the problem settings of the slots and our approaches. The following is an example of sentences that provide positive opinions to the FOOD#QUALITY aspect: Pizza here is good.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Slot 1 is an extraction of all aspects mentioned in a sentence. In this example, the goal is to choose FOOD#QUALITY among many other aspects. We formulate the problem as a multi-label classification problem and use a neural network-based model. Slot 2 is an extraction of opinion target expressions. The expected output is \"Pizza\" in the above example. We use a pattern matching based approach and focus on gathering resources such as dictionaries. For Slot 1 & 2, we simply combine the prediction results of Slot 1 and Slot 2. Slot 3 is an estimation of sentiment polarities. In this example, we estimate the polarity of this sentence from the aspect of FOOD#QUALITY. For Slot 3, we take two approaches. The first approach is a neural attention model (Luong et al., 2015) that considers the entity attention (FOOD) and attribute attention (QUAL-ITY) of each token. The second approach is a pattern matching-based model that examines the relationship between \"Pizza\" and \"good\" that is also used in Slot 2.", "cite_spans": [ { "start": 752, "end": 772, "text": "(Luong et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is structured as follows: In Section 2, we describe our system of phase A. In Section 3, we explain our system of phase B. Finally, Section 4 summarizes our work. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "x 3 x 4 x 5 s t h t h t v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: Structure of a neural network for Slot 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 System Description of Phase A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We formulate Slot 1 as a multi-label classification problem. In this problem, an entity-attribute pair is considered as a label. We use a neural model to solve this problem. The model is illustrated in Figure 1 . 
Given a sequence of word vectors X = (x 1 , x 2 , ..., x T ), this model calculates a vector y whose element represents probability of each label as:", "cite_spans": [], "ref_spans": [ { "start": 202, "end": 210, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y = f (X).", "eq_num": "(1)" } ], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "At first, we apply Stanford Core NLP (Manning et al., 2014) to each document to obtain word sequences. Then, we use word embedding generated by Skip-gram with Negative Sampling (Mikolov et al., 2013) to convert words into word vectors. Three hundred dimensional vectors trained with Google News Corpus 1 are used in Slot 1.", "cite_spans": [ { "start": 37, "end": 59, "text": "(Manning et al., 2014)", "ref_id": "BIBREF5" }, { "start": 177, "end": 199, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "Then, a word vector sequence X is inputted to a recurrent neural network (RNN). The RNN calculates an output vector s t for each x t as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s t , h t = g(x t , h t\u22121 ),", "eq_num": "(2)" } ], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "where h t denotes a hidden state of the RNN at position t. We use Long Short-Term Memory (LSTM) (Sak et al., 2014) and Gated Recurrent Unit (GRU) (Cho et al., 2014) as implementations of RNN units. We use a bi-directional RNN (BiRNN) (Schuster and Paliwal, 1997) ", "cite_spans": [ { "start": 96, "end": 114, "text": "(Sak et al., 2014)", "ref_id": "BIBREF11" }, { "start": 146, "end": 164, "text": "(Cho et al., 2014)", "ref_id": "BIBREF1" }, { "start": 234, "end": 262, "text": "(Schuster and Paliwal, 1997)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v = 1 T T \u2211 t=1 s t .", "eq_num": "(3)" } ], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "Finally, the probabilities in y are calculated by using a single layered perceptron:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y = softmax(tanh(W v + b)),", "eq_num": "(4)" } ], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "where W, b denote a weight matrix and a bias vector, respectively. We determine that a sentence contains the i-th aspect if its output y i is greater than a threshold \u03b8. The threshold \u03b8 is determined by using development data that is randomly sampled from training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "We modify aspect names to a suitable format for our neural model. 
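Before detailing the label format, we give a concrete sketch of the classifier in Eqs. (1)-(4). The snippet below is a minimal sketch written against the present-day tf.keras API rather than our original TensorFlow code; the label count of 12 is an illustrative assumption (the 10 frequent restaurant aspects plus "OTHER" and "NONE"), and the remaining settings follow the REST column of Table 1.

```python
# Minimal sketch of the Slot 1 model (Eqs. 1-4); modern tf.keras, not
# our original TensorFlow code. NUM_LABELS is an assumption: the 10
# most frequent restaurant aspects plus "OTHER" and "NONE".
import tensorflow as tf

NUM_LABELS = 12
EMB_DIM = 300     # pre-trained skip-gram word vectors
MAX_TOKENS = 40   # REST setting in Table 1

inputs = tf.keras.Input(shape=(MAX_TOKENS, EMB_DIM))       # x_1, ..., x_T
x = tf.keras.layers.Dropout(0.2)(inputs)                   # keep ratio p_k = 0.8
s = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(128, return_sequences=True))(x)   # Eq. (2): BiRNN outputs s_t
s = tf.keras.layers.Dropout(0.2)(s)
v = tf.keras.layers.GlobalAveragePooling1D()(s)            # Eq. (3): mean of s_t
logits = tf.keras.layers.Dense(NUM_LABELS)(v)              # W v + b
y = tf.keras.layers.Softmax()(
    tf.keras.layers.Activation("tanh")(logits))            # Eq. (4)

model = tf.keras.Model(inputs, y)
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.945,
                                                    clipnorm=5.0),
              loss="categorical_crossentropy")
# At prediction time, a sentence receives every aspect i with y_i > theta.
```

The label set itself is prepared as follows.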
Low-frequency aspects in training datasets are replaced by a new aspect \"OTHER\". The most common 10 aspects are preserved in the restaurant domain; the most common 16 aspects are preserved in the laptop domain. \"NONE\" labels are assigned to sentences that do not have any labels. The probability y i in an example of a training dataset is defined as y i = 1/k when a target sentence has the i-th aspect and a total of k aspects, otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "y i = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "We train the model by using backpropagation. The loss is calculated by using cross entropy. We use a minibatch stochastic gradient descent (SGD) algorithm together with an AdaGrad optimizer (Duchi et al., 2011) . We add Dropout (Srivastava et al., 2014) layers to the input and output of the RNN. We clip the gradient norm when it exceeds 5.0 to improve the stability of training. The model parameters and \u03b8 are trained by the training dataset of the ABSA 2015, and the hyperparameters are tuned by test dataset of the ABSA 2015. We use random sampling to tune the hyperparameters. The best settings are shown in Table 1 . We implement our neural systems by using Tensorflow (Abadi et al., 2015) .", "cite_spans": [ { "start": 190, "end": 210, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF2" }, { "start": 228, "end": 253, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF14" }, { "start": 675, "end": 695, "text": "(Abadi et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 613, "end": 620, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Slot 1: Aspect Category", "sec_num": "2.1" }, { "text": "In Slot 2, we extract text spans corresponding to target entities. The procedure of our proposed method is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 2: Opinion Target Expression", "sec_num": "2.2" }, { "text": "1. Creating dictionaries of food names and drink names by extracting targets in a training dataset, 2. Collecting food names and drink names in Knowledge Base and adding them to dictionaries, 3. Applying dictionary matching to sentences in a test dataset, 4. Extracting restaurant names by using syntactic rules, and 5. Checking relationship between targets extracted by step 3 and step 4 and attribute expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 2: Opinion Target Expression", "sec_num": "2.2" }, { "text": "Three key features of our method are the dictionary creation in step 2, the syntactic rules in step 4, and the estimation of the entity-attribute relationship in step 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 2: Opinion Target Expression", "sec_num": "2.2" }, { "text": "Coverage of dictionaries is crucial to improve recall metrics. In the training dataset, we observe various instances of FOOD entities such as bread, focaccia and gazpacho. Therefore, we try to import world knowledge written in Knowledge Base. We use DBpedia 2 as Knowledge Base to expand the dictionaries. We write a SPARQL query to retrieve labels (rdfs:label) of entities as dictionary entries. First we prepare a list of target types. For examples, we use http://dbpedia.org/ontology/Food and http://dbpedia.org/ontology/Fish as types of FOOD entities. 
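The exact query is not reproduced in this paper; the following is a hedged sketch using the SPARQLWrapper library against the public DBpedia endpoint, showing how labels of one target type can be collected (the LIMIT clause and the lowercasing are illustrative choices, not part of the original system).

```python
# Sketch of the dictionary expansion for one target type (dbo:Food);
# the query shape is illustrative, not our exact query.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT DISTINCT ?label WHERE {
      ?entity a <http://dbpedia.org/ontology/Food> ;
              rdfs:label ?label .
      FILTER (lang(?label) = "en")
    }
    LIMIT 10000
""")
sparql.setReturnFormat(JSON)
rows = sparql.query().convert()["results"]["bindings"]
food_dict = {row["label"]["value"].lower() for row in rows}
```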
We also prepare a list of types to be ignored such as \"dbo:Beverage\". Names of DRINK are also retrieved in the same manner as FOOD. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary Creation", "sec_num": null }, { "text": "We use syntactic rules to extract restaurant names. We define a set of verb-centric rules such as \"A1 visited A2\" where A1 is a subject, and A2 is an object. A2 is likely to be restaurant names. We manually create 15 rules from training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restaurant Name Extraction", "sec_num": null }, { "text": "We observe entities not related to sentimental expressions in dictionary-match results, which decrease precision scores. Therefore, we filter entities related to sentiment expressions. We use the same method as that in Slot 3. Table 2 shows the results of Slot 1. Our system marked the highest recall score among all of the teams in the restaurant domain, while our precision score is lower than that of the baseline system. This is partly because of the determination of threshold values that may be overfitted to the development sets. One possible solution is to use cross validation to estimate more reliable threshold values. Table 3 shows the results of Slot 2. We can observe improvement of both the precision score and the recall score from those of the baseline system. The recall score is comparable to that of the ranked 1st team, while there is much room for improvement of the precision scores. Table 4 shows the results of Slot 1 & 2. We can observe the similar tendency to Slot 1's results because we simply merged the results of Slot 1 and Slot 2, and Slot 1 performs worse than Slot 2. Table 5 shows the results of Slot 1 of Subtask 2. We merge the sentence-wise results into documentwise results.", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 234, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 630, "end": 637, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 907, "end": 914, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 1102, "end": 1109, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Entity-Attribute Relationship Estimation", "sec_num": null }, { "text": "3 System Description of Phase B", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.3" }, { "text": "Our method is inspired by a Deep Learning method proposed by Wang and Liu (Wang and Liu, 2015) . They used estimated probabilities of Slot 1 as weights of a target entity-attribute pair, and then they inputted weighted tokens to a convolutional neural network. Instead of probabilities of Slot 1, we directly calculate entity attention and attribute attention at each token by using a neural attention model (Luong et al., 2015) . The model is illustrated in Figure 2 . 
We calculate a vector y p that represents probabilities of polarities (positive, negative, and neutral) as:", "cite_spans": [ { "start": 61, "end": 94, "text": "Wang and Liu (Wang and Liu, 2015)", "ref_id": "BIBREF15" }, { "start": 408, "end": 428, "text": "(Luong et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 459, "end": 467, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y p = f (X, v e , v a ),", "eq_num": "(5)" } ], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "where v e and v a denote vectors corresponding to a target entity and a target attribute. At first we calculate RNN outputs s t with Eq. 2 similarly to Slot 1. Then, attention weights for both entity and attribute are computed at attention layers. An entity-attention layer calculates weights \u03b5 t at position t. At each position, e t is computed to measure the relationship between s t and v e :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e t = v T e W e s t .", "eq_num": "(6)" } ], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "Then, we transform the scale of e t and obtain an entity-attention weight \u03b5 t as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b5 t = exp(e t ) \u2211 j exp(e j ) .", "eq_num": "(7)" } ], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "Similarly, the attribute attention layer has weights \u03b1 t at position t as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a t = v T a W a s t ,", "eq_num": "(8)" } ], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 t = exp(a t ) \u2211 j exp(a j ) .", "eq_num": "(9)" } ], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "Then, we calculate a sentence vector r that is a weighted sum of RNN output with entity attention weights and attribute attention weights as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r = \u2211 t (\u03b1 t s t ||\u03b5 t s t ),", "eq_num": "(10)" } ], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "where || denotes a concatenation operator that creates a vector in R 2d from two vectors in R d . 
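A minimal sketch of the attention computation in Eqs. (6)-(10) follows; the shapes are assumptions consistent with the text (T tokens, RNN output size d, attention unit size k), and the function is illustrative rather than our exact implementation.

```python
import tensorflow as tf

def attended_sentence_vector(s, v_e, v_a, W_e, W_a):
    # s: (T, d) RNN outputs; v_e, v_a: (k,) entity/attribute vectors;
    # W_e, W_a: (k, d) learned attention matrices (shapes assumed).
    e = tf.linalg.matvec(s @ tf.transpose(W_e), v_e)  # Eq. (6): e_t = v_e^T W_e s_t
    eps = tf.nn.softmax(e)                            # Eq. (7): entity weights
    a = tf.linalg.matvec(s @ tf.transpose(W_a), v_a)  # Eq. (8)
    alpha = tf.nn.softmax(a)                          # Eq. (9): attribute weights
    # Eq. (10): sum_t (alpha_t s_t || eps_t s_t)
    #         = (sum_t alpha_t s_t) || (sum_t eps_t s_t)
    r = tf.concat([tf.linalg.matvec(s, alpha, transpose_a=True),
                   tf.linalg.matvec(s, eps, transpose_a=True)], axis=0)
    return r  # shape (2d,)
```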
Finally, we calculate y p by using a single layered perceptron:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y p = softmax(tanh(W p r + b p )).", "eq_num": "(11)" } ], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "We train the Slot 3 model by using backpropagation. We use a minibatch stochastic gradient descent (SGD) algorithm together with the ADAM optimizer (Kingma and Ba, 2015) . Hyperparameters are tuned similarly to Slot 1. The hyperparameter settings in Slot 3 are shown in Table 6 . We add Dropout (Srivastava et al., 2014) layers to the input and output of the RNN. We also apply L2regularization to two attention layers and a softmax Parameter REST (U) REST (C) LAPT (C) Dropout p k 0.9 0.9 0.9 Learning rate 1.7 \u00d7 10 \u22123 1.4 \u00d7 10 \u22123 5.8 \u00d7 10 \u22124 RNN state size 64 128 64 minibatch size 16 32 16 max epochs 12 19 6 L2 coef 1.9 \u00d7 10 \u22124 1.9 \u00d7 10 \u22124 3.3 \u00d7 10 \u22124 Table 6 : Hyperparameter setting for Slot 3. p k denotes a ratio to keep values in a Dropout layer.", "cite_spans": [ { "start": 148, "end": 169, "text": "(Kingma and Ba, 2015)", "ref_id": "BIBREF3" }, { "start": 295, "end": 320, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 270, "end": 277, "text": "Table 6", "ref_id": null }, { "start": 541, "end": 632, "text": "\u22124 RNN state size 64 128 64 minibatch size 16 32 16 max epochs 12 19 6 L2 coef", "ref_id": "TABREF2" }, { "start": 669, "end": 676, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "layer. The attention unit size is 300. In an unconstrained setting, we use the same word embedding that has 300 dimensional vectors as Slot 1. In a constrained setting, we use 128-dimensional vectors that are initialized by uniform distribution. We clip the gradient norm when it exceeds 5.0 to improve stability of training. We set the maximum token length as 40. Initial values of entity vectors are created by averaging word vectors in sentences that have target entities. Attribute vectors are also initialized in the same manner as the entity vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Slot 3: Sentiment Polarity Neural Approach", "sec_num": "3.1" }, { "text": "This approach trains a linear classifier using relations of a given entity and a given attribute as features. In the first step, we annotate the following 11 annotations of relations to all documents:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation-Features Approach", "sec_num": null }, { "text": "believe Showing someone's belief such as \"X likes Y\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation-Features Approach", "sec_num": null }, { "text": "and \"X avoids Y\", significant Showing X's significance such as \"X is impressive\" and \"X is terrible\", require Showing requirement such as \"X needs Y\", equivalent Showing X is equivalent to Y, such as \"X viewed Y\" and \"X regarded Y\", include Showing inclusion or possession such as \"X has Y\" and \"X equips Y\". contrast Comparing X with Y such as \"Y is ... 
than X\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation-Features Approach", "sec_num": null }, { "text": "and \"Y is ... compared to X\", affect Showing X affects Y such as \"X increases Y\" and \"X causes Y\", state Showing statement such as \"X doubts that Y\", negation Showing negations such as \"not X\" and \"no X\", shift Reversing X's polarity such as \"X ban\" and \"X shortage\", and absolutize Fixing polarity of X such as \"X problem\" and \"X risk\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation-Features Approach", "sec_num": null }, { "text": "These annotations were originally developed for an argument-generation system (Sato et al., 2015) .", "cite_spans": [ { "start": 78, "end": 97, "text": "(Sato et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Relation-Features Approach", "sec_num": null }, { "text": "In the second step, we identify an entity expression and an attribute expression that correspond to a given entity-attribute pair. We use a simple dictionary-matching approach. In the restaurant domain, we use a given target annotation as an entity expression. In the laptop domain, we prepare a list of entities extracted from a training dataset. For both domains, we create an attribute dictionary. Entries of the dictionary are manually extracted from training datasets. Then, we assign a sentimental polarity (positive, negative, or neutral) to each entry. In the third step, we create features for a linear classifier. Those features are generated by combining annotations to capture various relations of a target entity-attribute pair. For example, we examine whether an affect annotation is negated or not and whether a target entity is a subject of an affect annotation or an object.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation-Features Approach", "sec_num": null }, { "text": "Finally, we classify a sentimental polarity by using a linear classifier. We use a linear SVM in scikitlearn (Pedregosa et al., 2011) as an implementation of the classifier, on the parameter C = 0.1, loss = squared epsilon insensitive, and penalty = l2. Table 7 shows the results for Slot 3. We select a suitable method from the neural method and the rulebased method for each domain by comparing scores in the ABSA15 dataset. In the restaurant domain, we can observe that the proposed method improves the accuracy by 10 percentage points compared with the baseline system.", "cite_spans": [ { "start": 109, "end": 133, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 254, "end": 261, "text": "Table 7", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Relation-Features Approach", "sec_num": null }, { "text": "We merged sentence-wise estimation and created document-wise estimation. We gathered polarities of an entity-attribute pair. If a result was both positive and negative, then we judged it as conflicting. a similar tendency to the results in Subtask 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "In this paper, we described the participation of the bunji team in SemEval-2016. We used both a neural approach and a rule-based approach to model an entity-attribute relationship. We confirmed the effectiveness of the proposed methods in a target estimation task and a polarity estimation task in the restaurant domain, while our overall ranks were modest. 
As a future work, we plan to investigate network structures that are simple enough to be trained with a relatively small dataset. For the rule-based system, we plan to add more rules to improve precision scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4" }, { "text": "http://wiki.dbpedia.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Citro", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Devin", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Ghemawat", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Harp", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Irving", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Yangqing", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Manjunath", "middle": [], "last": "Kudlur", "suffix": "" } ], "year": 2015, "venue": "Josh Levenberg", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefow- icz, Lukasz Kaiser, Manjunath Kudlur, Josh Leven- berg, Dan Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fer- nanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. 
Software avail- able from tensorflow.org.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase represen- tations using rnn encoder-decoder for statistical ma- chine translation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1724-1734.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "J. Mach. Learn. Res", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121-2159.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "The International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. The International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. 
In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 1412-1421.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations, pages 55-60.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Workshop at International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representa- tions in vector space. Proceedings of Workshop at In- ternational Conference on Learning Representations.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "Fabian", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "Ga\u00ebl", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Bertrand", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vin- cent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Per- rot, and\u00c9douard Duchesnay. 2011. Scikit-learn: Ma- chine learning in Python. 
Journal of Machine Learn- ing Research, 12:2825-2830.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "John", "middle": [], "last": "Pavlopoulos", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation (Se-mEval 2014)", "volume": "", "issue": "", "pages": "27--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Har- ris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (Se- mEval 2014), pages 27-35.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semeval-2015 task 12: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "Haris", "middle": [], "last": "Papageorgiou", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "486--495", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analy- sis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 486- 495.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "SemEval-2016 task 5: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitrios", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "Haris", "middle": [], "last": "Papageorgiou", "suffix": "" }, { "first": "Ion", "middle": [], "last": "Androutsopoulos", "suffix": "" }, { "first": "Suresh", "middle": [], "last": "Manandhar", "suffix": "" }, { "first": "Al-", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Mahmoud", "middle": [], "last": "Smadi", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Al-Ayyoub", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Orph\u00e9e", "middle": [], "last": "Qin", "suffix": "" }, { "first": "V\u00e9ronique", "middle": [], "last": "De Clercq", "suffix": "" }, { "first": "Marianna", "middle": [], "last": "Hoste", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Apidianaki", "suffix": "" }, { "first": "", "middle": [], "last": "Tannier", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval '16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph\u00e9e De Clercq, V\u00e9ronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeny Kotelnikov, Nuria Bel, Salud Mar\u00eda Jim\u00e9nez- Zafra, and G\u00fcl\u015fen Eryi\u01e7it. 2016. SemEval-2016 task 5: Aspect based sentiment analysis. 
In Proceedings of the 10th International Workshop on Semantic Evalua- tion, SemEval '16.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. INTERSPEECH, abs/1402", "authors": [ { "first": "Hasim", "middle": [], "last": "Sak", "suffix": "" }, { "first": "Andrew", "middle": [ "W" ], "last": "Senior", "suffix": "" }, { "first": "Fran\u00e7oise", "middle": [], "last": "Beaufays", "suffix": "" } ], "year": 2014, "venue": "", "volume": "1128", "issue": "", "pages": "338--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hasim Sak, Andrew W. Senior, and Fran\u00e7oise Beaufays. 2014. Long short-term memory based recurrent neu- ral network architectures for large vocabulary speech recognition. INTERSPEECH, abs/1402.1128:338- 342.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "End-to-end argument generation system in debating", "authors": [ { "first": "Misa", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Kohsuke", "middle": [], "last": "Yanai", "suffix": "" }, { "first": "Toshinori", "middle": [], "last": "Miyoshi", "suffix": "" }, { "first": "Toshihiko", "middle": [], "last": "Yanase", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Iwayama", "suffix": "" }, { "first": "Qinghua", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yoshiki", "middle": [], "last": "Niwa", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL-IJCNLP 2015 System Demonstrations", "volume": "", "issue": "", "pages": "109--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Misa Sato, Kohsuke Yanai, Toshinori Miyoshi, Toshihiko Yanase, Makoto Iwayama, Qinghua Sun, and Yoshiki Niwa. 2015. End-to-end argument generation system in debating. In Proceedings of ACL-IJCNLP 2015 Sys- tem Demonstrations, pages 109-114.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bidirectional recurrent neural networks", "authors": [ { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Kuldip", "middle": [ "K" ], "last": "Paliwal", "suffix": "" } ], "year": 1997, "venue": "IEEE TRANS-ACTIONS ON SIGNAL PROCESSING", "volume": "45", "issue": "11", "pages": "2673--2681", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Schuster and Kuldip K. Paliwal. 1997. Bidi- rectional recurrent neural networks. IEEE TRANS- ACTIONS ON SIGNAL PROCESSING, 45(11):2673- 2681.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "Journal of Machine Learning Research", "volume": "15", "issue": "", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. 
Journal of Machine Learning Research, 15:1929-1958.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Deep learning for aspect-based sentiment analysis", "authors": [ { "first": "Bo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Min", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Wang and Min Liu. 2015. Deep learning for aspect-based sentiment analysis. Reports for CS224d, Stanford University.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Structure of a neural network for Slot 3.", "uris": null }, "TABREF2": { "text": "Hyperparameter setting for Slot 1. p k denotes a ratio to keep values in a Dropout layer.", "content": "
Parameter | REST | LAPT
Dropout p k | 0.8 | 0.3
Learning rate | 0.945 | 0.374
hidden unit size | 128 | 64
minibatch size | 20 | 20
max epochs | 100 | 840
cell | LSTM | GRU
max token num | 40 | 50
threshold | 0.048 | 0.11
", "num": null, "type_str": "table", "html": null }, "TABREF4": { "text": "Official results of Subtask 1 Slot 1", "content": "
Domain | Team | Precision | Recall | F1
REST | bunji | 62.61 | 67.32 | 64.88
REST | baseline | 51.42 | 38.56 | 44.07
REST | Ranked 1st | 75.49 | 69.44 | 72.34
", "num": null, "type_str": "table", "html": null }, "TABREF5": { "text": "Official results of Subtask 1 Slot 2", "content": "", "num": null, "type_str": "table", "html": null }, "TABREF7": { "text": "Official results of Subtask 1 Slot 1 & 2", "content": "
Domain | Team | Precision | Recall | F1
REST | bunji | 72.71 | 88.37 | 79.78
REST | baseline | 90.65 | 69.55 | 78.71
REST | Ranked 1st | 87.00 | 81.19 | 83.99
LAPT | bunji | 66.84 | 46.32 | 54.72
LAPT | baseline | 86.55 | 37.86 | 52.68
LAPT | Ranked 1st | 72.49 | 51.83 | 60.45
", "num": null, "type_str": "table", "html": null }, "TABREF8": { "text": "Official results of Subtask 2 Slot 1", "content": "", "num": null, "type_str": "table", "html": null }, "TABREF11": { "text": "Official results of Subtask 1 Slot 3. (neural) and (rel) denote the neural approach and the relation-feature approach, respectively.", "content": "
", "num": null, "type_str": "table", "html": null }, "TABREF12": { "text": "shows the results for Subtask 2. We can see", "content": "
Domain | Team | C/U | Accuracy
REST | bunji(neural) | U | 70.54
REST | bunji(neural) | C | 66.58
REST | baseline | C | 74.26
REST | Ranked 1st | U | 81.93
LAPT | bunji(rel) | U | 60.00
LAPT | bunji(neural) | C | 62.20
LAPT | baseline | C | 73.03
LAPT | Ranked 1st | U | 75.05
", "num": null, "type_str": "table", "html": null }, "TABREF13": { "text": "Official results of Subtask 2 Slot 3. (neural) and (rel) denote the neural approach and the relation-feature approach, respectively.", "content": "", "num": null, "type_str": "table", "html": null } } } }