|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:57:51.403076Z" |
|
}, |
|
"title": "Selective Attention Based Graph Convolutional Networks for Aspect-Level Sentiment Classification", |
|
"authors": [ |
|
{ |
|
"first": "Xiaochen", |
|
"middle": [], |
|
"last": "Hou", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Guangtao", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Recent work on aspect-level sentiment classification has employed Graph Convolutional Networks (GCN) over dependency trees to learn interactions between aspect terms and opinion words. In some cases, the corresponding opinion words for an aspect term cannot be reached within two hops on dependency trees, which requires more GCN layers to model. However, GCNs often achieve the best performance with two layers, and deeper GCNs do not bring any additional gain. Therefore, we design a novel selective attention based GCN model. On one hand, the proposed model enables the direct interaction between aspect terms and context words via the self-attention operation without the distance limitation on dependency trees. On the other hand, a top-k selection procedure is designed to locate opinion words by selecting k context words with the highest attention scores. We conduct experiments on several commonly used benchmark datasets and the results show that our proposed SA-GCN outperforms strong baseline models.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Recent work on aspect-level sentiment classification has employed Graph Convolutional Networks (GCN) over dependency trees to learn interactions between aspect terms and opinion words. In some cases, the corresponding opinion words for an aspect term cannot be reached within two hops on dependency trees, which requires more GCN layers to model. However, GCNs often achieve the best performance with two layers, and deeper GCNs do not bring any additional gain. Therefore, we design a novel selective attention based GCN model. On one hand, the proposed model enables the direct interaction between aspect terms and context words via the self-attention operation without the distance limitation on dependency trees. On the other hand, a top-k selection procedure is designed to locate opinion words by selecting k context words with the highest attention scores. We conduct experiments on several commonly used benchmark datasets and the results show that our proposed SA-GCN outperforms strong baseline models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Aspect-level sentiment classification is a finegrained sentiment analysis task, which aims to identify the sentiment polarity (e.g., positive, negative or neutral) of a specific aspect term (also called target) appearing in a sentence. For example, \"Despite a slightly limited menu, everything prepared is done to perfection, ultra fresh and a work of food art.\", the sentiment polarity of aspect terms \"menu\" and \"food\" are negative and positive, respectively. The opinion words \"limited\" and \"done to perfection\" provide evidences for sentiment polarity predictions. This task has many applications, such as restaurant recommendation and purchase recommendation on e-commerce websites.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To solve this problem, recent studies have shown that the interactions between an aspect term and its context (which include opinion words) are crucial in identifying the sentiment polarity towards the given term. Most approaches consider the semantic information from the context words and utilize the attention mechanism to learn such interactions. However, it has been shown that syntactic information obtained from dependency parsing is very effective in capturing long-range syntactic relations that are obscure from the surface form (Zhang et al., 2018) . A recent popular approach to learn syntaxaware representations is employing graph convolutional networks (GCN) (Kipf and Welling, 2017) model over dependency trees (Huang and Carley, 2019; Sun et al., 2019; Wang et al., 2020; Tang et al., 2020) , which introduces syntactic inductive biases into the message passing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 539, |
|
"end": 559, |
|
"text": "(Zhang et al., 2018)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 750, |
|
"text": "(Huang and Carley, 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 751, |
|
"end": 768, |
|
"text": "Sun et al., 2019;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 769, |
|
"end": 787, |
|
"text": "Wang et al., 2020;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 788, |
|
"end": 806, |
|
"text": "Tang et al., 2020)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In some cases, the most important context words, i.e. opinion words, are more than two-hops away from the aspect term words on the dependency tree. As indicated by Figure 1 , there are four hops between the target \"Mac OS\" and the opinion words \"easily picked up\" on the dependency tree. This type of cases requires more than two layers of GCN to learn interactions between them. However, previous works show that GCN models with two layers often achieve the best performance (Zhang et al., 2018; Xu et al., 2018) , deeper GCNs do not bring additional gain due to the over-smoothing problem (Li et al., 2018b) , which makes different nodes have similar representations and lose the distinction among nodes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 476, |
|
"end": 496, |
|
"text": "(Zhang et al., 2018;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 497, |
|
"end": 513, |
|
"text": "Xu et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 609, |
|
"text": "(Li et al., 2018b)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 172, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to solve the above problem, we propose a novel selective attention based GCN (SA-GCN) model, which combines the GCN model over dependency trees with a self-attention based sequence model over the sentence. On one hand, the selfattention sequence model enables the direct interaction between an aspect term and its context so that it can take care of the situation where the term is far away from the opinion words on the dependency tree. On the other hand, a top-k attention selection module is applied after the self-attention opera- tion, which is designed to locate opinion words contained in the context for the aspect term. As shown in Figure 1 , if the opinion words \"easily picked up\" are detected correctly through the top-k selection module, it definitely could help the model to classify the sentiment as positive. To provide supervision information for the top-k selection procedure, we introduce the opinion words extraction task and jointly train the task with the sentiment classification task. Specifically, the base model is the GCN model over dependency trees. The model uses the pretrained BERT to obtain representations of the aspect term and its context words as the initial node features on the dependency tree.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 650, |
|
"end": 658, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Next, the GCN outputs are fed into a multi-head top-k attention selection module. For each head, the self-attention operation is applied over the sentence to get a dense attention score matrix, where ith row corresponds the attention scores of all words to the i-th word in the sentence. Then for each word, context words with top-k attention scores are selected and others are ignored, which sparsifies the attention score matrix and forms a sparse graph. We design two strategies to get the sparse graph: i) applying top-k selection over the attention matrix obtained by summing attention score matrices of all heads, and thus different heads share the same sparse graph; ii) applying top-k selection on individual attention score matrix of each head, and thus different heads have its own sparse graph. Finally, we apply a GCN layer again to integrate information from such sparse graph(s) for each head, and concatenate the GCN outputs w.r.t. different heads as the final word representation for sentiment analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The main contributions of this work are summarized as the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose a selective attention based GCN (SA-GCN) module, which takes the benefit of GCN over the dependency trees and enables the aspect term directly obtaining information from the opinion words according to most relevant context words. This helps the model handle cases when the aspect term and opinion words are located far away from each other on the dependency tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose to jointly train the sentiment classification and opinion extraction tasks. The joint training further improves the performance of the classification task and provides explanation for sentiment prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Capturing the interaction between the aspect term and opinion words is essential for predicting the sentiment polarity towards the aspect term. In recent work, various attention mechanisms, such as co-attention, self-attention and hierarchical attention, were utilized to learn this interaction (Tang et al., 2016; Liu and Zhang, 2017; Li et al., 2018c; Fan et al., 2018; Chen et al., 2017; Zheng and Xia, 2018; Wang and Lu, 2018; Li et al., 2018a,c) . Specifically, they first encoded the context and the aspect term by recurrent neural networks (RNNs), and then stacked several attention layers to learn the aspect term representations from important context words. After the success of the pre-trained BERT model (Devlin et al., 2018) , utilized the pre-trained BERT as the encoder. In the study by (Xu et al., 2019) , the task was considered as a review reading comprehension (RRC) problem. RRC datasets were post trained on BERT and then fine-tuned to the aspect-level sentiment classification. Rietzler et al. (2019) utilized millions of extra data based on BERT to help sentiment analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 314, |
|
"text": "(Tang et al., 2016;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 335, |
|
"text": "Liu and Zhang, 2017;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 353, |
|
"text": "Li et al., 2018c;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 371, |
|
"text": "Fan et al., 2018;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 390, |
|
"text": "Chen et al., 2017;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 411, |
|
"text": "Zheng and Xia, 2018;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 430, |
|
"text": "Wang and Lu, 2018;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 450, |
|
"text": "Li et al., 2018a,c)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 716, |
|
"end": 737, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 802, |
|
"end": 819, |
|
"text": "(Xu et al., 2019)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1000, |
|
"end": 1022, |
|
"text": "Rietzler et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The above approaches mainly considered the semantic information. Recent approaches attempted to incorporate the syntactic knowledge to learn the syntax-aware representation of the aspect term. Dong et al. (2014) proposed AdaRNN, which adaptively propagated the sentiments of words to target along the dependency tree in a bottom-up manner. Nguyen and Shirai (2015) extended RNN to obtain the representation of the target aspect by aggregating the syntactic information from the dependency and constituent tree of the sentence. He et al. (2018) proposed to use the distance between the context word and the aspect term along the dependency tree as the attention weight. Some re-searchers (Huang and Carley, 2019; Sun et al., 2019) employed GNNs over dependency trees to aggregate information from syntactic neighbors. Most recent work in Wang et al. (2020) proposed to reconstruct the dependency tree to an aspect-oriented tree. The reshaped tree only kept the dependency structure around the aspect term and got rid of all other dependency connections, which made the learned node representations not fully syntax-aware. Tang et al. (2020) designed a mutual biaffine module between Transformer encoder and the GCN encoder to enhance the representation learning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 211, |
|
"text": "Dong et al. (2014)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 527, |
|
"end": 543, |
|
"text": "He et al. (2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 687, |
|
"end": 711, |
|
"text": "(Huang and Carley, 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 712, |
|
"end": 729, |
|
"text": "Sun et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 837, |
|
"end": 855, |
|
"text": "Wang et al. (2020)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1121, |
|
"end": 1139, |
|
"text": "Tang et al. (2020)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The downside of applying GCN over dependency trees is that it cannot elegantly handle the long distance between aspect terms and opinion words. Our proposed SA-GCN model effectively integrates the benefit of a GCN model over dependency trees and a self-attention sequence model to directly aggregate information from opinion words. The top-k self-attention sequence model selects the most important context words, which effectively sparsifies the fully-connected graph from self-attention. Then we apply another GCN layer on top of this new sparsified graph, such that each of those important context words is directly reachable by the aspect term and the interaction between them could be learned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The goal of our proposed SA-GCN model is to predict the sentiment polarity of an aspect term in a given sentence. To improve the sentiment classification performance and provide explanations about the polarity prediction, we also introduce the opinion extraction task for joint training. The opinion extraction task aims to predict a tag sequence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of the Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "y o = [y 1 , y 2 , \u2022 \u2022 \u2022 , y n ] (y i \u2208 {B, I, O})", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of the Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "denotes the beginning of, inside of, and outside of opinion words. Figure 2 illustrates the overall architecture of the SA-GCN model. For each instance composing of a sentence-term pair, all the words in the sentence except for the aspect term are defined as context words.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 75, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Overview of the Model", |
|
"sec_num": "3.1" |
|
}, |
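
{

"text": "As a minimal illustration of this B/I/O scheme (the tokens and tags below are constructed by us for the running example, not taken from the datasets):\n\ntokens = ['Despite', 'a', 'slightly', 'limited', 'menu', ',', 'everything', 'prepared', 'is', 'done', 'to', 'perfection']\n# For the aspect 'menu', the opinion word 'limited' is a one-token span (B).\ntags_menu = ['O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']\n# For a food-related aspect, 'done to perfection' would be a B-I-I span.\ntags_food = ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I']\nassert len(tokens) == len(tags_menu) == len(tags_food)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Overview of the Model",

"sec_num": "3.1"

},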
|
{ |
|
"text": "BERT Encoder. We use the pre-trained BERT base model as the encoder to obtain embeddings of sentence words. Suppose a sentence consists of", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoder for Aspect Term and Context", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "n words {w 1 , w 2 , ..., w \u03c4 , w \u03c4 +1 ..., w \u03c4 +m , ..., w n }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoder for Aspect Term and Context", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where {w \u03c4 , w \u03c4 +1 ..., w \u03c4 +m\u22121 } stand for the aspect term containing m words. First, we construct the input as \"[CLS] + sentence + [SEP] + term + [SEP]\" and feed it into BERT. This input format enables explicit interactions between the whole sentence and the term such that the obtained word representations are term-attended. Then, we use average pooling to summarize the information carried by sub-words from BERT and obtain final embeddings of words X \u2208 R n\u00d7d B , d B refers to the dimensionality of BERT output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoder for Aspect Term and Context", |
|
"sec_num": "3.2" |
|
}, |
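
{

"text": "The following sketch shows one way to realize this encoding with the HuggingFace transformers library; the pooling code and variable names are ours, and the exact sub-word-to-word alignment used by the authors is not specified:\n\nimport torch\nfrom transformers import BertTokenizerFast, BertModel\n\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\nbert = BertModel.from_pretrained('bert-base-uncased')\n\nsentence = 'Despite a slightly limited menu , everything is done to perfection'\nterm = 'menu'\n\n# Encoding a sentence pair yields '[CLS] sentence [SEP] term [SEP]'.\nenc = tokenizer(sentence, term, return_tensors='pt')\nhidden = bert(**enc).last_hidden_state          # (1, num_subwords, d_B = 768)\n\n# Average sub-word vectors back to word level for the sentence (sequence 0);\n# word_ids maps each sub-word to its source word, None marks special tokens.\nword_ids, seq_ids = enc.word_ids(0), enc.sequence_ids(0)\npairs = [(j, w) for j, (w, s) in enumerate(zip(word_ids, seq_ids)) if s == 0]\nn = max(w for _, w in pairs) + 1\nX = torch.stack([hidden[0, [j for j, w in pairs if w == i]].mean(0)\n                 for i in range(n)])            # (n, d_B)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Encoder for Aspect Term and Context",

"sec_num": "3.2"

},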
|
{ |
|
"text": "With words representations X as node features and dependency tree as the graph, we employ a GCN to capture syntactic relations between the term node and its neighboring nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GCN over Dependency Trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "GCNs have shown to be effective for many NLP applications, such as relation extraction (Guo et al., 2019; Zhang et al., 2018) , reading comprehension (Kundu et al., 2019; Tu et al., 2019) , and aspect-level sentiment analysis (Huang and Carley, 2019; Sun et al., 2019) . In each GCN layer, a node aggregates the information from its one-hop neighbors and update its representation. In our case, the graph is represented by the dependency tree, where each word is treated as a single node and its representation is denoted as the node feature. The message passing on the graph can be formulated as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 105, |
|
"text": "(Guo et al., 2019;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 125, |
|
"text": "Zhang et al., 2018)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 170, |
|
"text": "(Kundu et al., 2019;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 171, |
|
"end": 187, |
|
"text": "Tu et al., 2019)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 250, |
|
"text": "(Huang and Carley, 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 268, |
|
"text": "Sun et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GCN over Dependency Trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "H (l) = \u03c3(AH (l\u22121) W (l\u22121) )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "GCN over Dependency Trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where H (l) \u2208 R n\u00d7d h is the output l-th GCN layer, H (0) \u2208 R n\u00d7d B is the input of the first GCN layer, and H (0) = X \u2208 R n\u00d7d B . A \u2208 R n\u00d7n denotes the adjacency matrix obtained from the dependency tree, note that we add a self-loop on each node.W represents the learnable weights, where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GCN over Dependency Trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "W (0) \u2208 R d B \u00d7d h and W (l\u22121) \u2208 R d h \u00d7d h . \u03c3 refers to ReLU activation function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GCN over Dependency Trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The node features are passed through the GCN layer, the representation of each node is now further enriched by syntax information from the dependency tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GCN over Dependency Trees", |
|
"sec_num": "3.3" |
|
}, |
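
{

"text": "A minimal PyTorch sketch of this layer (Eq. (1)), assuming dependency heads come from a parser such as Stanza; the helper and variable names are ours:\n\nimport torch\nimport torch.nn as nn\n\nclass GCNLayer(nn.Module):\n    # One message-passing step, Eq. (1): H(l) = ReLU(A H(l-1) W(l-1)).\n    def __init__(self, d_in, d_out):\n        super().__init__()\n        self.W = nn.Linear(d_in, d_out, bias=False)\n\n    def forward(self, A, H):\n        return torch.relu(A @ self.W(H))\n\ndef dep_tree_adjacency(heads, n):\n    # Undirected adjacency from dependency heads (heads[i] is the parent of\n    # word i, -1 for the root), with a self-loop on every node.\n    A = torch.eye(n)\n    for i, h in enumerate(heads):\n        if h >= 0:\n            A[i, h] = A[h, i] = 1.0\n    return A\n\nheads = [4, 4, 3, 4, -1]          # toy 5-word tree (hypothetical parse)\nA = dep_tree_adjacency(heads, 5)\nH = torch.randn(5, 768)           # BERT word embeddings X as H(0)\nH1 = GCNLayer(768, 256)(A, H)     # syntax-enriched node features",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "GCN over Dependency Trees",

"sec_num": "3.3"

},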
|
{ |
|
"text": "Although performing GCNs over dependency trees brings syntax information to the representation of each word, it could still limit interactions between aspect terms and long-distance opinion words that are essential for determining the sentiment polarity. In order to alleviate the problem, we apply a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
|
{ |
|
"text": "Opinion Extractor Figure 2 : The SA-GCN model architecture: the left part is the overview of the framework, the right part shows details of a SA-GCN block.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 26, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Selective Attention based GCN (SA-GCN) block to identify the most important context words and integrate their information into the representation of the aspect term. Multiple SA-GCN blocks can be stacked to form a deep model. Each SA-GCN block is composed of three parts: a multi-head selfattention layer, top-k selection and a GCN layer. Self-Attention. We apply the multi-head selfattention first to get the attention score matrices", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "A i score \u2208 R n\u00d7n (1 \u2264 i \u2264 L)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": ", L is the number of heads. It can be formulated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "A i score = (H k,i W k )(H q,i W q ) T \u221a d head (2) d head = d h L", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where The obtained attention score matrices can be considered as L fully-connected (complete) graphs, where each word is connected to all the other context words with different attention weights. This kind of attention score matrix has been used in attention-guided GCNs for relation extraction (Guo et al., 2019) . Although the attention weight is help-ful to differentiate different words, the fully connected graph still results in the aspect node fusing all the other words information directly, and the noise is often introduced during feature aggregation in GCNs, which further hurts the sentiment prediction. Therefore, we propose a top-k attention selection mechanism to sparsify the fully connected graph, and obtain a new sparse graph for feature aggregation for GCN. This is different from attentionguided GCNs (Guo et al., 2019) which performed feature aggregation over the fully-connected graph. Moreover, our experimental study (see Table 5 in Section 4) also confirms that the top-k selection is quite important and definitely beneficial to the aspect-term classification task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 313, |
|
"text": "(Guo et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 822, |
|
"end": 840, |
|
"text": "(Guo et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 947, |
|
"end": 954, |
|
"text": "Table 5", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "H * ,i = H * [:, :, i], *", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
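
{

"text": "A sketch of Eqs. (2)-(3), assuming the per-head key and query representations are head-wise slices of two linear projections (the variable names are ours):\n\nimport torch\nimport torch.nn as nn\n\nd_h, L = 256, 4                # hidden size and number of heads\nd_head = d_h // L              # Eq. (3): d_head = d_h / L\n\nW_k = nn.Linear(d_h, d_h, bias=False)\nW_q = nn.Linear(d_h, d_h, bias=False)\n\ndef attention_scores(H):\n    # Eq. (2): per-head scaled dot-product scores of shape (L, n, n).\n    n = H.size(0)\n    K = W_k(H).view(n, L, d_head).transpose(0, 1)   # (L, n, d_head)\n    Q = W_q(H).view(n, L, d_head).transpose(0, 1)\n    return (K @ Q.transpose(1, 2)) / d_head ** 0.5\n\nH = torch.randn(7, d_h)        # output of the dependency-tree GCN\nA_score = attention_scores(H)  # L attention score matrices",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "SA-GCN: Selective Attention based GCN",

"sec_num": "3.4"

},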
|
{ |
|
"text": "Top-k Selection. For each attention score matrix A i score , we find the top-k important context words for each word, which effectively remove some edges in A i score . The reason why we choose the top-k context words is that only a few words are sufficient to determine the sentiment polarity towards an aspect term. Therefore, we discard other words with low attention scores to get rid of irrelevant noisy words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We design two strategies for top-k selection, head-independent and head-dependent. Headindependent selection determines k context words by aggregating the decisions made by all heads and reaches to an agreement among heads, while headdependent policy enables each head to keep its own selected k words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Head-independent selection is defined as following: we first sum the attention score matrix of each head element-wise, and then find top-k context words using the mask generated by the function topk. For example, topk([0.3, 0.2, 0.5]) returns [1, 0, 1] if k is set to 2. Finally, we apply a softmax operation on the updated attention score matrix. The process could be formulated as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "A sum = L i=1 A i score (4) A m ind = topk(A sum )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "A i h ind = sof tmax(A m ind \u2022 A i score ) (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where A i score is the attention score matrix of i-th head, \u2022 denotes the element-wise multiplication.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Head-dependent selection finds top-k context words according to the attention score matrix of each head individually. We apply the softmax operation on each top-k attention matrix. This step can be formulated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "A i m dep = topk(A i score )", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{

"text": "A^{i}_{h_dep} = softmax(A^{i}_{m_dep} ⊙ A^{i}_{score}) (8)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "SA-GCN: Selective Attention based GCN",

"sec_num": "3.4"

},
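
{

"text": "The following sketch renders both selection strategies (Eqs. (4)-(8)) literally; note that a practical implementation might instead set the masked entries to -inf before the softmax so that they receive exactly zero weight (the function names are ours):\n\nimport torch\n\ndef topk_mask(scores, k):\n    # 0/1 mask keeping the k largest entries in each row (the topk\n    # function of Eqs. (5) and (7)).\n    mask = torch.zeros_like(scores)\n    return mask.scatter_(-1, scores.topk(k, dim=-1).indices, 1.0)\n\ndef head_independent(A_score, k):\n    # Eqs. (4)-(6): all heads vote via an element-wise sum, then share\n    # a single top-k mask.\n    m = topk_mask(A_score.sum(dim=0), k)         # (n, n), shared\n    return torch.softmax(m * A_score, dim=-1)    # (L, n, n)\n\ndef head_dependent(A_score, k):\n    # Eqs. (7)-(8): each head keeps its own top-k mask.\n    return torch.softmax(topk_mask(A_score, k) * A_score, dim=-1)\n\nA_score = torch.randn(4, 7, 7)                   # (L, n, n) self-attention scores\nA_ind = head_independent(A_score, k=3)\nA_dep = head_dependent(A_score, k=3)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "SA-GCN: Selective Attention based GCN",

"sec_num": "3.4"

},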
|
{ |
|
"text": "Compared to head-independent selection with exactly k words selected, head-dependent usually selects a larger number (than k) of important context words. Because each head might choose different k words thus more than k words are selected in total. From top-k selection we obtain L graphs based on the new attention scores and pass them to the next GCN layer. For simplicity, we will omit the head-ind and head-dep subscript in the later section. The obtained top-k score matrix A could be treated as an adjacency matrix, where A(p, q) denotes as the weight of the edge connecting word p and word q. Note that A does not contain self-loop, and we add a self-loop for each node. GCN Layer. After top-k selection on each attention score matrix A i score (A i score is not fully connected anymore), we apply a one-layer GCN and get updated node features as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "H (l,i) = \u03c3(A i\u0124 (l\u22121) W i ) +\u0124 (l\u22121) W i (9) H (l) = L i=1\u0124 (l,i)", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where\u0124 (l) \u2208 R n\u00d7d h is the output of the l-th SA-GCN block and composed by the concatenation of H (l,i) \u2208 R n\u00d7d head of i-th head,\u0124 (0) \u2208 R n\u00d7d h is the input of the first SA-GCN block and comes from the GCN layer operating on the dependency tree, A i is the top-k score matrix of i-th head, W i \u2208 R d h \u00d7d head denotes as the learnable weight matrix, and \u03c3 refers to ReLU activation function. The SA-GCN block can be applied multi times if needed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SA-GCN: Selective Attention based GCN", |
|
"sec_num": "3.4" |
|
}, |
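
{

"text": "A sketch of the per-head GCN with its residual term (Eqs. (9)-(10)); the class and variable names are ours:\n\nimport torch\nimport torch.nn as nn\n\nclass SAGCNLayer(nn.Module):\n    def __init__(self, d_h, L):\n        super().__init__()\n        self.heads = nn.ModuleList(\n            [nn.Linear(d_h, d_h // L, bias=False) for _ in range(L)])\n\n    def forward(self, A, H):\n        # A: (L, n, n) sparsified attention graphs; H: (n, d_h)\n        outs = []\n        for A_i, W_i in zip(A, self.heads):\n            HW = W_i(H)                              # (n, d_head)\n            outs.append(torch.relu(A_i @ HW) + HW)   # Eq. (9)\n        return torch.cat(outs, dim=-1)               # Eq. (10): (n, d_h)\n\nlayer = SAGCNLayer(d_h=256, L=4)\nH = torch.randn(7, 256)\nA = torch.softmax(torch.randn(4, 7, 7), dim=-1)      # stand-in for top-k graphs\nH_out = layer(A, H)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "SA-GCN: Selective Attention based GCN",

"sec_num": "3.4"

},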
|
{ |
|
"text": "Based on the output\u0124 o of the last SA-GCN block, we extract the aspect term node features from\u0124 o , and conduct average pooling to obtain the aspect term 1 representation\u0125 t \u2208 R 1\u00d7d h . Then we feed it into a two-layer MLP to calculate the final classification scores\u0177 s :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifier", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y s = sof tmax(W 2 \u03c3(W 1\u0125 T t ))", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Classifier", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "where W 2 \u2208 R C\u00d7dout and W 1 \u2208 R dout\u00d7d h denote the learnable weight matrix, C is the sentiment class number, and \u03c3 refers to ReLU activation function. We use cross entropy as the sentiment classification loss function:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifier", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L s = \u2212 C c=1 y s,c log\u0177 s,c + \u03bb \u03b8 2", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Classifier", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "where \u03bb is the coefficient for L2-regularization, \u03b8 denotes the parameters that need to be regularized, y s is the true sentiment label.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifier", |
|
"sec_num": "3.5" |
|
}, |
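
{

"text": "A sketch of the classifier and its loss (Eqs. (11)-(12)); the dimensions are placeholders, and F.cross_entropy folds the softmax of Eq. (11) into the log term of Eq. (12):\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nd_h, d_out, C = 256, 128, 3          # hidden, MLP, and class sizes (assumed)\nW1, W2 = nn.Linear(d_h, d_out), nn.Linear(d_out, C)\n\ndef classify(H_o, term_idx):\n    # Eq. (11): average-pool the aspect-term nodes, then a two-layer MLP.\n    h_t = H_o[term_idx].mean(dim=0)\n    return W2(torch.relu(W1(h_t)))   # unnormalized class scores\n\nH_o = torch.randn(7, d_h)            # output of the last SA-GCN block\nlogits = classify(H_o, term_idx=[2, 3])   # a two-word aspect term\ny = torch.tensor(0)                  # gold sentiment label\n\n# Eq. (12): cross entropy plus L2 regularization on the parameters theta.\nlam = 1e-6\nparams = list(W1.parameters()) + list(W2.parameters())\nloss_s = (F.cross_entropy(logits.unsqueeze(0), y.unsqueeze(0))\n          + lam * sum(p.pow(2).sum() for p in params))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Classifier",

"sec_num": "3.5"

},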
|
{ |
|
"text": "The opinion extraction shares the same input encoder, i.e. the SA-GCN as sentiment classification. Therefore we feed the output of SA-GCN to a linearchain Conditional Random Field (CRF) (Lafferty et al., 2001) , which is the opinion extractor. Specifically, based on the SA-GCN output\u0124 o , the output sequence", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 209, |
|
"text": "(Lafferty et al., 2001)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "y o = [y 1 , y 2 , \u2022 \u2022 \u2022 , y n ] (y i \u2208 {B, I, O})", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "is predicted as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(y o |\u0124 o ) = exp(s(\u0124 o , y o )) y o \u2208Y exp(s(\u0124 o , y o ))", |
|
"eq_num": "(13)" |
|
} |
|
], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s(\u0124 o , y o ) = n i (T y i\u22121 ,y i + P i,y i )", |
|
"eq_num": "(14)" |
|
} |
|
], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P i = W o\u0124o [i] + b o", |
|
"eq_num": "(15)" |
|
} |
|
], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "where Y denotes the set of all possible tag sequences, T y i\u22121 ,y i is the transition score matrix, W o and b o are learnable parameters. We apply Viterbi algorithm in the decoding phase. And the loss for opinion extraction task is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L o = \u2212log(p(y o |\u0124 o ))", |
|
"eq_num": "(16)" |
|
} |
|
], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
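
{

"text": "A compact sketch of the CRF head (Eqs. (13)-(16)), with the partition function of Eq. (13) computed by the standard forward algorithm; for brevity a start transition is folded into the first emission, and all names are ours:\n\nimport torch\nimport torch.nn as nn\n\nN_TAGS = 3                                   # B, I, O\n\nclass CRFHead(nn.Module):\n    def __init__(self, d_h):\n        super().__init__()\n        self.emit = nn.Linear(d_h, N_TAGS)   # Eq. (15): P = W_o H_o + b_o\n        self.T = nn.Parameter(torch.randn(N_TAGS, N_TAGS))  # transitions\n\n    def score(self, P, y):\n        # Eq. (14): emission plus transition score of one tag sequence.\n        s = P[0, y[0]]\n        for i in range(1, len(y)):\n            s = s + self.T[y[i - 1], y[i]] + P[i, y[i]]\n        return s\n\n    def nll(self, H_o, y):\n        # Eq. (16): -log p(y|H), with the denominator of Eq. (13) given\n        # by the forward algorithm over all tag sequences.\n        P = self.emit(H_o)                   # (n, N_TAGS)\n        alpha = P[0]\n        for i in range(1, P.size(0)):\n            alpha = torch.logsumexp(alpha.unsqueeze(1) + self.T, dim=0) + P[i]\n        return torch.logsumexp(alpha, dim=0) - self.score(P, y)\n\ncrf = CRFHead(d_h=256)\nH_o = torch.randn(7, 256)\ny_o = torch.tensor([2, 2, 0, 1, 2, 2, 2])    # O O B I O O O\nloss_o = crf.nll(H_o, y_o)\n# The total loss of Eq. (17) would then be: loss = loss_s + alpha * loss_o",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Opinion Extractor",

"sec_num": "3.6"

},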
|
{ |
|
"text": "Finally, the total training loss is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L = L s + \u03b1L o", |
|
"eq_num": "(17)" |
|
} |
|
], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "where \u03b1 \u2265 0 represents the weight of opinion extraction task. (Pontiki et al., 2015) and SemEval 2016 Task 5 (Pontiki et al., 2016) (14Rest, 15Rest and 16Rest) . We remove several examples with \"conflict\" labels. The statistics of these datasets are listed in Table 1 . The opinion words labeling for these four datasets come from (Fan et al., 2019) . Baselines. Since BERT (Devlin et al., 2018) model shows significant improvements over many NLP tasks, we directly implement SA-GCN based on BERT and compare with following BERT-based baseline models:", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 84, |
|
"text": "(Pontiki et al., 2015)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 159, |
|
"text": "(Pontiki et al., 2016) (14Rest, 15Rest and 16Rest)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 349, |
|
"text": "(Fan et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 395, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 267, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "1. BERT-SPC feeds the sentence and term pair into the BERT model and the BERT outputs are used for prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "2. AEN-BERT uses BERT as the encoder and employs several attention layers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "3. TD-GAT-BERT (Huang and Carley, 2019) utilizes GAT on the dependency tree to propagate features from the syntactic context. (Tang et al., 2020) proposes a mutual biaffine module to jointly consider the flat representations learnt from Transformer and graph-based representations learnt from the corresponding dependency graph in an iterative manner.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 39, |
|
"text": "(Huang and Carley, 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 126, |
|
"end": 145, |
|
"text": "(Tang et al., 2020)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Extractor", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "5. R-GAT+BERT (Wang et al., 2020) reshapes and prunes the dependency parsing tree to an aspectoriented tree rooted at the aspect term, and then employs relational GAT to encode the new tree for sentiment predictions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 33, |
|
"text": "(Wang et al., 2020)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DGEDT-BERT", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "In our experiments, we present results of the average and standard deviation numbers from seven runs of different random initialization. We use BERT-base model to compare with other published numbers. We implement our own BERT-baseline by directly applying a classifier on top of BERTbase encoder, BERT+2-layer GCN and BERT+4layer GCN are models with 2-layer and 4-layer GCN respectively on dependency trees with the BERT encoder. BERT+SA-GCN is our proposed SA-GCN model with BERT encoder. Joint SA-GCN refers to joint training of sentiment classification and opinion extraction tasks. Evaluation metrics. We train the model on training set, and evaluate the performance on test set in terms of accuracy and macro-F1 scores which are commonly-used in sentiment analysis (Sun et al., 2019; Tang et al., 2016; Wang et al., 2020) . Parameter Setting. During training, we set the learning rate to 10 \u22125 . The batch size is 4. We train the model up to 5 epochs with Adam optimizer. We obtain dependency trees using the Stanford Stanza (Qi et al., 2020) . The dimension of BERT output d B is 768. The hidden dimensions are selected from {128, 256, 512}. We apply dropout (Srivastava et al., 2014) and the dropout rate range is [0.1, 0.4]. The L2 regularization is set to 10 \u22126 . We use 1 or 2 SA-GCN blocks in our experiments. We choose k in top-k selection module from {2, 3} to achieve the best performance. For joint training, the weight range of opinion extraction loss is [0.05, 0.15]. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 771, |
|
"end": 789, |
|
"text": "(Sun et al., 2019;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 790, |
|
"end": 808, |
|
"text": "Tang et al., 2016;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 809, |
|
"end": 827, |
|
"text": "Wang et al., 2020)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1031, |
|
"end": 1048, |
|
"text": "(Qi et al., 2020)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1166, |
|
"end": 1191, |
|
"text": "(Srivastava et al., 2014)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DGEDT-BERT", |
|
"sec_num": "4." |
|
}, |
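
{

"text": "For reference, the settings above can be collected into one configuration; this is a sketch with names of our own choosing, not the authors' released code:\n\n# Hyper-parameters as described in this section; tuples give tuning ranges.\nconfig = {\n    'learning_rate': 1e-5,\n    'batch_size': 4,\n    'epochs': 5,\n    'optimizer': 'Adam',\n    'd_B': 768,                      # BERT output dimension\n    'hidden_dims': (128, 256, 512),  # selected per dataset\n    'dropout': (0.1, 0.4),\n    'l2_coeff': 1e-6,\n    'sa_gcn_blocks': (1, 2),\n    'top_k': (2, 3),\n    'opinion_loss_weight': (0.05, 0.15),\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4."

},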
|
{ |
|
"text": "We present results of the SA-GCN model in two aspects: classification performance and qualitative case study. Classification. Table 2 shows comparisons of SA-GCN with other baselines in terms of classification accuracy and Macro-F1. From this table, we observe that: SA-GCN achieves the best average results on 14Lap, 15Rest and 16Rest datasets, and obtains competitive results on 14Rest dataset. The joint training of sentiment classification and opin- DT: Dependency Tree; RDT: Reshaped Dependency Tree. \u2020: Head-independent based top-k Selection. The \"best\" denotes as the best performances of our SA-GCN model from the seven runs. Row \"Joint-SA-GCN\" reports the average and std of these seven runs. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 133, |
|
"text": "Table 2", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Label GCN SA-GCN Satay is one of those favorite haunts on Washington where the service and food is always on the money. positive neutral positive And the fact that it comes with an i5 processor definitely speeds things up positive neutral positive I know real Indian food and this was n't it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "negative neutral negative Table 3 : Top-k visualization: the darker the shade, the larger attention weight.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 33, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ion extraction tasks further boosts the performances on all datasets. Specifically, BERT+2-layer GCN outperforms BERT-baseline, which proves the benefit of using syntax information. BERT+4-layer GCN is actually worse than BERT+2-layer GCN, which shows that more GCN layers do not bring additional gain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our BERT+SA-GCN model further outperforms the BERT+2-layer GCN model. Because the SA-GCN block allows aspect terms to directly absorb the information from the most important context words that are not reachable within two hops in the dependency tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Besides, introducing the opinion extraction task provides more supervision signals for the top-k selection module, which benefits the sentiment classification task. Qualitative Case Study. To show the efficacy of the SA-GCN model on dealing long-hops between aspect term and its opinion words, we demonstrate three examples as shown in Table 3 . These sentences are selected from test sets of 14Lap and 14Rest datasets and predicted correctly by the SA-GCN model but wrongly by BERT+2-layer GCN. The important thing to note here, our SA-GCN model could provide explanation about the prediction according to the learned attention weights, while the GCN based model (BERT+2-layer GCN denoted as \"GCN\" in Table 3 ) cannot. Aspect terms are colored red. Top-3 words with the largest attention weights towards the aspect term are shaded. The darker the shade, the larger attention weight.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 343, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 709, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In all three examples the aspect terms are more than three hops away from essential opinion words (Please refer to Fig. 3 ), thus BERT+2-layer GCN model cannot learn the interactions between them within two layers, while SA-GCN model overcomes the distance limitation and locates right opinion words.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 121, |
|
"text": "Fig. 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Opinion Extraction. Table 4 shows the results of the opinion extraction task under the joint training setting. The reported numbers are obtained by averaging F1 of seven runs. In each run, the selected opinion F1 is generated from the best sentiment classification checkpoint. We compare our model with two baselines: IOG (Fan et al., 2019) encodes the aspect term information into context by an Inward-Outward LSTM to find the corresponding opinion words. ASTE (Peng et al., 2020 ) utilizes a GCN module to learn the mutual dependency relations between different words and to guide opinion term extraction. As shown in this table, the joint SA-GCN model outperforms two baseline models on all datasets, which demonstrates that the sentiment classification task is helpful for opinion extraction task as well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 340, |
|
"text": "(Fan et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 480, |
|
"text": "(Peng et al., 2020", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 27, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We further analyze our SA-GCN model from two perspectives: ablation study and sentence length analysis. Ablation Study. To demonstrate effectiveness of different modules in SA-GCN, we conduct ablation Figure 3 : Dependency trees of case study. Case 1: the aspect term \"food\" is four hops away from the opinion words \"favorite\" and \"on the money\". In cases 2 and 3, there are also three-hops distance between aspect terms and opinion words. studies in Table 5 . From this table, we observe that:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 209, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 451, |
|
"end": 458, |
|
"text": "Table 5", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "1. Effect of Top-k Selection. To examine the impact the top-k selection, we present the result of SA-GCN w/o top-k in Table 5 . We can see that without top-k selection, both accuracy and macro-F1 decrease on all datasets. This observation proves that the top-k selection helps to reduce the noisy context and locate top important opinion words. We also conduct the effect of the hyper-parameter k and the block number N on SA-GCN under head-independent and head-dependent selection respectively (see the supplemental material).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 125, |
|
"text": "Table 5", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "2. Effect of Head-independent and Headdependent Selection. As shown in the last row in Table 5 , head-independent selection achieves better results than head-dependent selection. This is because the mechanism of head-independent selection is similar to voting. By summing up the weight scores from each head, context words with higher scores in most heads get emphasized, and words that only show importance in few heads are filtered out. Thus all heads reach to an agreement and the top-k context words are decided. However for head-dependent selection, each head selects different top-k context words, which is more likely to choose certain unimportant context words and introduce noise to the model prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 94, |
|
"text": "Table 5", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Sentence Length Analysis. To quantify the ability of our SA-GCN model dealing with long-distance problem, we conduct sentence length analysis on 14Lap and 14Rest datasets. The assumption is that the longer the sentence, the more likely the longdistance problem occurs. The results are showed in Figure 4 . We measure the sentiment classification accuracy of BERT+2-layer GCN (denotes as GCN in Figure 4 ) and BERT+SA-GCN models under different sentence lengths. We observe that SA-GCN achieves better accuracy than GCN across all length ranges and is more advantageous when sentences are longer. To some extent, the results prove effectiveness of SA-GCN in dealing with long-distance problem.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 303, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 402, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Hyper-parameter Analysis. We examine the effect of the hype-parameter k and the block number N on our proposed model under head-independent and head-dependent selection respectively. Figure 5 shows the results on 14Rest dataset. Length of sentence 5a, we observe that: 1) the highest accuracy appears when k is equal to 3. As k becomes bigger, the accuracy goes down. The reason is that integrating information from too many context words could introduce distractions and confuse the representation of the current word. 2) Head-independent selection performs better than head-dependent selection as k increases. As mentioned before, compared with head-independent, head-dependent selection might have more than k context words contribute to the aggregation and introduce some noise.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 192, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "2. Effect of Block Number. Figure 5b shows the effect of different number of SA-GCN blocks.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 36, |
|
"text": "Figure 5b", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As the block number increases, the accuracy decreases for both head-independent and headdependent selection. A single SA-GCN block is sufficient for selecting top-k important context words. Stacking multiple blocks introduces more parameters and thus would lead to overfitting with such a small amount of training data. This might be the reason why stacking multiple blocks is not helpful. For our future work we plan to look into suitable deeper GNN models that are good for this task. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We propose a selective attention based GCN model for the aspect-level sentiment classification task. We first encode the aspect term and context words by pre-trained BERT to capture the interaction between them, then build a GCN on the dependency tree to incorporate syntax information. In order to handle the long distance between aspect terms and opinion words, we use the selective attention based GCN block, to select the top-k important context words and employ the GCN to integrate their information for the aspect term representation learning. Further, we adopt opinion extraction problem as an auxiliary task to jointly train with sentiment classification task. We conduct experiments on several SemEval datasets. The results show that SA-GCN achieve better performances than previous strong baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The aspect term might be composed of multiple term nodes in the graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our code will be released at the time of publication.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Recurrent attention network on memory for aspect sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongqian", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lidong", |
|
"middle": [], |
|
"last": "Bing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 conference on empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "452--461", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on mem- ory for aspect sentiment analysis. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 452-461.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of NAACL-HLT 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. Proceedings of NAACL-HLT 2019, page pages 4171-4186.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Adaptive recursive neural network for target-dependent twitter sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chuanqi", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duyu", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ke", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd annual meeting of the association for computational linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "49--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment clas- sification. In Proceedings of the 52nd annual meet- ing of the association for computational linguistics (volume 2: Short papers), pages 49-54.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Multi-grained attention network for aspect-level sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Feifan", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yansong", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3433--3442", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Feifan Fan, Yansong Feng, and Dongyan Zhao. 2018. Multi-grained attention network for aspect-level sen- timent classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 3433-3442.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Target-oriented opinion words extraction with target-fused neural sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Zhifang", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhen", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinyu", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujian", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiajun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2509--2518", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhifang Fan, Zhen Wu, Xinyu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence label- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 2509-2518.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Attention guided graph convolutional networks for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Zhijiang", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "241--251", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation ex- traction. 57th Annual Meeting of the Association for Computational Linguistics, page 241-251.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Effective attention modeling for aspect-level sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Ruidan", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Wee Sun Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dahlmeier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1121--1131", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Effective attention modeling for aspect-level sentiment classification. In Proceed- ings of the 27th International Conference on Com- putational Linguistics, pages 1121-1131.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Syntaxaware aspect level sentiment classification with graph attention networks", |
|
"authors": [ |
|
{ |
|
"first": "Binxuan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Carley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5472--5480", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Binxuan Huang and Kathleen M Carley. 2019. Syntax- aware aspect level sentiment classification with graph attention networks. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5472-5480.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Semisupervised classification with graph convolutional networks", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Kipf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Exploiting explicit paths for multi-hop reading comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Souvik", |
|
"middle": [], |
|
"last": "Kundu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tushar", |
|
"middle": [], |
|
"last": "Khot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Sabharwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2737--2747", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1263" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Souvik Kundu, Tushar Khot, Ashish Sabharwal, and Peter Clark. 2019. Exploiting explicit paths for multi-hop reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2737-2747. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando Cn", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Hierarchical attention based position-aware network for aspect-level sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Lishuang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anqiao", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "181--189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lishuang Li, Yang Liu, and AnQiao Zhou. 2018a. Hier- archical attention based position-aware network for aspect-level sentiment analysis. In Proceedings of the 22nd Conference on Computational Natural Lan- guage Learning, pages 181-189.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Deeper insights into graph convolutional networks for semi-supervised learning", |
|
"authors": [ |
|
{ |
|
"first": "Qimai", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhichao", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiao-Ming", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qimai Li, Zhichao Han, and Xiao-Ming Wu. 2018b. Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Transformation networks for target-oriented sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lidong", |
|
"middle": [], |
|
"last": "Bing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wai", |
|
"middle": [], |
|
"last": "Lam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bei", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "946--956", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1087" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018c. Transformation networks for target-oriented senti- ment classification. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 946- 956, Melbourne, Australia. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Attention modeling for targeted sentiment", |
|
"authors": [ |
|
{ |
|
"first": "Jiangming", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "572--577", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiangming Liu and Yue Zhang. 2017. Attention mod- eling for targeted sentiment. In Proceedings of the 15th Conference of the European Chapter of the As- sociation for Computational Linguistics: Volume 2, Short Papers, pages 572-577.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Phrasernn: Phrase recursive neural network for aspect-based sentiment analysis", |
|
"authors": [], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2509--2514", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thien Hai Nguyen and Kiyoaki Shirai. 2015. Phrasernn: Phrase recursive neural network for aspect-based sentiment analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2509-2514.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Knowing what, how and why: A near complete solution for aspect-based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Haiyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lidong", |
|
"middle": [], |
|
"last": "Bing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luo", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8600--8607", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, and Luo Si. 2020. Knowing what, how and why: A near complete solution for aspect-based sentiment analysis. In AAAI, pages 8600-8607.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "SemEval-2016 task 5: Aspect based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Pontiki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Galanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haris", |
|
"middle": [], |
|
"last": "Papageorgiou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ion", |
|
"middle": [], |
|
"last": "Androutsopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "AL-Smadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mahmoud", |
|
"middle": [], |
|
"last": "Al-Ayyoub", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orph\u00e9e", |
|
"middle": [], |
|
"last": "De Clercq", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V\u00e9ronique", |
|
"middle": [], |
|
"last": "Hoste", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marianna", |
|
"middle": [], |
|
"last": "Apidianaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Tannier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Loukachevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Kotelnikov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nuria", |
|
"middle": [], |
|
"last": "Bel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salud Mar\u00eda", |
|
"middle": [], |
|
"last": "Jim\u00e9nez-Zafra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00fcl\u015fen", |
|
"middle": [], |
|
"last": "Eryigit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--30", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S16-1002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Moham- mad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph\u00e9e De Clercq, V\u00e9ronique Hoste, Marianna Apidianaki, Xavier Tannier, Na- talia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud Mar\u00eda Jim\u00e9nez-Zafra, and G\u00fcl\u015fen Eryigit. 2016. SemEval-2016 task 5: Aspect based senti- ment analysis. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 19-30, San Diego, California. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "SemEval-2015 task 12: Aspect based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Pontiki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Galanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haris", |
|
"middle": [], |
|
"last": "Papageorgiou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ion", |
|
"middle": [], |
|
"last": "Androutsopoulos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "486--495", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S15-2082" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 486-495, Denver, Colorado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "SemEval-2014 task 4: Aspect based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Pontiki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Galanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pavlopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harris", |
|
"middle": [], |
|
"last": "Papageorgiou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ion", |
|
"middle": [], |
|
"last": "Androutsopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "27--35", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/S14-2004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: As- pect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35, Dublin, Ireland. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Stanza: A Python natural language processing toolkit for many human languages", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Bolton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Adapt or get left behind: Domain adaptation through bert language model finetuning for aspect-target sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rietzler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Stabinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Opitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Engl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.11860" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2019. Adapt or get left behind: Domain adaptation through bert language model finetuning for aspect-target sentiment classification. arXiv preprint arXiv:1908.11860.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Attentional encoder network for targeted sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Youwei", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiahai", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyue", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanghui", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1902.09314" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Youwei Song, Jiahai Wang, Tao Jiang, Zhiyue Liu, and Yanghui Rao. 2019. Attentional encoder network for targeted sentiment classification. arXiv preprint arXiv:1902.09314.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Dropout: a simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Aspect-level sentiment analysis via convolution over dependency tree", |
|
"authors": [ |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Mensah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yongyi", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xudong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5683--5692", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Sun, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. 2019. Aspect-level senti- ment analysis via convolution over dependency tree. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5683- 5692.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Effective LSTMs for target-dependent sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Duyu", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaocheng", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3298--3307", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016. Effective LSTMs for target-dependent sen- timent classification. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 3298- 3307, Osaka, Japan. The COLING 2016 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Dependency graph enhanced dualtransformer structure for aspect-based sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghong", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenliang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiji", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6578--6588", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.588" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Tang, Donghong Ji, Chenliang Li, and Qiji Zhou. 2020. Dependency graph enhanced dual- transformer structure for aspect-based sentiment classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 6578-6588, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Multi-hop reading comprehension across multiple documents by reasoning over heterogeneous graphs", |
|
"authors": [ |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guangtao", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yun", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2704--2713", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ming Tu, Guangtao Wang, Jing Huang, Yun Tang, Xi- aodong He, and Bowen Zhou. 2019. Multi-hop read- ing comprehension across multiple documents by reasoning over heterogeneous graphs. 57th Annual Meeting of the Association for Computational Lin- guistics, page pages 2704-2713.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Learning latent opinions for aspect-level sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Bailin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bailin Wang and Wei Lu. 2018. Learning latent opin- ions for aspect-level sentiment classification. In Thirty-Second AAAI Conference on Artificial Intel- ligence.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Relational graph attention network for aspect-based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weizhou", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunyi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Quan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3229--3238", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.295" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Wang, Weizhou Shen, Yunyi Yang, Xiaojun Quan, and Rui Wang. 2020. Relational graph attention net- work for aspect-based sentiment analysis. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 3229- 3238, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Target-sensitive memory networks for aspect sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Shuai", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sahisnu", |
|
"middle": [], |
|
"last": "Mazumder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mianwei", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "957--967", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuai Wang, Sahisnu Mazumder, Bing Liu, Mianwei Zhou, and Yi Chang. 2018. Target-sensitive mem- ory networks for aspect sentiment classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 957-967.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Bert post-training for review reading comprehension and aspect-based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Hu", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Shu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019. Bert post-training for review reading comprehension and aspect-based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Representation learning on graphs with jumping knowledge networks", |
|
"authors": [ |
|
{ |
|
"first": "Keyulu", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengtao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonglong", |
|
"middle": [], |
|
"last": "Tian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomohiro", |
|
"middle": [], |
|
"last": "Sonobe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ken-Ichi", |
|
"middle": [], |
|
"last": "Kawarabayashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefanie", |
|
"middle": [], |
|
"last": "Jegelka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 35th International Conference on Machine Learning", |
|
"volume": "80", |
|
"issue": "", |
|
"pages": "5453--5462", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keyulu Xu, Chengtao Li, Yonglong Tian, Tomo- hiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. 2018. Representation learning on graphs with jumping knowledge networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5453-5462. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Aspect-based sentiment classification with aspectspecific graph convolutional networks", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiuchi", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawei", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4560--4570", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Zhang, Qiuchi Li, and Dawei Song. 2019. Aspect-based sentiment classification with aspect- specific graph convolutional networks. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 4560-4570.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Graph convolution over pruned dependency trees improves relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2205--2215", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuhao Zhang, Peng Qi, and Christopher D Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 2205-2215.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Left-center-right separated neural network for aspect-based sentiment analysis with rotatory attention", |
|
"authors": [ |
|
{ |
|
"first": "Shiliang", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1802.00892" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shiliang Zheng and Rui Xia. 2018. Left-center-right separated neural network for aspect-based sentiment analysis with rotatory attention. arXiv preprint arXiv:1802.00892.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "Example of dependency tree with multi-hop between aspect term and determined context words.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "2", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"text": "Sentence length analysis on 14Lap and 14Rest.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"text": "Impact of k and block numbers on SA-GCN over Restaurant dataset.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "\u2208 {k: key, q: query}, H k \u2208 R n\u00d7d head \u00d7L and H q \u2208 R n\u00d7d head \u00d7L are the node representations from the previous GCN layer, W k \u2208 R d head \u00d7d head and W q \u2208 R d head \u00d7d head are learnable weight matrices, d h is the dimension of the input node feature, and d head is the dimension of each head.", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table><tr><td>Category</td><td>Model</td><td>Acc</td><td>14Rest Macro-F1</td><td>Acc</td><td>14Lap Macro-F1</td><td>Acc</td><td>15Rest Macro-F1</td><td>Acc</td><td>16Rest Macro-F1</td></tr><tr><td>BERT</td><td>BERT-SPC AEN-BERT</td><td>84.46 83.12</td><td>76.98 73.76</td><td>78.99 79.93</td><td>75.03 76.31</td><td>--</td><td>--</td><td>--</td><td>--</td></tr><tr><td>BERT+DT</td><td>TD-GAT-BERT DGEDT-BERT</td><td>83.0 86.3</td><td>-80.0</td><td>80.1 79.8</td><td>-75.6</td><td>-84.0</td><td>-71.0</td><td>-91.9</td><td>-79.0</td></tr><tr><td>BERT+RDT</td><td>R-GAT+BERT</td><td>86.60</td><td>81.35</td><td>78.21</td><td>74.07</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Ours</td><td colspan=\"2\">BERT-baseline 85.56 Joint SA-GCN (Best ) 87.68</td><td>82.45</td><td>81.03</td><td>77.71</td><td>85.26</td><td>69.71</td><td>92.0</td><td>81.86</td></tr></table>", |
|
"html": null, |
|
"text": "\u00b1 0.30 79.21 \u00b1 0.45 79.57 \u00b1 0.15 76.18 \u00b1 0.31 83.45 \u00b1 1.13 69.29 \u00b1 1.78 91.06 \u00b1 0.44 78.58 \u00b1 1.62 BERT+2-layer GCN 85.78 \u00b1 0.59 80.55 \u00b1 0.90 79.72 \u00b1 0.31 76.31 \u00b1 0.35 83.71 \u00b1 0.42 69.26 \u00b1 1.63 91.23 \u00b1 0.25 79.29 \u00b1 0.51 BERT+4-layer GCN 85.03 \u00b1 0.64 78.90 \u00b1 0.75 79.57 \u00b1 0.15 76.23 \u00b1 0.49 83.48 \u00b1 0.33 68.72 \u00b1 1.08 91.02 \u00b1 0.26 78.68 \u00b1 0.50 BERT+SA-GCN \u2020 86.16 \u00b1 0.23 80.54 \u00b1 0.38 80.31 \u00b1 0.47 76.99 \u00b1 0.59 84.18 \u00b1 0.29 69.42 \u00b1 0.81 91.41 \u00b1 0.39 80.39 \u00b1 0.93 Joint SA-GCN 86.57 \u00b1 0.81 81.14 \u00b1 0.69 80.61 \u00b1 0.32 77.12 \u00b1 0.51 84.63 \u00b1 0.33 69.1 \u00b1 0.78 91.54 \u00b1 0.26 80.68 \u00b1 0.92", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Comparison of SA-GCN with various baselines.", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"content": "<table><tr><td>Model</td><td>14Rest F1</td><td>14Lap F1</td><td>15Rest F1</td><td>16Rest F1</td></tr><tr><td>IOG</td><td>80.24</td><td>71.39</td><td>73.51</td><td>81.84</td></tr><tr><td>ASTE</td><td>83.15</td><td>76.03</td><td>78.02</td><td>83.73</td></tr><tr><td>Joint SA-</td><td/><td/><td/><td/></tr></table>", |
|
"html": null, |
|
"text": "GCN 83.72 \u00b1 0.51 76.79 \u00b1 0.33 80.99 \u00b1 0.43 83.83 \u00b1 0.50", |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Opinion extraction results.", |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"content": "<table><tr><td>Model</td><td>Acc</td><td>14Rest Macro-F1</td><td>Acc</td><td>14Lap Macro-F1</td><td>Acc</td><td>15Rest Macro-F1</td><td>Acc</td><td>16Rest Macro-F1</td></tr><tr><td>SA-</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"html": null, |
|
"text": "GCN (head-ind) 86.16 \u00b1 0.23 80.54 \u00b1 0.38 80.31 \u00b1 0.47 76.99 \u00b1 0.59 84.18 \u00b1 0.29 69.42 \u00b1 0.81 91.41\u00b1 0.39 80.39 \u00b1 0.93 SA-GCN w/o top-k 85.06 \u00b1 0.68 78.88 \u00b1 0.83 79.96 \u00b1 0.14 76.64 \u00b1 0.58 83.15 \u00b1 0.41 68.74 \u00b1 1.48 90.92 \u00b1 0.45 78.18 \u00b1 0.71 SA-GCN (head-dep) 85.41 \u00b1 0.21 79.19 \u00b1 0.68 80.17 \u00b1 0.55 76.83 \u00b1 0.59 83.68 \u00b1 0.54 68.81 \u00b1 1.39 91.01 \u00b1 0.40 78.88 \u00b1 1.04 head-dep: head-dependent based top-k selection.", |
|
"type_str": "table" |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Ablation study of SA-GCN.", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |