{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:10:37.023554Z" }, "title": "ABSA-Bench: Towards the Unified Evaluation of Aspect-based Sentiment Analysis Research", "authors": [ { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Adelaide", "location": {} }, "email": "abhishek.das@student.adelaide.edu.au" }, { "first": "Wei", "middle": [ "Emma" ], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Adelaide", "location": {} }, "email": "wei.e.zhang@adelaide.edu.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Aspect-Based Sentiment Analysis (ABSA) has gained much attention in recent years. ABSA is the task of identifying fine-grained opinion polarity towards a specific aspect associated with a given target. However, there is a lack of benchmarking platform to provide a unified environment under consistent evaluation criteria for ABSA, resulting in the difficulties for fair comparisons. In this work, we address this issue and define a benchmark, ABSA-Bench 1 , by unifying the evaluation protocols and the pre-processed public datasets in a Web-based platform. ABSA-Bench provides two means of evaluations for participants to submit their predictions or models for online evaluation. Performances are ranked in the leader board and a discussion forum is supported to serve as a collaborative platform for academics and researchers to discuss queries.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Aspect-Based Sentiment Analysis (ABSA) has gained much attention in recent years. ABSA is the task of identifying fine-grained opinion polarity towards a specific aspect associated with a given target. However, there is a lack of benchmarking platform to provide a unified environment under consistent evaluation criteria for ABSA, resulting in the difficulties for fair comparisons. In this work, we address this issue and define a benchmark, ABSA-Bench 1 , by unifying the evaluation protocols and the pre-processed public datasets in a Web-based platform. ABSA-Bench provides two means of evaluations for participants to submit their predictions or models for online evaluation. Performances are ranked in the leader board and a discussion forum is supported to serve as a collaborative platform for academics and researchers to discuss queries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Aspect-based sentiment analysis (ABSA) has gained a lot of attention in recent years from both industries and academic communities as it provides a more practical solution to real life problems. The goal of ABSA is to identify the aspects and infer the sentiment expressed for each aspect. For example, given a sentence I hated their service, but their food was great, the sentiment polarities for the aspect service and food are negative and positive respectively. Conventional techniques for ABSA are mostly traditional machine learning models based on lexicons and syntactic features (Jiang et al., 2011; Kiritchenko et al., 2014; Vo and Zhang, 2015) . Therefore, the performance of such models depend on hand-crafted features. 
Recent progress has been made with the advancement of Deep Neural Networks (DNNs), with some of the models being considered state-of-the-art (Xu et al.). [Footnote 1: https://absa-bench.com/] [Figure 1: The General Process of ABSA] Among them, the attention mechanism has played an important role, outperforming previous approaches by paying more attention to the context words that are semantically closer to the aspect terms (Luong et al., 2015; Wang et al., 2016; Chen et al., 2017; Liu et al., 2018; Ma et al., 2017). The most recent approaches adopt the pre-trained Bidirectional Encoder Representations from Transformers (BERT) architecture (Devlin et al., 2019), generating significant performance gaps over other approaches due to BERT's capability of capturing bi-directional contextual information and providing rich token-wise representations. Introducing the BERT architecture into the ABSA task naturally divides the approaches into Non-BERT based models and BERT-based models. Figure 1 depicts the general process of both groups of supervised ABSA methods.", "cite_spans": [ { "start": 587, "end": 607, "text": "(Jiang et al., 2011;", "ref_id": "BIBREF4" }, { "start": 608, "end": 633, "text": "Kiritchenko et al., 2014;", "ref_id": "BIBREF5" }, { "start": 634, "end": 653, "text": "Vo and Zhang, 2015)", "ref_id": "BIBREF17" }, { "start": 876, "end": 881, "text": "(Xu et al.)", "ref_id": null }, { "start": 1146, "end": 1166, "text": "(Luong et al., 2015;", "ref_id": "BIBREF8" }, { "start": 1167, "end": 1185, "text": "Wang et al., 2016;", "ref_id": "BIBREF19" }, { "start": 1186, "end": 1204, "text": "Chen et al., 2017;", "ref_id": "BIBREF1" }, { "start": 1205, "end": 1222, "text": "Liu et al., 2018;", "ref_id": "BIBREF7" }, { "start": 1223, "end": 1239, "text": "Ma et al., 2017)", "ref_id": "BIBREF9" }, { "start": 1365, "end": 1386, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 906, "end": 914, "text": "Figure 1", "ref_id": null }, { "start": 1702, "end": 1710, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although this research area has gained much attention in recent years, it lacks unbiased comparisons overall. As deep learning based models perform differently on various hardware and with different deep learning tools, existing works typically choose to either re-run or re-implement the selected comparative models in their own experimental environments. We also observe a few works directly referring to the results presented in the corresponding papers for comparison. This makes it difficult to have a general overview of the performances of the state-of-the-art models and has motivated us to build a benchmarking platform for ABSA research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing benchmarking research works are mostly conducted on evaluating single tasks, and none of them supports aspect-based sentiment analysis (Rajpurkar et al., 2016, 2018; Choi et al., 2018; Aguilar et al., 2020; Zhu et al., 2018). In this project, we fill this gap by proposing a unified evaluation process and building a unified platform for comparing different ABSA models. We name our work ABSA-Bench. ABSA-Bench particularly focuses on supervised approaches and is suitable for both DNN-based and conventional models. It provides two means of evaluation, namely Results Evaluation and Model Evaluation. 
Results evaluation is done by comparing the ground-truth with the model-generated predictions submitted by the researchers. Model evaluation supports model submission and online evaluation, which better preserves the integrity of the predictions. To aid the model evaluation, a Web-based tool is developed to provide an objective evaluation environment. The background computation power of ABSA-Bench is supported by the Google Cloud Platform (GCP) 2 . After evaluation, the performance results are ranked in the ABSA-Bench leader board. ABSA-Bench further supports a discussion forum for queries, comments and discussions regarding the model implementations, performances, rankings and new ideas.", "cite_spans": [ { "start": 142, "end": 165, "text": "(Rajpurkar et al., 2016", "ref_id": "BIBREF13" }, { "start": 166, "end": 191, "text": "(Rajpurkar et al., , 2018", "ref_id": "BIBREF12" }, { "start": 192, "end": 210, "text": "Choi et al., 2018;", "ref_id": "BIBREF2" }, { "start": 211, "end": 232, "text": "Aguilar et al., 2020;", "ref_id": "BIBREF0" }, { "start": 233, "end": 250, "text": "Zhu et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, this is the first platform created with diverse functionalities to support the understanding of the state-of-the-art ABSA works. The contributions of the work include: i) providing a unified ABSA evaluation platform which enables researchers to evaluate their models on the same benchmark dataset with a consistent metric under the same computation environment; ii) supporting a leader board for easy comparison, and a discussion forum for sharing ideas; iii) presenting the comparisons of several recent research works based on their performances on the ABSA-Bench platform through a re-run or re-implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The related benchmarking platforms for natural language processing models can be categorized into two groups: single-task benchmarks and multiple-task benchmarks. SQuAD (Rajpurkar et al., 2016, 2018) 3 is a representative benchmark for a single task. It provides a platform for evaluating question answering models on the SQuAD dataset. Researchers can either submit the prediction results or their models, which will be run on CodaLab Worksheets 4 . A leader board ranks the performances of all the evaluated models. QuAC (Choi et al., 2018) 5 imitates SQuAD, but for context-aware question answering models, for which the questions and answers are provided in dialogue form. GLUE 6 provides a collection of tools for evaluating natural language understanding models across a diverse set of existing tasks. It allows researchers to submit their prediction files for comparison. Error analysis is also enabled. LinCE (Aguilar et al., 2020) 7 is a centralized benchmark for linguistic code-switching evaluation that combines ten corpora covering four different code-switched language pairs and four sub-tasks. Similar to GLUE, LinCE enables result submission, but does not support online model execution. Texygen (Zhu et al., 2018) is a benchmarking platform to support research on open-domain text generation models. It implements a majority of text generation models and aims to standardize the research in this field. 
However, Texygen does not allow online submission and evaluation.", "cite_spans": [ { "start": 171, "end": 194, "text": "(Rajpurkar et al., 2016", "ref_id": "BIBREF13" }, { "start": 195, "end": 220, "text": "(Rajpurkar et al., , 2018", "ref_id": "BIBREF12" }, { "start": 545, "end": 564, "text": "(Choi et al., 2018)", "ref_id": "BIBREF2" }, { "start": 565, "end": 566, "text": "5", "ref_id": null }, { "start": 945, "end": 967, "text": "(Aguilar et al., 2020)", "ref_id": "BIBREF0" }, { "start": 1240, "end": 1257, "text": "(Zhu et al., 2018", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "ABSA-Bench is most akin to SQuAD, but unlike SQuAD, it focuses on the ABSA task. ABSA-Bench provides two means of evaluation, similar to SQuAD and QuAC. The online evaluation in ABSA-Bench is supported by JupyterHub, which has key features such as customization, flexibility and scalability. This distinguishes it from other similar platforms. JupyterHub also serves a variety of environments. It can be easily containerised and can therefore be scaled up for a greater number of users. A number of authentication options, such as OAuth and GitHub, are also supported, making it flexible for users. ABSA-Bench also supports an online discussion forum for researchers to exchange their ideas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "There are relatively few research efforts on providing a comprehensive benchmarking platform for multiple NLP tasks. DecaNLP (McCann et al., 2018) 8 is the only one found in this category. It spans ten NLP tasks and recasts these tasks as question answering over a context using automatic transformations. Therefore, DecaNLP evaluates the models under the rubrics of assessing question answering models. DecaNLP considers general sentiment analysis, but does not include ABSA.", "cite_spans": [ { "start": 118, "end": 147, "text": "DecaNLP (McCann et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Aspect-based sentiment analysis is a fundamental task in the sentiment analysis research field (Pontiki et al., 2014), which comprises three sub-tasks: aspect extraction, sentiment extraction and aspect-based sentiment classification. In recent years, deep neural networks have gained a lot of attention in solving the problem of ABSA. More recently, BERT (Devlin et al., 2019) has shown its effectiveness in alleviating the effort of feature engineering and has achieved state-of-the-art results on this task. However, these performance improvements have been achieved at a high computational cost. As a result, these models are costly to train and evaluate. To better understand the large number of DNN-based ABSA models, a categorization is essential. Therefore, a taxonomy has been designed in this study which categorises the different supervised deep learning techniques, dividing all approaches broadly into two categories: BERT based and Non-BERT based models. 
Note that we focus on supervised approaches in this work.", "cite_spans": [ { "start": 91, "end": 113, "text": "(Pontiki et al., 2014)", "ref_id": "BIBREF11" }, { "start": 352, "end": 373, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Taxonomy and the Models", "sec_num": "3" }, { "text": "Although the platform is designed for researchers to evaluate their models per their own needs, we examined some representative models as examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.1" }, { "text": "CNN. We adopt a Convolutional Neural Network model (Xue and Li, 2018) based on convolution operations and gating mechanisms to represent the CNN-based ABSA models. LSTM. A vanilla Long Short-Term Memory network represents the vanilla RNN-based models. TD-LSTM. Target-Dependent LSTM (Tang et al., 2016a) is a modified LSTM. It consists of two LSTMs, which model the preceding and subsequent contexts surrounding the target words (aspect terms) respectively, so that the contexts in both directions can be used as the feature representations for classifying sentiment in a later stage. TC-LSTM. Target-Connection LSTM (Tang et al., 2016a) extends TD-LSTM by adding a target-connection component in order to capture the interactions between the target word and its contexts. This component is basically a concatenation of the word embedding and the target vector at each position. ATAE-LSTM. The ATtention-based LSTM with Aspect Embedding (Wang et al., 2016) model appends the aspect embedding to each word input vector to capture aspect information. To capture the inter-aspect dependencies, the aspect-focused sentence representations are fed into another LSTM to model the temporal dependency.", "cite_spans": [ { "start": 281, "end": 300, "text": "(Tang et al., 2016a", "ref_id": "BIBREF15" }, { "start": 614, "end": 634, "text": "(Tang et al., 2016a)", "ref_id": "BIBREF15" }, { "start": 920, "end": 939, "text": "(Wang et al., 2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Non-BERT based Models", "sec_num": "3.1.1" }, { "text": "Sentiment Classification model (Liu et al., 2018) improves the attention mechanism with the help of two attention-enhancing mechanisms, i.e., sentence-level content attention and context attention. This ensures that the model is capable of taking into account the word order information, the aspect information and the correlation between the word and the aspect to calculate the attention weights, and of embedding them into a series of customized memories.", "cite_spans": [ { "start": 31, "end": 49, "text": "(Liu et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "CABASC. Content Attention Based Aspect based", "sec_num": null }, { "text": "Network considers attention mechanisms on both the aspect and the context (Ma et al., 2017). It uses two attention-based LSTMs which interactively capture the key aspect terms and the important words of the context. The final representation of the sentence is produced by concatenating the representations of the aspect and its context, and is then passed to a softmax layer for sentiment classification.", "cite_spans": [ { "start": 74, "end": 91, "text": "(Ma et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "IAN. Interactive Attention", "sec_num": null }, { "text": "MemNet. 
A Memory Network-based model (Tang et al., 2016b) adopts an attention mechanism with multi-hop layers, which are stacked to select informative evidence from an external memory.", "cite_spans": [ { "start": 37, "end": 57, "text": "(Tang et al., 2016b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "IAN. Interactive Attention", "sec_num": null }, { "text": "The Recurrent Attention mechanism based on Memory network (Chen et al., 2017) targets the cases where aspect terms are distant from the corresponding sentiment information. RAM introduces multiple attentions to distill the related information from its position-weighted memory, and a recurrent network for sentiment classification.", "cite_spans": [ { "start": 58, "end": 77, "text": "(Chen et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "RAM.", "sec_num": null }, { "text": "BERT-SPC. In this model, a pre-trained BERT model is fine-tuned with just one additional layer (Devlin et al., 2019). For a down-stream task like ABSA, the input representation is able to encode both a single sentence and a pair of sentences.", "cite_spans": [ { "start": 96, "end": 117, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "BERT based Models", "sec_num": "3.1.2" }, { "text": "AEN-BERT. The Attentional Encoder Network (Song et al., 2019) is built upon a BERT embedding layer along with an attentional encoder layer and a target-specific attention layer. LCF-BERT. In this model (Zeng et al., 2019), a Local Context Focus (LCF) mechanism is proposed for aspect-based sentiment classification based on multi-head self-attention. It utilizes the Context Features Dynamic Mask and Context Features Dynamic Weighted layers to assign more attention weights to the local context words. A BERT-shared layer is adopted to capture the internal long-term dependencies of the local and global contexts. BERT-PT. The BERT Post-Training (Xu et al.) work enhances the fine-tuning performance of BERT for Review Reading Comprehension (RRC) by adding a post-training step. This approach was then generalised to perform the tasks of aspect extraction and aspect sentiment classification in aspect-based sentiment analysis.", "cite_spans": [ { "start": 42, "end": 61, "text": "(Song et al., 2019)", "ref_id": "BIBREF14" }, { "start": 202, "end": 221, "text": "(Zeng et al., 2019)", "ref_id": "BIBREF22" }, { "start": 652, "end": 663, "text": "(Xu et al.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "BERT based Models", "sec_num": "3.1.2" }, { "text": "This section introduces the ABSA-Bench platform, including the two means of ABSA benchmarking evaluation provided and our insights into the design and implementation of ABSA-Bench.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ABSA-Bench", "sec_num": "4" }, { "text": "To evaluate a model's performance, we provide a way for researchers to submit their prediction results on the formatted test set to ABSA-Bench. The submission file needs to follow the structure required by ABSA-Bench, which is simply the sentence ID and aspect terms along with the predicted sentiment polarity. We also make available an evaluation script that we will use for the official evaluations. The evaluation script will measure the model performance based on the Macro F1 score, i.e., the unweighted average of the per-class F1 scores, where each F1 is the harmonic mean of Precision and Recall. 
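To make the metric concrete, a minimal sketch of the Macro F1 computation follows; the two label lists stand for aligned gold and submitted polarities, and a production script would additionally validate sentence IDs and aspect terms against the test set:

```python
from collections import defaultdict

def macro_f1(gold, pred):
    """Per-class F1 (harmonic mean of precision and recall),
    averaged over classes with equal weights."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    scores = []
    for c in set(gold) | set(pred):
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Toy check: three aspect predictions, with one neutral case missed.
print(macro_f1(["negative", "positive", "neutral"],
               ["negative", "positive", "positive"]))  # ~0.5556
```
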
Macro F1 is usually a more informative measure than plain accuracy when there is an uneven class distribution, which is the case in our benchmarking dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Results", "sec_num": "4.1" }, { "text": "The other means of evaluation supported by ABSA-Bench is model evaluation. We provide a unified online computation environment for researchers to train and test their models. We use the widely-adopted JupyterHub 9 , to which researchers can submit their model as a Jupyter Notebook file. [Footnote 9: https://jupyter.org/hub] Once the trained model is submitted, it receives official scores on the test set. The platform also provides documentation to help researchers understand how to use the platform. Please refer to Section 4.3.2 for more details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Models", "sec_num": "4.2" }, { "text": "In order to enable the above-mentioned evaluations, we design and implement a Web-based benchmark platform that enables researchers to evaluate their ABSA models in a unified environment for fair comparison. The performances, measured in Macro F1 score, are ranked in the leader board of the platform, with a discussion forum provided for researchers to exchange ideas. Specifically, the platform consists of three primary elements: Leader board, Evaluation Portal, and Discussion forum. Figure 2 shows these three elements in this platform.", "cite_spans": [], "ref_spans": [ { "start": 488, "end": 496, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Web-based Platform", "sec_num": "4.3" }, { "text": "We maintain a leader board in ABSA-Bench based on the evaluations of some of the state-of-the-art ABSA models so far. The performances of the models that are submitted by the authors will be added to the leader board and assigned a proper ranking position. For a fair comparison, the BERT based and Non-BERT based models are ranked separately in two tabs of the leader board.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Leader Board", "sec_num": "4.3.1" }, { "text": "The computation power is supported by Google Cloud Platform, which serves the JupyterHub instance that is integrated with our platform. A preconfigured environment dedicated to ABSA will be created for participants. This environment will support complex computations and provide a task bundle which contains the necessary dependencies for the task and the evaluations. Users need to create an account and be authenticated to participate in the challenge. They can train and evaluate their models in their own workspaces, leveraging the resources provided and managed by system administrators, who can test the submitted prediction files and assess the submitted models under a unified standard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Portal", "sec_num": "4.3.2" }, { "text": "A discussion forum is provided for participants once they create their account.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion forum", "sec_num": "4.3.3" }, { "text": "This will serve as a collaborative environment where researchers can post queries and collaborate. It will be especially helpful for new academics making an initial start in this field. 
Resolving concerns through such a collaborative effort will save immense time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion forum", "sec_num": "4.3.3" }, { "text": "This section presents a discussion of the dataset, including the motivation for its choice, the implementation settings for the experiments, and an objective comparison of the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Comparison", "sec_num": "5" }, { "text": "We adopted SemEval14 Task 4 (Pontiki et al., 2014) as the benchmarking dataset. This is because it is the only widely accepted benchmark dataset for ABSA and has successfully fostered ABSA research since its release. Although later SemEval competitions also contain ABSA tasks, those datasets are derived from the SemEval14 version with small updates that shift the evaluation focus away from ABSA. Therefore, we retain the original version, intending to be more focused.", "cite_spans": [ { "start": 28, "end": 50, "text": "(Pontiki et al., 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "In SemEval14 ABSA Task 4, there are two domain-specific subsets for laptop and restaurant reviews respectively, consisting of over 6,000 sentences with aspect-level human-authored labels for evaluation. Each single- or multi-word aspect term is assigned one of the following polarities based on the sentiment that is expressed in the sentence towards it: positive, negative, neutral, and conflict. Restaurants includes annotations for coarse aspect categories, aspect terms, aspect term-specific polarities, and aspect category-specific polarities. Laptop includes annotations for aspect terms and their polarities. We removed the data with the conflict sentiment polarity and the samples without aspect terms, obtaining ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "We evaluated some of the state-of-the-art ABSA models as introduced in Section 3.1. To provide a unified computation environment, we made necessary adjustments and expect researchers to follow these adjustments and submit their models to ABSA-Bench for fair comparisons. For Non-BERT-based models, GloVe 10 is adopted as the pre-trained word embedding. We uniformly set the dimension of the hidden state vectors to 300 and that of the position embedding to 100. We initialised the weight matrices with the uniform distribution U (\u22120.1, 0.1), and the biases were initialised to zero. We experimented with a couple of optimizers and finally selected Adam for all the models to maintain uniformity; a configuration sketch is given at the beginning of Section 5.3. We kept the learning rate at 2e \u2212 5 and used 1e \u2212 5 as the value of the L2 regularisation parameter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Adjustment", "sec_num": "5.2" }, { "text": "For BERT-based models, we used a pre-trained BERT 11 model to generate word vectors of sequences. All the models were implemented using the PyTorch framework. Optimal parameters were selected during the training stage and the best-performing models were selected for evaluation. We kept the default settings for other parameters as set in the original papers of each work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Adjustment", "sec_num": "5.2" }, { "text": "We report the evaluation results in this section, including prediction performance, run-time statistics and model size comparisons. 
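As a concrete illustration of the unified Non-BERT settings described in Section 5.2, a minimal PyTorch sketch follows; the nn.LSTM is only a stand-in encoder rather than any specific examined model, and the 300-dimensional input is our assumption for the GloVe vectors, while the remaining values come directly from Section 5.2:

```python
import torch
import torch.nn as nn

HIDDEN_DIM = 300  # unified hidden-state dimension (Section 5.2)
EMBED_DIM = 300   # assumed dimensionality of the pre-trained GloVe vectors

# Stand-in encoder; any examined Non-BERT ABSA model would take its place.
model = nn.LSTM(input_size=EMBED_DIM, hidden_size=HIDDEN_DIM, batch_first=True)

# Weight matrices drawn from U(-0.1, 0.1); biases initialised to zero.
for name, param in model.named_parameters():
    if "weight" in name:
        nn.init.uniform_(param, -0.1, 0.1)
    else:
        nn.init.zeros_(param)

# Adam with the unified learning rate 2e-5 and L2 regularisation 1e-5
# (weight_decay is PyTorch's L2 penalty).
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, weight_decay=1e-5)
```
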
Table 1 reports the Macro F1 scores (in %) of the examined models. We have compared BERT based models and Non-BERT based models separately, as BERT based models have larger model sizes. BERT-based models achieved much higher F1 scores in comparison to Non-BERT based models, as they do for many other NLP tasks. The LCF-BERT model provided the best performance among BERT based models in our experiments. Among all the Non-BERT based models, CABASC obtained the highest F1 score on Laptop, while TD-LSTM performed best on Restaurants. TC-LSTM outperforms the basic LSTM model. The results confirm that the context attention mechanism is more effective than the position attention mechanism. IAN outperforms ATAE-LSTM, as it not only models the context representation, but also models the aspect representation by using the attention mechanism. Figure 3 illustrates the comparison of the model run-times, i.e., training and evaluation time. Table 2 presents the comparison of the model sizes in terms of the number of parameters and the size of the memory used during model training. From Figure 3 and Table 2, we observe the huge differences in the model sizes and execution times between BERT-based and Non-BERT based models. It is worth noting that, for our experiments and also in the original papers, pre-trained BERT models have been used, and therefore the model run-time signifies the time taken for fine-tuning the BERT model for the particular down-stream task.", "cite_spans": [], "ref_spans": [ { "start": 133, "end": 140, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 927, "end": 935, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 1020, "end": 1027, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1168, "end": 1177, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 1182, "end": 1189, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5.3" }, { "text": "Difference in the performances. Compared to the values provided by the original papers, the performances of the examined models under our benchmarking environment ABSA-Bench show different Macro F1 scores for all the models. It is easy to understand that the differences result from the different data pre-processing, implementation settings and evaluation environments. However, it is difficult to compare the models by just referring to the papers. For example, the Macro F1 value for RAM is 70.51% for Laptop in (Li et al.), while the Macro F1 value for RAM is 71.35% for the same dataset in (Zeng et al., 2019). Given a new model with 71.00% Macro F1 on Laptop, we could not know whether it is better than RAM or not. This inconsistency motivates us to build an evaluation process under unified settings. Our platform aims to overcome these inconsistencies.", "cite_spans": [ { "start": 519, "end": 530, "text": "(Li et al.)", "ref_id": null }, { "start": 589, "end": 608, "text": "(Zeng et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Discussion", "sec_num": "5.4" }, { "text": "Trade-off between the performances and the computational costs. While BERT based models overall performed much better than Non-BERT based models, they are computationally more expensive. Even though pre-trained BERT models were used in the experiments, there was a significant increase in the computational cost, which was mainly due to the huge difference in parameter size. 
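For reference, parameter counts of the kind reported in Table 2 can be read directly off a PyTorch model; a minimal sketch follows (it counts trainable parameters only, while the memory column also reflects buffers and training state, so the two figures are related but not interchangeable):

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> float:
    """Trainable parameters in millions, comparable to Table 2's Params column."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Example with a stand-in encoder (not any specific examined model):
print(count_parameters(nn.LSTM(input_size=300, hidden_size=300)))  # ~0.72
```
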
These models also limit research to industrial or large-scale research labs, while researchers without access to large-scale computation will be constrained in their experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Discussion", "sec_num": "5.4" }, { "text": "In this work, we design and implement an ABSA benchmarking evaluation process by providing two means of online evaluation and a Web-based platform. A leader board and a discussion forum are enabled to rank the state-of-the-art ABSA research and to share research ideas, respectively. We examined some recent models and compared their actual differences under the unified platform ABSA-Bench. This platform will help to understand the implementations of different deep learning models performing the task of ABSA. This understanding can then be utilised to improve the existing models. We intend to update our benchmarking platform with new tasks and datasets, which will encourage quantitatively-informed research and learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "6" }, { "text": "https://cloud.google.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://rajpurkar.github.io/SQuAD-explorer/ 4 https://worksheets.codalab.org/ 5 http://quac.ai/ 6 https://gluebenchmark.com/ 7 https://ritual.uh.edu/lince/home", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://decanlp.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://nlp.stanford.edu/projects/glove/ 11 https://github.com/google-research/bert", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This project is sponsored by Google Academic Research Grants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation", "authors": [ { "first": "Gustavo", "middle": [], "last": "Aguilar", "suffix": "" }, { "first": "Sudipta", "middle": [], "last": "Kar", "suffix": "" }, { "first": "Thamar", "middle": [], "last": "Solorio", "suffix": "" } ], "year": 2020, "venue": "Proc. of the LREC 2020", "volume": "", "issue": "", "pages": "1803--1813", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Aguilar, Sudipta Kar, and Thamar Solorio. 2020. LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation. In Proc. of the LREC 2020, pages 1803-1813, Marseille, France.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Recurrent Attention Network on Memory for Aspect Sentiment Analysis", "authors": [ { "first": "Peng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhongqian", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2017, "venue": "Proc. of the EMNLP 2017", "volume": "", "issue": "", "pages": "452--461", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent Attention Network on Memory for Aspect Sentiment Analysis. In Proc. 
of the EMNLP 2017, pages 452-461, Copenhagen, Denmark.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "QuAC: Question Answering in Context", "authors": [ { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Wentau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proc. of the EMNLP 2018", "volume": "", "issue": "", "pages": "2174--2184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question Answering in Context. In Proc. of the EMNLP 2018, pages 2174-2184, Brussels, Belgium.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proc. of the NAACL-HLT 2019", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proc. of the NAACL-HLT 2019, pages 4171-4186, Minneapolis, MN, USA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Target-dependent Twitter Sentiment Classification", "authors": [ { "first": "Long", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2011, "venue": "Proc. of the ACL HLT", "volume": "", "issue": "", "pages": "151--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent Twitter Sentiment Classification. In Proc. of the ACL HLT, pages 151-160, Portland, Oregon, USA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "NRC-Canada-2014: Detecting Aspects and Sentiment in Customer Reviews", "authors": [ { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2014, "venue": "Proc. of the SemEval", "volume": "", "issue": "", "pages": "437--442", "other_ids": {}, "num": null, "urls": [], "raw_text": "Svetlana Kiritchenko, Xiaodan Zhu, Colin Cherry, and Saif Mohammad. 2014. NRC-Canada-2014: Detecting Aspects and Sentiment in Customer Reviews. In Proc. 
of the SemEval 2014, pages 437-442, Dublin, Ireland.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Transformation networks for target-oriented sentiment classification", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" }, { "first": "Bei", "middle": [], "last": "Shi", "suffix": "" } ], "year": 2018, "venue": "Proc. of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In Proc. of the ACL 2018.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Content Attention Model for Aspect Based Sentiment Analysis", "authors": [ { "first": "Qiao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Haibin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yifu", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Ziqi", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zufeng", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2018, "venue": "Proc. of the WWW 2018", "volume": "", "issue": "", "pages": "1023--1032", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiao Liu, Haibin Zhang, Yifu Zeng, Ziqi Huang, and Zufeng Wu. 2018. Content Attention Model for Aspect Based Sentiment Analysis. In Proc. of the WWW 2018, pages 1023-1032, Lyon, France.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Effective Approaches to Attention-based Neural Machine Translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proc. of the EMNLP 2015", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proc. of the EMNLP 2015, pages 1412-1421, Lisbon, Portugal.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Interactive Attention Networks for Aspect-Level Sentiment Classification", "authors": [ { "first": "Dehong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2017, "venue": "Proc. of the IJCAI 2017", "volume": "", "issue": "", "pages": "4068--4074", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive Attention Networks for Aspect-Level Sentiment Classification. In Proc. 
of the IJCAI 2017, pages 4068-4074, Melbourne, Australia.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Natural Language Decathlon: Multitask Learning as Question Answering", "authors": [ { "first": "Bryan", "middle": [], "last": "Mccann", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Shirish Keskar", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The Natural Language Decathlon: Multitask Learning as Question Answering. CoRR, abs/1806.08730.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SemEval-2014 Task 4: Aspect Based Sentiment Analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "John", "middle": [], "last": "Pavlopoulos", "suffix": "" }, { "first": "Harris", "middle": [], "last": "Papageorgiou", "suffix": "" } ], "year": 2014, "venue": "Proc. of the SemEval", "volume": "", "issue": "", "pages": "27--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 Task 4: Aspect Based Sentiment Analysis. In Proc. of the SemEval 2014, pages 27-35, Dublin, Ireland.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Know What You Don't Know: Unanswerable Questions for SQuAD", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Proc. of the ACL 2018", "volume": "", "issue": "", "pages": "784--789", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Don't Know: Unanswerable Questions for SQuAD. In Proc. of the ACL 2018, pages 784-789, Melbourne, Australia.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "SQuAD: 100,000+ Questions for Machine Comprehension of Text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proc. of the EMNLP 2016", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proc. 
of the EMNLP 2016, pages 2383-2392, Austin, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Targeted sentiment classification with attentional encoder network", "authors": [ { "first": "Youwei", "middle": [], "last": "Song", "suffix": "" }, { "first": "Jiahai", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Zhiyue", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yanghui", "middle": [], "last": "Rao", "suffix": "" } ], "year": 2019, "venue": "Proc of the ICANN 2019", "volume": "", "issue": "", "pages": "93--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Youwei Song, Jiahai Wang, Tao Jiang, Zhiyue Liu, and Yanghui Rao. 2019. Targeted sentiment classification with attentional encoder network. In Proc. of the ICANN 2019, pages 93-103.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Effective LSTMs for Target-Dependent Sentiment Classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Xiaocheng", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proc. of the COLING 2016", "volume": "", "issue": "", "pages": "3298--3307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective LSTMs for Target-Dependent Sentiment Classification. In Proc. of the COLING 2016, pages 3298-3307, Osaka, Japan.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Aspect Level Sentiment Classification with Deep Memory Network", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proc. of the EMNLP 2016", "volume": "", "issue": "", "pages": "214--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect Level Sentiment Classification with Deep Memory Network. In Proc. of the EMNLP 2016, pages 214-224, Austin, Texas, USA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Target-dependent twitter sentiment classification with rich automatic features", "authors": [ { "first": "Duy-Tin", "middle": [], "last": "Vo", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2015, "venue": "Proc. of the IJCAI 2015", "volume": "", "issue": "", "pages": "1347--1352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duy-Tin Vo and Yue Zhang. 2015. Target-dependent twitter sentiment classification with rich automatic features. In Proc. of the IJCAI 2015, pages 1347-1352, Buenos Aires, Argentina.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Proc. 
of the ICLR 2019", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE:A Multi-Task Benchmark and Analysis Plat- form for Natural Language Understanding. In Proc. of the ICLR 2019, New Orleans, LA, USA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Attention-based LSTM for Aspectlevel Sentiment Classification", "authors": [ { "first": "Yequan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Li", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2016, "venue": "Proc. of the EMNLP 2016", "volume": "", "issue": "", "pages": "606--615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for Aspect- level Sentiment Classification. In Proc. of the EMNLP 2016, pages 606-615, Austin, Texas.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis", "authors": [ { "first": "Hu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Philip", "middle": [ "S" ], "last": "Yu", "suffix": "" } ], "year": null, "venue": "Proc. of the NAACL-HLT 2019", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis. In Proc. of the NAACL-HLT 2019, Minneapolis, MN, USA.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Aspect Based Sentiment Analysis with Gated Convolutional Networks", "authors": [ { "first": "Wei", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proc. of the ACL 2018", "volume": "", "issue": "", "pages": "2514--2523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xue and Tao Li. 2018. Aspect Based Senti- ment Analysis with Gated Convolutional Networks. In Proc. of the ACL 2018, pages 2514-2523, Mel- bourne, Australia.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "LCF: A Local Context Focus Mechanism for Aspect-Based Sentiment Classification", "authors": [ { "first": "Biqing", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ruyang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xuli", "middle": [], "last": "Han", "suffix": "" } ], "year": 2019, "venue": "Applied Sciences", "volume": "9", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biqing Zeng, Heng Yang, Ruyang Xu, Wu Zhou, and Xuli Han. 2019. LCF: A Local Context Focus Mechanism for Aspect-Based Sentiment Classifica- tion. 
Applied Sciences, 9:3389.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Texygen: A Benchmarking Platform for Text Generation Models", "authors": [ { "first": "Yaoming", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Sidi", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Jiaxian", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Weinan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "Proc. of the SIGIR 2018", "volume": "", "issue": "", "pages": "1097--1100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A Benchmarking Platform for Text Generation Models. In Proc. of the SIGIR 2018, pages 1097-1100, Ann Arbor, MI, USA.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "The Framework of ABSA-Bench", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "Model Run-Time Comparison", "uris": null }, "TABREF0": { "type_str": "table", "num": null, "text": "1,978 training samples and 600 test samples for Restaurants and 1,462 training samples and 411 test samples for Laptop respectively.", "html": null, "content": "
Models  Restaurants  Laptop
CNN  60.25  57.75
LSTM  65.51  55.35
TD-LSTM  68.98  61.87
TC-LSTM  66.72  61.11
ATAE-LSTM  63.72  58.47
CABASC  68.02  62.94
IAN  65.12  60.90
RAM  66.76  59.73
MemNet  61.09  58.01
AEN-BERT  73.76  76.31
BERT-PT  76.96  75.08
BERT-SPC  73.03  72.63
LCF-BERT  81.74  79.59
" }, "TABREF1": { "type_str": "table", "num": null, "text": "", "html": null, "content": "
Performance Comparison (Macro F1 in %) on the Unified Environment
" }, "TABREF2": { "type_str": "table", "num": null, "text": "while the", "html": null, "content": "
Models  Params (10^6)  Memory (MB)
CNN  1.21  10.01
LSTM  7.23  35.61
TD-LSTM  1.44  12.41
TC-LSTM  2.16  14.11
ATAE-LSTM  2.53  16.61
CABASC  1.53  12.61
IAN  2.16  16.18
RAM  6.13  31.18
MemNet  0.36  7.82
AEN-BERT  112.93  451.84
BERT-PT  110  450.23
BERT-SPC  109.48  450.58
LCF-BERT  113.61  452.62
" }, "TABREF3": { "type_str": "table", "num": null, "text": "Mode Size Comparison", "html": null, "content": "" } } } }