{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:13:42.372581Z" }, "title": "BERT-Based Neural Collaborative Filtering and Fixed-Length Contiguous Tokens Explanation", "authors": [ { "first": "Reinald", "middle": [ "Adrian" ], "last": "Pugoy", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": { "settlement": "Tainan City", "country": "Taiwan" } }, "email": "rdpugoy@up.edu.ph" }, { "first": "Hung-Yu", "middle": [], "last": "Kao", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": { "settlement": "Tainan City", "country": "Taiwan" } }, "email": "hykao@mail.ncku.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a novel, accurate, and explainable recommender model (BENEFICT) that addresses two drawbacks that most reviewbased recommender systems face. First is their utilization of traditional word embeddings that could influence prediction performance due to their inability to model the word semantics' dynamic characteristic. Second is their black-box nature that makes the explanations behind every prediction obscure. Our model uniquely integrates three key elements: BERT, multilayer perceptron, and maximum subarray problem to derive contextualized review features, model user-item interactions, and generate explanations, respectively. Our experiments show that BENEFICT consistently outperforms other state-of-the-art models by an average improvement gain of nearly 7%. Based on the human judges' assessment, the BENEFICT-produced explanations can capture the essence of the customer's preference and help future customers make purchasing decisions. To the best of our knowledge, our model is one of the first recommender models to utilize BERT for neural collaborative filtering. Lei Zheng, Vahid Noroozi, and Philip S Yu. 2017. Joint deep modeling of users and items using reviews for recommendation. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 425-434. ACM.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We propose a novel, accurate, and explainable recommender model (BENEFICT) that addresses two drawbacks that most reviewbased recommender systems face. First is their utilization of traditional word embeddings that could influence prediction performance due to their inability to model the word semantics' dynamic characteristic. Second is their black-box nature that makes the explanations behind every prediction obscure. Our model uniquely integrates three key elements: BERT, multilayer perceptron, and maximum subarray problem to derive contextualized review features, model user-item interactions, and generate explanations, respectively. Our experiments show that BENEFICT consistently outperforms other state-of-the-art models by an average improvement gain of nearly 7%. Based on the human judges' assessment, the BENEFICT-produced explanations can capture the essence of the customer's preference and help future customers make purchasing decisions. To the best of our knowledge, our model is one of the first recommender models to utilize BERT for neural collaborative filtering. Lei Zheng, Vahid Noroozi, and Philip S Yu. 2017. Joint deep modeling of users and items using reviews for recommendation. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 425-434. 
ACM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recommender systems research, collaborative filtering (CF) is the dominant state-of-the-art recommendation model, which primarily focuses on learning accurate representations of users (user preferences) and items (item characteristics) Tay et al., 2018) . The earliest recommender models learned these representations based on user-given numeric ratings that each item received (Mnih and Salakhutdinov, 2008; Koren et al., 2009) . However, ratings, which are values on a single discrete scale, oversimplify user preferences and item characteristics (Musto et al., 2017) . The large amount of users and items in a typical online platform consequently results in a highly sparse rating matrix, making it hard to learn accurate representations (Zheng et al., 2017) .", "cite_spans": [ { "start": 239, "end": 256, "text": "Tay et al., 2018)", "ref_id": "BIBREF24" }, { "start": 381, "end": 411, "text": "(Mnih and Salakhutdinov, 2008;", "ref_id": "BIBREF16" }, { "start": 412, "end": 431, "text": "Koren et al., 2009)", "ref_id": "BIBREF12" }, { "start": 552, "end": 572, "text": "(Musto et al., 2017)", "ref_id": "BIBREF17" }, { "start": 744, "end": 764, "text": "(Zheng et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To alleviate these issues, review texts have instead been utilized to model such representations for subsequent recommendation and rating prediction, and this approach has attracted growing attention in research (Catherine and Cohen, 2017; Zheng et al., 2017) . The main advantage of reviews as the source of features is that they can cover user opinions' multi-faceted substance. Because users can explain their reasons underlying their given ratings, reviews contain a large amount of latent information that is both rich and valuable, and that cannot be otherwise obtained from ratings alone Wang et al., 2019) . Recently, models that incorporate user reviews have yielded state-of-the-art performances (Zheng et al., 2017; . These approaches learn user and item representations by using traditional word embeddings (e.g., word2vec, GloVe) to map each word in the review into its corresponding vector. The review is transformed into an embedded matrix before being fed to a convolutional neural network (CNN) . CNNs have been shown to effectively model reviews and have illustrated outstanding results in numerous natural language processing tasks (Wang et al., 2018a) .", "cite_spans": [ { "start": 212, "end": 239, "text": "(Catherine and Cohen, 2017;", "ref_id": "BIBREF3" }, { "start": 240, "end": 259, "text": "Zheng et al., 2017)", "ref_id": null }, { "start": 595, "end": 613, "text": "Wang et al., 2019)", "ref_id": "BIBREF26" }, { "start": 706, "end": 726, "text": "(Zheng et al., 2017;", "ref_id": null }, { "start": 1151, "end": 1171, "text": "(Wang et al., 2018a)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Nevertheless, there are drawbacks that most review-based recommender models experience. First is the utilization of traditional or mainstream word embeddings to learn review features. Their static nature is a hindrance, as each word sense is associated with the same embedding regardless of the context. In other words, such embeddings cannot identify the dynamic nature of each word's semantics. 
For review-based recommenders, this could be an issue in modeling users and items, which could, in turn, affect recommendation performance (Pilehvar and Camacho-Collados, 2019) . Also, once a CNN is fed with the matrix of word embeddings, the word frequency information of contextual fea-tures, said to be crucial for modeling reviews, will be lost (Wang et al., 2018a) .", "cite_spans": [ { "start": 536, "end": 573, "text": "(Pilehvar and Camacho-Collados, 2019)", "ref_id": "BIBREF19" }, { "start": 746, "end": 766, "text": "(Wang et al., 2018a)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another drawback is the inherent black-box nature of deep learning-based models that makes the explanations behind every prediction obscure (Ribeiro et al., 2016; Wang et al., 2018b) . The complex architecture of hidden layers has opaqued the models' internal decision-making processes (Peake and Wang, 2018) . Providing explanations could help persuade users to make decisions and develop trust in a recommender system (Zhang et al., 2014; Ribeiro et al., 2016; Costa et al., 2018; Peake and Wang, 2018) . However, this leads us to a dilemma, i.e., a trade-off between accuracy and explainability. Usually, the most accurate models are inherently complicated, non-transparent, and unexplainable . The same is also true for explainable and straightforward methods that sacrifice accuracy. Formulating models that are both explainable and accurate is a challenging yet critical research agenda for the machine learning community to ensure that we derive benefits from machine learning fairly and responsibly (Peake and Wang, 2018) .", "cite_spans": [ { "start": 140, "end": 162, "text": "(Ribeiro et al., 2016;", "ref_id": "BIBREF22" }, { "start": 163, "end": 182, "text": "Wang et al., 2018b)", "ref_id": "BIBREF27" }, { "start": 286, "end": 308, "text": "(Peake and Wang, 2018)", "ref_id": "BIBREF18" }, { "start": 420, "end": 440, "text": "(Zhang et al., 2014;", "ref_id": "BIBREF33" }, { "start": 441, "end": 462, "text": "Ribeiro et al., 2016;", "ref_id": "BIBREF22" }, { "start": 463, "end": 482, "text": "Costa et al., 2018;", "ref_id": "BIBREF5" }, { "start": 483, "end": 504, "text": "Peake and Wang, 2018)", "ref_id": "BIBREF18" }, { "start": 1007, "end": 1029, "text": "(Peake and Wang, 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a unique model: BERT-Based Neural Collaborative Filtering and Fixed-Length Contiguous Tokens Explanation (BENEFICT). Our model learns user and item representations simultaneously using two parallel networks. To address the first drawback, we incorporate BERT as a key component in each parallel network. BERT affords us to extract more meaningful, contextualized features adaptable to arbitrary contexts; such features cannot be derived from mainstream word embeddings (Pilehvar and Camacho-Collados, 2019; Zakbik et al., 2019) . BERT can also retain the word frequency information that makes CNN an unnecessary component of our model. 
Once user and item representations are learned, they are concatenated together in a shared hidden space before being finally fed to an optimal stack of multilayer perceptron (MLP) layers that serve as BENEFICT's interaction function.", "cite_spans": [ { "start": 495, "end": 532, "text": "(Pilehvar and Camacho-Collados, 2019;", "ref_id": "BIBREF19" }, { "start": 533, "end": 553, "text": "Zakbik et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address the second drawback, we introduce a novel component in our model that integrates BERT's self-attention and an implementation of the fixed-length maximum subarray problem (MSP), which is considered to be a classic computer science problem. BERT applies self-attention in each encoder layer that consequently produces selfattention weights for each token. These are passed to the successive encoder layers through feedforward networks. We argue that these self-attention weights can be the basis for explaining rating predictions. Based on this premise, MSP then selects a segment or subarray of consecutive tokens that has the maximum possible sum of self-attention weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work aims to fill the research gap by implementing a solution that is both accurate and explainable. We propose a novel model that uniquely integrates three vital elements, i.e., BERT, MLP, and MSP, to derive review features, model useritem interactions, and produce possible explanations. To the best of our knowledge, BENEFICT is one of the first review-based recommender models to utilize BERT for neural CF. Also, to the extent of our knowledge, BENEFICT is one of the first models to repurpose a portion of the Neural Collaborative Filtering (NCF) framework as the user-item interaction function for review-based, explicit CF. Moreover, our experiments have demonstrated that our model achieves better rating prediction results than the other stateof-the-art recommender models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contributions", "sec_num": "1.1" }, { "text": "Designing a CF model involves two crucial steps: learning user and item representations and modeling user-item interactions based on those representations (He et al., 2018) . Before the advancements provided by neural networks, matrix factorization (MF) was the dominant model representing users and items as vectors of latent factors (called embeddings) and models user-item interactions using the inner product operation. The said operation leads to poor performance because it is sub-optimal for learning rich yet complicated patterns from realworld data (He et al., 2018) . To address this scenario, neural networks (NN) have been integrated into recommender architectures. One of the initial works that have laid the foundation in employing NN for CF is NCF . Their framework, originally implemented for rating-based, implicit CF, learns non-linear interactions between users and items by employing MLP layers as their interaction function, granting it a high degree of non-linearity and flexibility to learn meaningful interactions. 
Two common designs have emerged when it comes to leveraging MLP layers: placing an MLP above either the concatenated user-item embeddings Bai et al., 2017) or the element-wise product of user and item embeddings .", "cite_spans": [ { "start": 155, "end": 172, "text": "(He et al., 2018)", "ref_id": "BIBREF10" }, { "start": 558, "end": 575, "text": "(He et al., 2018)", "ref_id": "BIBREF10" }, { "start": 1177, "end": 1194, "text": "Bai et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Concepts", "sec_num": "2" }, { "text": "As far as rating prediction is concerned, two notable recommender models have yielded significant state-of-the-art prediction performances. DeepCoNN is the first deep model that represents users and items from reviews jointly (Zheng et al., 2017) . It consists of two parallel, CNN-powered networks. One network learns user behavior by examining all reviews that he has written, and the other network models item properties by exploring all reviews that it has received. A shared layer connects these two networks, and factorization machines capture user-item interactions. The second model is NARRE, which shares certain similarities with DeepCoNN. NARRE is also composed of two parallel networks for user and item modeling with respective CNNs to process reviews . Rather than concatenating reviews to one long sequence the same way that DeepCoNN does, their model introduces an attention mechanism that learns review-level usefulness in the form of attention weights. These weights are integrated into user and item representations to enhance the embedding quality and the subsequent prediction accuracy. Both DeepCoNN and NARRE employ traditional word embeddings.", "cite_spans": [ { "start": 226, "end": 246, "text": "(Zheng et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Concepts", "sec_num": "2" }, { "text": "Other relevant studies have claimed to provide explanations for recommendations such as EFM (Zhang et al., 2014 ), sCVR (Ren et al., 2017 , and TriRank (He et al., 2015) . These models initially extract aspects and opinions by performing phraselevel sentiment analysis on reviews. Afterward, they generate feature-level explanations according to product features that correspond to user interests . However, these models have some limitations; manual preprocessing is required for sentiment analysis and feature extraction, and the explanations are simple extraction of words or phrases from the review text (Zhang et al., 2014; Ren et al., 2017) . This also has the unintended effect of distorting the reviews' original meaning (Ribeiro et al., 2016; . 
Another limitation is that textual similarity is solely based on lexical similarity; this implies that semantic meaning is ignored (Zheng et al., 2017; .", "cite_spans": [ { "start": 92, "end": 111, "text": "(Zhang et al., 2014", "ref_id": "BIBREF33" }, { "start": 112, "end": 137, "text": "), sCVR (Ren et al., 2017", "ref_id": null }, { "start": 152, "end": 169, "text": "(He et al., 2015)", "ref_id": "BIBREF9" }, { "start": 608, "end": 628, "text": "(Zhang et al., 2014;", "ref_id": "BIBREF33" }, { "start": 629, "end": 646, "text": "Ren et al., 2017)", "ref_id": "BIBREF21" }, { "start": 729, "end": 751, "text": "(Ribeiro et al., 2016;", "ref_id": "BIBREF22" }, { "start": 885, "end": 905, "text": "(Zheng et al., 2017;", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Concepts", "sec_num": "2" }, { "text": "BENEFICT, as illustrated in Figure 1 , has two parallel networks to model user and item embeddings that both utilize BERT. Hereafter, we will only illustrate the user modeling process because the same is also observed for item modeling, with their inputs as the only difference.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 36, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Given an input set of user-written reviews", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Layer and BERT Encoding", "sec_num": "3.1" }, { "text": "V u = {V u1 , V u2 , ..., V uj }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Layer and BERT Encoding", "sec_num": "3.1" }, { "text": "where j is the total number of reviews from user u, V u is fed to a pre-trained BERT BASE model to encode the reviews and obtain their respective contextualized representations. BERT BASE consists of 12 encoder layers and 12 self-attention heads (Devlin et al., 2018) . It also has a hidden size of 768, which we will directly utilize later as the fixed embedding dimension. Furthermore, BERT requires every review to follow a particular format. For this purpose, the model applies WordPiece tokenization to the review's input sequence (Wu et al., 2016) . The format is comprised of token embeddings, segment embeddings, position embeddings, and padding masks. Because rating prediction is not a sentence pairing task, BERT takes each review as a single segment of contiguous text. Typically, BERT supports a maximum sequence length of 512 tokens. In this study, we use a shorter length of 256 tokens to save substantial memory. As such, each input sequence is truncated or padded accordingly.", "cite_spans": [ { "start": 246, "end": 267, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 536, "end": 553, "text": "(Wu et al., 2016)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Input Layer and BERT Encoding", "sec_num": "3.1" }, { "text": "The newly-formatted input sequence then passes through a stack of Transformer encoders to obtain the contextualized representations of reviews:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Layer and BERT Encoding", "sec_num": "3.1" }, { "text": "h [CLS],u = {h [CLS],u1 , h [CLS],u2 , ..., h [CLS],uj }, where h [CLS],u \u2208 R j\u00d7768", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Layer and BERT Encoding", "sec_num": "3.1" }, { "text": ". 
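As a concrete illustration of this encoding step, the following is a minimal sketch, assuming the HuggingFace transformers API and a pre-trained bert-base-uncased checkpoint; the review strings and the helper name encode_reviews are placeholders rather than the reference implementation.

```python
# Minimal sketch (illustrative, not the authors' code): encode one user's reviews
# with pre-trained BERT-base and keep the last-layer [CLS] hidden state per review.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

def encode_reviews(reviews, max_len=256):
    """Return the [CLS] hidden states of the final encoder layer, shape (j, 768)."""
    batch = tokenizer(reviews, padding="max_length", truncation=True,
                      max_length=max_len, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch)
    # last_hidden_state: (j, max_len, 768); position 0 holds the [CLS] token
    return out.last_hidden_state[:, 0, :]

h_cls_u = encode_reviews(["great toy, my kids love it",
                          "arrived late but works fine"])  # hypothetical reviews
print(h_cls_u.shape)  # torch.Size([2, 768])
```

Each review thus yields one 768-dimensional contextualized vector.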
We utilize the hidden state of the special [CLS] token to serve as the review's aggregate sequence representation or pooled contextualized embedding (Devlin et al., 2018) . In theory, any encoder layer may be selected to provide the hidden state of [CLS] as the review's representation. We select the twelfth layer for our approach; prior studies have illustrated that its predictive capability is the best among the other layers (Sun et al., 2019) .", "cite_spans": [ { "start": 151, "end": 172, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 432, "end": 450, "text": "(Sun et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Input Layer and BERT Encoding", "sec_num": "3.1" }, { "text": "Perceptron, and Prediction", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Generation, Multilayer", "sec_num": "3.2" }, { "text": "The user embedding (user feature vector) P u \u2208 R 1\u00d7768 is obtained by calculating the average of the [CLS] representations of the reviews written by user u, given by the formula below. Similarly, the item embedding (item feature vector) Q i \u2208 R 1\u00d7768 can be generated from the item modeling network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Generation, Multilayer", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P u = 1 j j t=1 h [CLS],ut", "eq_num": "(1)" } ], "section": "Embedding Generation, Multilayer", "sec_num": "3.2" }, { "text": "Furthermore, the purpose of incorporating an MLP is to learn the interactions between user and item representations and to model the CF effect, which will not be properly covered by solely using vector concatenation or element-wise product . Adding a certain number of hidden layers on top of the concatenated user-item embedding provides further flexibility and non-linearity. Formally, the MLP component of BENEFICT is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Generation, Multilayer", "sec_num": "3.2" }, { "text": "h 0 = P u , Q i T h 1 = ReLU (W 1 h 0 + b 1 ) h L = ReLU (W L h L\u22121 + b L ) R ui = W L+1 h L + b L+1 (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Generation, Multilayer", "sec_num": "3.2" }, { "text": "where h 0 \u2208 R 1\u00d71536 is the concatenated user-item embedding in the shared hidden space; h L represents the L-th MLP layer; W L and b L pertain to the weight matrix and bias vector of the L-th layer, respectively; andR ui denotes the predicted rating that user u gives to item i. For the activation function of the MLP layers, we choose the rectified linear unit (ReLU), which generally yields better performance than other activation functions such as tanh and sigmoid (Glorot et al., 2011; He et al., 2016 .", "cite_spans": [ { "start": 470, "end": 491, "text": "(Glorot et al., 2011;", "ref_id": "BIBREF7" }, { "start": 492, "end": 507, "text": "He et al., 2016", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Embedding Generation, Multilayer", "sec_num": "3.2" }, { "text": "Concerning the structure, our model's MLP component follows a tower pattern where the bottom layer is the widest, and every subsequent top layer has a smaller number of neurons. 
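A minimal PyTorch sketch of Eqs. (1)-(2) follows, assuming per-review [CLS] vectors from the encoding step and the tower pattern just described (with the layer widths detailed next); the class and helper names are illustrative, not the reference implementation.

```python
# Sketch of embedding generation (Eq. 1) and the MLP interaction function (Eq. 2).
import torch
import torch.nn as nn

class InteractionMLP(nn.Module):
    """Concatenated user-item embedding -> ReLU tower -> predicted rating."""
    def __init__(self, dim=768):
        super().__init__()
        self.tower = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),        # 1536 -> 768
            nn.Linear(dim, dim // 2), nn.ReLU(),       # 768  -> 384
            nn.Linear(dim // 2, dim // 4), nn.ReLU(),  # 384  -> 192
            nn.Linear(dim // 4, 1),                    # prediction layer
        )

    def forward(self, p_u, q_i):
        h0 = torch.cat([p_u, q_i], dim=-1)   # concatenation in the shared hidden space
        return self.tower(h0).squeeze(-1)    # predicted rating R_ui

def mean_pool(h_cls):
    """Eq. (1): average the per-review [CLS] vectors, shape (j, 768) -> (768,)."""
    return h_cls.mean(dim=0)

h_cls_u = torch.randn(5, 768)   # placeholder: [CLS] vectors of 5 reviews by user u
h_cls_i = torch.randn(7, 768)   # placeholder: [CLS] vectors of 7 reviews of item i
p_u, q_i = mean_pool(h_cls_u), mean_pool(h_cls_i)
r_hat = InteractionMLP()(p_u.unsqueeze(0), q_i.unsqueeze(0))
```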
The rationale behind this is that the MLP can learn more abstractive data features by utilizing fewer hidden units for the top layers (He et al., 2016) . In our implementation for a three-layered MLP, the number of neurons from the bottom layer to the top layer follows this pattern: 1536 (concatenated embedding) \u2192 768 (MLP layer 1) \u2192 384 (MLP layer 2) \u2192 192 (MLP layer 3) \u2192 1 (prediction layer)", "cite_spans": [ { "start": 312, "end": 329, "text": "(He et al., 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Embedding Generation, Multilayer", "sec_num": "3.2" }, { "text": "In training the model, the loss function is the mean squared error (MSE) given by this formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M SE = 1 |T r| u,i\u2208T r (R ui \u2212R ui ) 2", "eq_num": "(3)" } ], "section": "Learning", "sec_num": "3.3" }, { "text": "where T r refers to the training samples or instances, and R ui is the ground-truth rating given by user u to item i. Moreover, we employ the Adaptive Moment Estimation with weight decay or AdamW (Loshchilov and Hutter, 2018) to optimize the loss function. Based on the original Adam optimizer, AdamW also leverages the power of adaptive learning rates during training. This makes the selection of a proper learning rate less cumbersome that consequently leads to faster convergence . Unlike Adam, AdamW implements a weight decay fix, a regularization technique that prevents weights from growing too huge and is proven to yield better training loss and generalization error (Loshchilov and Hutter, 2018) .", "cite_spans": [ { "start": 196, "end": 225, "text": "(Loshchilov and Hutter, 2018)", "ref_id": "BIBREF13" }, { "start": 675, "end": 704, "text": "(Loshchilov and Hutter, 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "3.3" }, { "text": "The stack of BERT's Transformer encoders also provides sets of self-attention weights that a token gives to every token found in the review text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Generation", "sec_num": "3.4" }, { "text": "We are particularly interested in the attention that [CLS] gives to each review token using the twelfth layer's multiple attention heads. Given an input sequence of tokens F uj produced by WordPiece tokenization from review V uj , a set of attention weights is represented as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Generation", "sec_num": "3.4" }, { "text": "\u03b1 [CLS],uj = {\u03b1 k 1 (F uj ), \u03b1 k 2 (F uj ), ..., \u03b1 k g (F uj )} (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Generation", "sec_num": "3.4" }, { "text": "where k is the specific attention head in a particular encoder layer, and \u03b1 k g is the attention that [CLS] gives to the g-th WordPiece token over the input sequence F uj . There are 12 attention heads in an encoder layer which translate to 12 different attention weights that each token receives from the [CLS] token. 
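A minimal sketch of the selection procedure, anticipating Eqs. (5)-(8) below: the per-head weights that [CLS] assigns to each token are summed, and the length-N contiguous window with the largest total is returned. Helper names and tensors are placeholders, and indices here are 0-based.

```python
# Illustrative sketch of the explanation module: compress the 12 per-head
# [CLS]-to-token attention weights, then pick the best fixed-length span.
import torch

def compress_heads(cls_attention):
    """cls_attention: (12, seq_len) attention that [CLS] gives each token in the
    last encoder layer, one row per head.  Returns ComAtt, shape (seq_len,)."""
    return cls_attention.sum(dim=0)

def best_fixed_length_span(com_att, n):
    """Fixed-length maximum subarray: indices (x, y) maximizing the sum of
    com_att[x:y+1] subject to y - x + 1 == n."""
    window_sums = com_att.unfold(0, n, 1).sum(dim=1)  # all contiguous sums of length n
    x = int(window_sums.argmax())
    return x, x + n - 1

# With HuggingFace outputs (output_attentions=True), attentions[-1] has shape
# (batch, heads, seq, seq), and query index 0 is [CLS]; here we use a placeholder.
cls_attention = torch.rand(12, 256)
com_att = compress_heads(cls_attention)
x, y = best_fixed_length_span(com_att, n=20)
# The explanation is the concatenation of WordPiece tokens x..y.
```

Because the span length is fixed at N, a simple sliding window solves this restricted maximum subarray problem in linear time.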
For a given token g, the following formula is applied to compress the weights into a single value:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Generation", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ComAtt g = 12 k=1 \u03b1 k g (F uj )", "eq_num": "(5)" } ], "section": "Explanation Generation", "sec_num": "3.4" }, { "text": "We then reformulate the task of generating explanations as a fixed-length MSP. In its vanilla sense, MSP selects a segment of consecutive array elements (i.e., a contiguous subarray of tokens) that has the maximum possible sum over all other segments (Bae, 2007) . In this paper, we introduce constraint N to the MSP; N is a fixed value that pertains to the length of the explanation. Formally, the set of compressed attention weights per review is given by the following array: The goal is to find token indices x and y that maximize:", "cite_spans": [ { "start": 251, "end": 262, "text": "(Bae, 2007)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Explanation Generation", "sec_num": "3.4" }, { "text": "A uj = [ComAtt 1 , ComAtt 2 , ..., ComAtt g ] (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Generation", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y t=x A uj [t]", "eq_num": "(7)" } ], "section": "Explanation Generation", "sec_num": "3.4" }, { "text": "This is subject to the requirements that 1 \u2264 x < y \u2264 256 and (y \u2212 x) + 1 = N . Finally, the generated explanation for review V uj is represented as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explanation Generation", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "EXP uj = Concat(F uj,x , F uj,x+1 , ..., F uj,y )", "eq_num": "(8)" } ], "section": "Explanation Generation", "sec_num": "3.4" }, { "text": "In this section, we perform relevant experiments intending to answer the following research questions: RQ1: Does BENEFICT outperform other stateof-the-art recommender models?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "RQ2: What is the optimal configuration for learning user-item interactions? RQ3: Can our model produce explanations acceptable to humans? Table 1 summarizes the four public datasets from different domains used in our study. Two of these datasets are Amazon 5-core 1 : Toys and Games, which consists of nearly 168 thousand reviews, and Digital Music, which contains about 65 thousand reviews (McAuley et al., 2015) . These datasets are said to be 5-core wherein users and items have five reviews each. We also utilize Yelp 2 , a large-scale dataset for restaurant feedback and ratings. We both employ its original, sparse version and its 5core, dense version with about 160 thousand and 230 thousand reviews, respectively. The ratings in all datasets are in the range of [1, 5] . We randomly split each dataset of user-item pairs into training (80% 0.001. Due to memory limitations, the batch size is fixed at 32. We select the model configuration (i.e., a grid point) with the best root mean square error (RMSE) on the validation set. 
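A compressed sketch of this training and selection protocol (MSE loss of Eq. (3), AdamW at a learning rate of 0.001, mini-batches of 32, grid point chosen by validation RMSE) is given below; the data loaders, model builder, and configuration grid are placeholders rather than the actual experimental code.

```python
# Illustrative training and grid-selection loop under the stated assumptions.
import math
import torch
import torch.nn as nn

def train_and_select(configs, build_model, train_loader, val_loader, epochs=5):
    best_rmse, best_model = math.inf, None
    for cfg in configs:                                  # candidate grid points
        model = build_model(cfg)
        optim = torch.optim.AdamW(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()                           # Eq. (3)
        for _ in range(epochs):
            for p_u, q_i, rating in train_loader:        # mini-batches of 32
                optim.zero_grad()
                loss = loss_fn(model(p_u, q_i), rating)
                loss.backward()
                optim.step()
        # keep the configuration with the best validation RMSE
        se, count = 0.0, 0
        with torch.no_grad():
            for p_u, q_i, rating in val_loader:
                se += torch.sum((model(p_u, q_i) - rating) ** 2).item()
                count += rating.numel()
        rmse = math.sqrt(se / count)
        if rmse < best_rmse:
            best_rmse, best_model = rmse, model
    return best_model, best_rmse
```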
We use the test set for evaluating the model's final performance.", "cite_spans": [ { "start": 391, "end": 413, "text": "(McAuley et al., 2015)", "ref_id": "BIBREF14" }, { "start": 770, "end": 773, "text": "[1,", "ref_id": null }, { "start": 774, "end": 776, "text": "5]", "ref_id": null } ], "ref_spans": [ { "start": 138, "end": 145, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "To validate the effectiveness of BENEFICT, we select two other state-of-the-art models as baselines:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and Evaluation Metric", "sec_num": "4.2" }, { "text": "\u2022 DeepCoNN (Zheng et al., 2017) : It is a deep collaborative neural network model based on two parallel CNNs to learn user and item feature vectors in a joint manner.", "cite_spans": [ { "start": 2, "end": 31, "text": "DeepCoNN (Zheng et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baselines and Evaluation Metric", "sec_num": "4.2" }, { "text": "\u2022 NARRE : Similar to Deep-CoNN, it is a neural attentional regression model that integrates two parallel CNNs and an attention mechanism to model latent features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and Evaluation Metric", "sec_num": "4.2" }, { "text": "Afterward, we calculate the RMSE, a widely used metric for rating prediction, to evaluate the models' respective performances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and Evaluation Metric", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "RM SE = 1 |T s| u,i\u2208T s (R ui \u2212R ui ) 2", "eq_num": "(9)" } ], "section": "Baselines and Evaluation Metric", "sec_num": "4.2" }, { "text": "In the formula, T s denotes the test samples or instances of user-item pairs. Table 2 reports the RMSE values of BENEFICT and the two baselines, with the last row (represented by \u2206BENEFICT) indicating the improvement gained by our model compared with the better baseline. The results show that BENEFICT consistently outperforms the baselines across all datasets; our model has an average RMSE score of 0.9206, as opposed to 1.0065 and 0.9979 for DeepCoNN and NARRE, respectively. On average, this has resulted in the improvement gained by BENEFICT of nearly 7%. These results validate our hypothesis that using BERT-derived embeddings and representations, considered to be more semantically meaningful than their traditional counterparts, can significantly improve rating prediction accuracy and that BERT can likewise offset the limitations of mainstream word embeddings and CNN. Moreover, the rationale of employing two versions of Yelp is to compare the recommender models' performances on both dense and sparse datasets. As illustrated in the fourth and fifth columns of Table 2 , both the RMSE values of Deep-CoNN and NARRE worsen when they attempt to perform predictions on the original, sparse Yelp. For DeepCoNN, from the dense version's RMSE of 1.0311, it increases to 1.2006. The same is also true for NARRE, whose RMSE increases to 1.1770 from 1.0312. Interestingly, BENEFICT produces an entirely different observation; its RMSE decreases to 0.9764 from 0.9963. Our model's improvement is 17.04%, greater than \u2206BENEFICT for the three other datasets. 
We attribute these findings to the greater amount of information in Yelp-Sparse that can be successfully utilized by BENEFICT for modeling reviews. It should be noted that Yelp-Sparse has nearly 230 thousand reviews, while Yelp-Dense has almost 160 thousand. In conclusion, these results provide evidence that our model is best equipped and capable of performing predictions regardless of a dataset's inherent sparsity or density.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1075, "end": 1082, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Baselines and Evaluation Metric", "sec_num": "4.2" }, { "text": "BENEFICT employs an MLP above the concatenated user-item embeddings in the shared hidden space. We compare it against another variant of our model, which utilizes an MLP on top of the element-wise product of user and item representations. We examine their performances using a different number of hidden layers [0, 3] . It should be noted that an MLP with zero layers pertains to the shared hidden space's direct projection to the prediction layer. Figure 2 demonstrates that BENEFICT's utilization of concatenation exceeds the element-wise product by a significant margin across all MLP layers and datasets. This result verifies the positive effect of feeding the concatenated features to the MLP to learn user-item interactions. Furthermore, consistent with the findings of , stacking more layers is indeed beneficial and effective for neural explicit collaborative filtering as well. There appears to be a trend: increasing the hidden layers implies decreasing (and better) RMSE values. Simply projecting the shared hidden space to the prediction layer is insufficient and weak, as evidenced by its relatively high RMSE scores. On the contrary, using three MLP layers has generally resulted in the lowest RMSE scores. The only exception is with the Digital Music dataset wherein utilizing two layers produces the best RMSE value. Furthermore, even though the element-wise product is more inferior than concatenation, the former also benefits from increasing the MLP layers. In summary, all these findings validate the necessity of incorporating the MLP as an integral part of the whole BENEFICT model.", "cite_spans": [ { "start": 311, "end": 314, "text": "[0,", "ref_id": null }, { "start": 315, "end": 317, "text": "3]", "ref_id": null } ], "ref_spans": [ { "start": 449, "end": 457, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Optimal Interaction Function", "sec_num": "4.3.1" }, { "text": "To validate the helpfulness of BENEFICTproduced explanations in real life, we also generate possible explanations using TF-IDF and Tex-tRank. Applying TF-IDF determines which words are more favorable or relevant in a corpus of documents (Rajaraman and Ullman, 2011) . To make the assessment fair, we only select words with the top N TF-IDF scores, where the value of N is the same as the constraint introduced in BENEFICT's ", "cite_spans": [ { "start": 237, "end": 265, "text": "(Rajaraman and Ullman, 2011)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Human Assessment of Explanations", "sec_num": "5.1" }, { "text": "US2: 4 Table 3 : Sample explanations (highlighted in yellow) generated by TF-IDF, TextRank, and BENEFICT from a specific user review. 
The second column includes the average judge-given US1 and US2 scores.", "cite_spans": [], "ref_spans": [ { "start": 7, "end": 14, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "US1: 4", "sec_num": null }, { "text": "explanation generation module. On the other hand, TextRank is a fully unsupervised, graph-based extractive summarization algorithm (Mihalcea and Tarau, 2004) . Its goal is to rank entire sentences that comprise a given review text. Also, to make the assessment consistent, we only take the top sentence with a length of less than or equal to N for each review. We then ask two human judges to evaluate a total of 90 explanations, 30 explanations each for TF-IDF, TextRank, and BENEFICT, with N = 20. We instruct them to score each explanation based on the following usefulness statements (US) on a five-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree).", "cite_spans": [ { "start": 131, "end": 157, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "US1: 4", "sec_num": null }, { "text": "US1: The explanation captures the essence of the customer's preference (like or dislike) in the review. US2: The explanation is helpful for you or any customer to decide to purchase that particular item in the future. We further examine the human assessment results by determining the strength of agreement between the two judges. This is done by calculating the Quadratic Weighted Kappa (QWK) statistic. It measures inter-rater agreement and is suitable for ordinal or ranked variables. The Kappa metric lies on a scale of -1 to 1, where 1 implies perfect agreement, 0 indicates random agreement, and negative values mean that the agreement is less than chance, such as disagreement. Specifically, a coefficient of 0.01-0.20 indicates slight agreement, 0.21-0.40 implies fair agreement, 0.41-0.60 refers to moderate agreement, 0.61-0.80 pertains to substantial agreement, and 0.81-0.99 denotes nearly perfect agreement (Borromeo and Toyama, 2015) . Figure 3 summarizes the judges' given scores on their assessment of explanations based on US1. They find that nearly 58% of BENEFICT-derived explanations capture the essence of the customer's preference (i.e., those with usefulness scores of either four or five). It is followed by TextRank, with almost 52% of its produced explanations, and TF-IDF, with only 1.67% of its generated explanations. With respect to the inter-rater agreement on US1 in Table 5 , the judges express fair agreement on BENEFICT (having a Kappa value of 0.2019). On the other hand, they slightly agree with each other on both TF-IDF and TextRank, with QWK values of 0.1924 and 0.0625, respectively. As Table 4 indicates, our model has a mean usefulness score of 3.45, better than TextRank (3.26) and TF-IDF (2.05). Figure 4 shows the judges' assessment scores based on US2. Interestingly, the judges express that nearly 63% of the explanations generated by BENEFICT and TextRank are helpful for any future customers. Upon including the low-scoring explanations, BENEFICT is still better than Tex-tRank; the former has a mean usefulness score of 3.61 against the latter's 3.40. Furthermore, the judges moderately agree as far as our model's generated explanation is concerned (with a Kappa value of 0.4705). At the same time, they express less than chance agreement for TextRank (obtaining a Kappa value of -0.0073). 
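For reference, quadratic weighted kappa values such as those reported here can be computed with scikit-learn; the scores below are hypothetical, not the judges' actual ratings.

```python
# Sketch: QWK agreement between two raters' 1-5 usefulness scores.
from sklearn.metrics import cohen_kappa_score

judge_1 = [4, 5, 3, 4, 2, 5, 4, 3]   # hypothetical US scores from judge 1
judge_2 = [4, 4, 3, 5, 2, 5, 3, 3]   # hypothetical US scores from judge 2
qwk = cohen_kappa_score(judge_1, judge_2, weights="quadratic")
print(round(qwk, 4))
```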
This statement means that the large majority of TextRank's high assessment scores come from one judge alone. Lastly, the judges observe that only 8.33% of the explanations from TF-IDF are helpful, with a mean usefulness score of 2.18 and a QWK value of 0.1921, which implies their slight agreement.", "cite_spans": [ { "start": 920, "end": 947, "text": "(Borromeo and Toyama, 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 950, "end": 958, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 1399, "end": 1406, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 1628, "end": 1635, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 1741, "end": 1749, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "US1: 4", "sec_num": null }, { "text": "These results indicate that BENEFICT's explanation generation module can effectively provide useful explanations that capture the essence of the customer's preference and help future customers make purchasing decisions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Assessment", "sec_num": "5.2.1" }, { "text": "Given an example, we highlight words that serve as the explanations in Table 3 . The explanation produced by TF-IDF can capture a few important words, such as unashamed and undemanding. However, due to its bag-of-words property, it includes several other unnecessary words that may not contribute to the explanation. Therefore, the judges do not find it to be helpful. Next, the TextRank-generated explanation also does not appear to capture the essence of the user's like or dislike. It does not seem useful for customers to decide whether to purchase that item in the future. Still, the judges give TextRank higher usefulness scores than TF-IDF, even though the latter captures more adjectives and important words. We attribute this to human's natural bias toward less noisy sentences that express complete thoughts. Lastly, the BENEFICT-produced explanation con- veys a near-complete thought; take note that it is not a sentence but a segment of contiguous tokens that maximize the sum of attention weights. This enables BENEFICT to capture important phrases such as like this album and the best of all. Hence, the judges agree that it captures the essence of the customer's preference and helps customers make purchasing decisions in the future.", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 78, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Specific Example Comparison", "sec_num": "5.2.2" }, { "text": "We have successfully implemented a novel recommender model that uniquely integrates BERT, MLP, and MSP. BENEFICT's predictive capability is validated by experiments performed on Amazon and Yelp datasets, consistently outperforming other state-of-the-art models. Moreover, its explanation generation capability is verified by human judges. We argue that our work offers an avenue to help bridge the research gap between accuracy and explainability. In the future, we will consider incorporating other neural components, such as attention mechanisms, in improving the user-item modeling process. 
We also intend to enhance the expressiveness and the overall quality of the generated explanations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "http://jmcauley.ucsd.edu/data/amazon/ 2 https://github.com/danielfrg/kaggle-yelp-recruitingcompetition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "First and foremost, we extend our gratitude to the hardworking anonymous reviewers for their valuable insights and suggestions. Likewise, we sincerely thank the judges in our explainability study, Dr. Ria Mae Borromeo and Ms. Verna Banasihan, for their time and participation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sequential and parallel algorithms for the generalized maximum subarray problem", "authors": [ { "first": "Sung", "middle": [ "Eun" ], "last": "Bae", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sung Eun Bae. 2007. Sequential and parallel algo- rithms for the generalized maximum subarray prob- lem.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A neural collaborative filtering model with interaction-based neighborhood", "authors": [ { "first": "Ting", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Ji-Rong", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wayne Xin", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "1979--1982", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ting Bai, Ji-Rong Wen, Jun Zhang, and Wayne Xin Zhao. 2017. A neural collaborative filtering model with interaction-based neighborhood. In Proceed- ings of the 2017 ACM on Conference on Information and Knowledge Management, pages 1979-1982.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic vs. crowdsourced sentiment analysis", "authors": [ { "first": "Ria", "middle": [ "Mae" ], "last": "Borromeo", "suffix": "" }, { "first": "Motomichi", "middle": [], "last": "Toyama", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 19th International Database Engineering & Applications Symposium", "volume": "", "issue": "", "pages": "90--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ria Mae Borromeo and Motomichi Toyama. 2015. Au- tomatic vs. crowdsourced sentiment analysis. In Proceedings of the 19th International Database En- gineering & Applications Symposium, pages 90-95.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Transnets: Learning to transform for recommendation", "authors": [ { "first": "Rose", "middle": [], "last": "Catherine", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the eleventh ACM conference on recommender systems", "volume": "", "issue": "", "pages": "288--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rose Catherine and William Cohen. 2017. Transnets: Learning to transform for recommendation. 
In Pro- ceedings of the eleventh ACM conference on recom- mender systems, pages 288-296.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Neural attentional rating regression with review-level explanations", "authors": [ { "first": "Chong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yiqun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shaoping", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2018, "venue": "International World Wide Web Conferences Steering Committee", "volume": "", "issue": "", "pages": "1583--1592", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chong Chen, Min Zhang, Yiqun Liu, and Shaoping Ma. 2018. Neural attentional rating regression with review-level explanations. In Proceedings of the 2018 World Wide Web Conference, pages 1583- 1592. International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic generation of natural language explanations", "authors": [ { "first": "Felipe", "middle": [], "last": "Costa", "suffix": "" }, { "first": "Sixun", "middle": [], "last": "Ouyang", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Dolog", "suffix": "" }, { "first": "Aonghus", "middle": [], "last": "Lawlor", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felipe Costa, Sixun Ouyang, Peter Dolog, and Aonghus Lawlor. 2018. Automatic generation of natural language explanations. In Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion, page 57. ACM.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deep sparse rectifier neural networks", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the fourteenth international conference on artificial intelligence and statistics", "volume": "", "issue": "", "pages": "315--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. 
In Pro- ceedings of the fourteenth international conference on artificial intelligence and statistics, pages 315- 323.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "770--778", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Trirank: Review-aware explainable recommendation by modeling aspects", "authors": [ { "first": "Xiangnan", "middle": [], "last": "He", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th ACM International on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "1661--1670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiangnan He, Tao Chen, Min-Yen Kan, and Xiao Chen. 2015. Trirank: Review-aware explainable recom- mendation by modeling aspects. In Proceedings of the 24th ACM International on Conference on Infor- mation and Knowledge Management, pages 1661- 1670. ACM.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Outer productbased neural collaborative filtering", "authors": [ { "first": "Xiangnan", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiaoyu", "middle": [], "last": "Du", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Jinhui", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.03912" ] }, "num": null, "urls": [], "raw_text": "Xiangnan He, Xiaoyu Du, Xiang Wang, Feng Tian, Jin- hui Tang, and Tat-Seng Chua. 2018. Outer product- based neural collaborative filtering. arXiv preprint arXiv:1808.03912.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Neural collaborative filtering", "authors": [ { "first": "Xiangnan", "middle": [], "last": "He", "suffix": "" }, { "first": "Lizi", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Hanwang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Liqiang", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th international conference on world wide web", "volume": "", "issue": "", "pages": "173--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collab- orative filtering. 
In Proceedings of the 26th inter- national conference on world wide web, pages 173- 182.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Matrix factorization techniques for recommender systems", "authors": [ { "first": "Yehuda", "middle": [], "last": "Koren", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Bell", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Volinsky", "suffix": "" } ], "year": 2009, "venue": "Computer", "volume": "42", "issue": "8", "pages": "30--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer, 42(8):30-37.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Fixing weight decay regularization in adam", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Image-based recommendations on styles and substitutes", "authors": [ { "first": "Julian", "middle": [], "last": "Mcauley", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Targett", "suffix": "" }, { "first": "Qinfeng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "", "middle": [], "last": "Hengel", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval", "volume": "", "issue": "", "pages": "43--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recom- mendations on styles and substitutes. In Proceed- ings of the 38th international ACM SIGIR confer- ence on research and development in information re- trieval, pages 43-52.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Textrank: Bringing order into text", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into text. In Proceedings of the 2004 con- ference on empirical methods in natural language processing, pages 404-411.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Probabilistic matrix factorization", "authors": [ { "first": "Andriy", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "", "middle": [], "last": "Russ R Salakhutdinov", "suffix": "" } ], "year": 2008, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "1257--1264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andriy Mnih and Russ R Salakhutdinov. 2008. Proba- bilistic matrix factorization. 
In Advances in neural information processing systems, pages 1257-1264.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A multi-criteria recommender system exploiting aspect-based sentiment analysis of users' reviews", "authors": [ { "first": "Cataldo", "middle": [], "last": "Musto", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Marco De Gemmis", "suffix": "" }, { "first": "Pasquale", "middle": [], "last": "Semeraro", "suffix": "" }, { "first": "", "middle": [], "last": "Lops", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the eleventh ACM conference on recommender systems", "volume": "", "issue": "", "pages": "321--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cataldo Musto, Marco de Gemmis, Giovanni Semer- aro, and Pasquale Lops. 2017. A multi-criteria recommender system exploiting aspect-based senti- ment analysis of users' reviews. In Proceedings of the eleventh ACM conference on recommender sys- tems, pages 321-325. ACM.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Explanation mining: Post hoc interpretability of latent factor models for recommendation systems", "authors": [ { "first": "Georgina", "middle": [], "last": "Peake", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "2060--2069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgina Peake and Jun Wang. 2018. Explanation min- ing: Post hoc interpretability of latent factor mod- els for recommendation systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2060- 2069. ACM.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Wic: the word-in-context dataset for evaluating context-sensitive meaning representations", "authors": [ { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1267--1273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Taher Pilehvar and Jose Camacho- Collados. 2019. Wic: the word-in-context dataset for evaluating context-sensitive meaning representa- tions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 1267-1273.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Mining of massive datasets", "authors": [ { "first": "Anand", "middle": [], "last": "Rajaraman", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "David Ullman", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anand Rajaraman and Jeffrey David Ullman. 2011. Mining of massive datasets. 
Cambridge University Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Social collaborative viewpoint regression with explainable recommendations", "authors": [ { "first": "Shangsong", "middle": [], "last": "Zhaochun Ren", "suffix": "" }, { "first": "Piji", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Shuaiqiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the tenth ACM international conference on web search and data mining", "volume": "", "issue": "", "pages": "485--494", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhaochun Ren, Shangsong Liang, Piji Li, Shuaiqiang Wang, and Maarten de Rijke. 2017. Social collabo- rative viewpoint regression with explainable recom- mendations. In Proceedings of the tenth ACM inter- national conference on web search and data mining, pages 485-494. ACM.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Why should i trust you?: Explaining the predictions of any classifier", "authors": [ { "first": "Sameer", "middle": [], "last": "Marco Tulio Ribeiro", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Singh", "suffix": "" }, { "first": "", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining", "volume": "", "issue": "", "pages": "1135--1144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explain- ing the predictions of any classifier. In Proceed- ings of the 22nd ACM SIGKDD international con- ference on knowledge discovery and data mining, pages 1135-1144. ACM.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "How to fine-tune bert for text classification?", "authors": [ { "first": "Chi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Yige", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.05583" ] }, "num": null, "urls": [], "raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? arXiv preprint arXiv:1905.05583.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Multi-pointer co-attention networks for recommendation", "authors": [ { "first": "Yi", "middle": [], "last": "Tay", "suffix": "" }, { "first": "Anh", "middle": [ "Tuan" ], "last": "Luu", "suffix": "" }, { "first": "Siu Cheung", "middle": [], "last": "Hui", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "2309--2318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2018. Multi-pointer co-attention networks for recommen- dation. 
In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2309-2318.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Worddriven and context-aware review modeling for recommendation", "authors": [ { "first": "Qianqian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Si", "middle": [], "last": "Li", "suffix": "" }, { "first": "Guang", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th ACM International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "1859--1862", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qianqian Wang, Si Li, and Guang Chen. 2018a. Word- driven and context-aware review modeling for rec- ommendation. In Proceedings of the 27th ACM In- ternational Conference on Information and Knowl- edge Management, pages 1859-1862.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Neural review rating prediction with hierarchical attentions and latent factors", "authors": [ { "first": "Xianchen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hongtao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Peiyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Fangzhao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hongyan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenjun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Xie", "suffix": "" } ], "year": 2019, "venue": "International Conference on Database Systems for Advanced Applications", "volume": "", "issue": "", "pages": "363--367", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xianchen Wang, Hongtao Liu, Peiyi Wang, Fangzhao Wu, Hongyan Xu, Wenjun Wang, and Xing Xie. 2019. Neural review rating prediction with hierar- chical attentions and latent factors. In International Conference on Database Systems for Advanced Ap- plications, pages 363-367. Springer.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Tem: Tree-enhanced embedding model for explainable recommendation", "authors": [ { "first": "Xiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiangnan", "middle": [], "last": "He", "suffix": "" }, { "first": "Fuli", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Liqiang", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2018, "venue": "International World Wide Web Conferences Steering Committee", "volume": "", "issue": "", "pages": "1543--1552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Wang, Xiangnan He, Fuli Feng, Liqiang Nie, and Tat-Seng Chua. 2018b. Tem: Tree-enhanced embedding model for explainable recommendation. In Proceedings of the 2018 World Wide Web Confer- ence, pages 1543-1552. 
International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Item silk road: Recommending items from information domains to social users", "authors": [ { "first": "Xiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiangnan", "middle": [], "last": "He", "suffix": "" }, { "first": "Liqiang", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "185--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Wang, Xiangnan He, Liqiang Nie, and Tat-Seng Chua. 2017. Item silk road: Recommending items from information domains to social users. In Pro- ceedings of the 40th International ACM SIGIR con- ference on Research and Development in Informa- tion Retrieval, pages 185-194.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Google's neural machine translation system", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Le", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Gao", "suffix": "" }, { "first": "", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2016, "venue": "Bridging the gap between human and machine translation", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.08144" ] }, "num": null, "urls": [], "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Pooled contextualized embeddings for named entity recognition", "authors": [ { "first": "Alan", "middle": [], "last": "Zakbik", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Bergmann", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "724--728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Zakbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. 
In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 724-728.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Joint representation learning for topn recommendation with heterogeneous information sources", "authors": [ { "first": "Yongfeng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qingyao", "middle": [], "last": "Ai", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "W Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "1449--1458", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yongfeng Zhang, Qingyao Ai, Xu Chen, and W Bruce Croft. 2017. Joint representation learning for top- n recommendation with heterogeneous information sources. In Proceedings of the 2017 ACM on Confer- ence on Information and Knowledge Management, pages 1449-1458.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Explainable recommendation: A survey and new perspectives", "authors": [ { "first": "Yongfeng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.11192" ] }, "num": null, "urls": [], "raw_text": "Yongfeng Zhang and Xu Chen. 2018. Explainable recommendation: A survey and new perspectives. arXiv preprint arXiv:1804.11192.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Explicit factor models for explainable recommendation based on phrase-level sentiment analysis", "authors": [ { "first": "Yongfeng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guokun", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yiqun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shaoping", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval", "volume": "", "issue": "", "pages": "83--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yongfeng Zhang, Guokun Lai, Min Zhang, Yi Zhang, Yiqun Liu, and Shaoping Ma. 2014. Explicit fac- tor models for explainable recommendation based on phrase-level sentiment analysis. In Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval, pages 83-92. ACM.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "The proposed BENEFICT architecture." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "RMSE comparison of BENEFICT variants using different user-item interaction functions. The solid lines pertain to the concatenation-MLP interaction function. On the other hand, the broken lines refer to the interaction function based on the element-wise product (EWP) and MLP." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Distribution of the judges' given usefulness scores based on US1." 
}, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "Distribution of the judges' given usefulness scores based on US2." }, "TABREF1": { "html": null, "num": null, "content": "", "type_str": "table", "text": "Statistics summary of the datasets." }, "TABREF3": { "html": null, "num": null, "content": "
", "type_str": "table", "text": "RMSE comparison of the recommender models. The best RMSE values are highlighted in bold. The last row shows the improvement gained by BENEFICT against the better performing baseline." }, "TABREF4": { "html": null, "num": null, "content": "
Explanation | US Scores
TF-IDF: 'Red' in 74? How could the three come together yet produce this Adult-Oriented stadium Rock? Let's not forget Palmer's beginnings in the Crazy World of Arthur Brown and Atomic Rooster. Or Wetton's bizarre phase with Uriah Heep. And Geoff Downes was nominally half of 'Buggles', whose minimal output was unashamed pop. The style of this, Asia's debut album wasn't a million miles from UK's eponymous LP of 1978, although it was distinctly more mainstream. I like this album, the best of all the Asia output that I've heard. I would have preferred the music to be a little more ambitious; there's a sense in which it's all been concocted to maximise the commercial return, which you couldn't say of UK. But it's a good, undemanding listen. | US1: 1.5, US2: 1.5
TextRank: Some of the tracks were really quite ... dare I say it, catchy. And there was even a Top 30-friendly single on the album ('Only Time will tell'). But wasn't this Carl Palmer - he of the 70s triple album and serious devotee of classical percussionist James Blades? And wasn't this also Steve Hose.... | US1: 2, US2: 2
", "type_str": "table", "text": "Some of the tracks were really quite ... dare I say it, catchy. And there was even a Top 30-friendly single on the album ('Only Time will tell'). But wasn't this Carl Palmer -he of the 70s triple album and serious devotee ofclassical percussionist James Blades? And wasn't this also Steve Hose -he of another 70s triple album and several serious solo albums. And hadn't John Wetton starred on the seriously serious BENEFICT: .....The style of this, Asia's debut album wasn't a million miles from UK's eponymous LP of 1978, although it was distinctly more mainstream.I like this album, the best of all the Asia output that I've heard. I would have preferred the music to be a little more ambitious; there's a sense in which it's all been concocted to maximise the commercial return, which you couldn't say of UK....." }, "TABREF6": { "html": null, "num": null, "content": "
Method | US1 QWK | US2 QWK
TF-IDF | 0.1924 | 0.1921
TextRank | 0.0625 | -0.0073
BENEFICT | 0.2019 | 0.4705
", "type_str": "table", "text": "Mean usefulness scores of explanations assessed by the judges, based on US1 and US2." }, "TABREF7": { "html": null, "num": null, "content": "", "type_str": "table", "text": "The strength of inter-judge agreement for both US1 and US2 given by the QWK values." } } } }