ACL-OCL / Base_JSON /prefixH /json /hcinlp /2021.hcinlp-1.9.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:35:30.134210Z"
},
"title": "Can You Distinguish Truthful from Fake Reviews? User Analysis and Assistance Tool for Fake Review Detection",
"authors": [
{
"first": "Jeonghwan",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {},
"email": "jeonghwankim123@kaist.ac.kr"
},
{
"first": "Junmo",
"middle": [],
"last": "Kang",
"suffix": "",
"affiliation": {},
"email": "junmo.kang@kaist.ac.kr"
},
{
"first": "Suwon",
"middle": [],
"last": "Shin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sung-Hyon",
"middle": [],
"last": "Myaeng",
"suffix": "",
"affiliation": {},
"email": "myaeng@kaist.ac.kr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Customer reviews are useful in providing an indirect, secondhand experience of a product. People often use reviews written by other customers as a guideline prior to purchasing a product/service or as a basis for acquiring information directly or through question answering. Such behavior signifies the authenticity of reviews in e-commerce platforms. However, fake reviews are increasingly becoming a hassle for both consumers and product owners. To address this issue, we propose You Only Need Gold (YONG), an assistance tool for detecting fake reviews and augmenting user discretion. Our experimental results show the poor human performance on fake review detection, substantially improved user capability given our tool, and the ultimate need for user reliance on the tool.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Customer reviews are useful in providing an indirect, secondhand experience of a product. People often use reviews written by other customers as a guideline prior to purchasing a product/service or as a basis for acquiring information directly or through question answering. Such behavior signifies the authenticity of reviews in e-commerce platforms. However, fake reviews are increasingly becoming a hassle for both consumers and product owners. To address this issue, we propose You Only Need Gold (YONG), an assistance tool for detecting fake reviews and augmenting user discretion. Our experimental results show the poor human performance on fake review detection, substantially improved user capability given our tool, and the ultimate need for user reliance on the tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The increasing prominence of e-commerce platforms gave rise to numerous customer-written reviews. The reviews, given their authenticity, provide important secondhand experience to other potential customers or to information-seeking functions such as search and question answering. Meanwhile, fake reviews are increasingly becoming a social problem in e-commerce platforms (Chakraborty et al., 2016; Rout et al., 2018; Ellson, 2018) . Such deceptive reviews are either incentivized by the beneficiaries (i.e., sellers, marketers) or motivated by those with malicious intention to damage the reputation of the target product.",
"cite_spans": [
{
"start": 372,
"end": 398,
"text": "(Chakraborty et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 399,
"end": 417,
"text": "Rout et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 418,
"end": 431,
"text": "Ellson, 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To date, there have been many studies (Kim et al., 2015; Wang et al., 2017; Aghakhani et al., 2018; You et al., 2018; Kennedy et al., 2019 ) that address fake review detection in the field of natural language processing (NLP). The use of highperformance deep neural networks such as BERT Figure 1 : YONG -a prototype interface. Given the review input by user, YONG shows 1) whether it is gold or fake with 2) the probability (%), and 3) evidence that shows how much each word contributes to model's final decision. The highlighted evidences show the top p% proportion of the contributors, where the proportion can be adjusted using the horizontal slider bar. (Devlin et al., 2018) , fast and scalable anomaly detection algorithms like DenseAlert have made effective and promising contributions in detection of fraudulent reviews. Despite such contributions, these approaches only focus on better modeling to improve the accuracy in fake review detection, instead of its practical applications such as assisting users to distinguish fake reviews (i.e., an assistance tool) or filtering out deceptive texts for review-based question answering (QA) (Gupta et al., 2019) .",
"cite_spans": [
{
"start": 38,
"end": 56,
"text": "(Kim et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 57,
"end": 75,
"text": "Wang et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 76,
"end": 99,
"text": "Aghakhani et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 100,
"end": 117,
"text": "You et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 118,
"end": 138,
"text": "Kennedy et al., 2019",
"ref_id": "BIBREF9"
},
{
"start": 659,
"end": 680,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 1146,
"end": 1166,
"text": "(Gupta et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 288,
"end": 296,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the line of Human-Computer interaction (HCI), there have been a variety of studies on customer reviews (Wu et al., 2010; Alper et al., 2011; Yatani et al., 2011; Zhang et al., 2020) . While there are gold reviews, which are authentic, real-user written reviews, there are fake, deceptive reviews as well. All of the previous works on review visualization and interaction implicitly assume the authenticity of collected reviews. Furthermore, the previous works mentioned above confine the scope of research on reviews to interaction and visualization.",
"cite_spans": [
{
"start": 106,
"end": 123,
"text": "(Wu et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 124,
"end": 143,
"text": "Alper et al., 2011;",
"ref_id": "BIBREF2"
},
{
"start": 144,
"end": 164,
"text": "Yatani et al., 2011;",
"ref_id": "BIBREF26"
},
{
"start": 165,
"end": 184,
"text": "Zhang et al., 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We claim that user capabilities of distinguishing fake reviews are seriously unreliable as suggested in (Lee et al., 2016) , and this motivates our work as practical research for helping humans discern fake reviews. While the two lines of research in NLP and HCI have focused on improving the fake review detection models and developing effective visualization of reviews, respectively, the actual victims (the users) of fake reviews are being neglected. The challenges of carefully curated deceptive reviews on the Web necessitate the need for an assistance tool that helps users avoid fraudulent information.",
"cite_spans": [
{
"start": 104,
"end": 122,
"text": "(Lee et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we propose You Only Need Gold (YONG) (Figure 1 ), a simple assistance tool that augments user discretion in fake review detection. YONG is built upon the body of previous studies by fine-tuning BERT and providing a self-explanatory gold indicator to assist users in exploring customer reviews. Through a series of user evaluations, we reveal the over-confident nature of people despite their poor performance in distinguishing fake reviews from real ones and the need to implement an explainable, human-understandable features to guide user decisions in fake review detection. We also demonstrate that the application of YONG effectively augments user performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 60,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions is two-fold, the tool and an extensive user understanding, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 An easy-to-use tool for fake review detection with the intuitive gold indicator, which consists of the following three features: (i) Model Decision, (ii) Percentage Indicator (%), and (iii) Evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 User analysis on fake review detection. Our work sheds a light on how susceptible human judgment is to deceptive text. Understanding human decisions and behaviors in discerning fake reviews provides an insight into the design considerations of a system or tool for fake reviews. (Ott et al., 2013 (Ott et al., , 2011 data set and automatically obtained Yelp (Rayana and Akoglu, 2015) data set. Another work proposes a generative framework for fake review generation (Adelani et al., 2020) . It proposes a pipeline of GPT-2 (Radford et al., 2019) and BERT to generate fake reviews. Due to the remarkable fluency of these reviews, its participants failed to identify fake reviews, and surprisingly, gave higher fluency score to the generated reviews than to the gold reviews. Similar result is evidenced in (Donahue et al., 2020), where humans have difficulty identifying machine-generated sentences. Other related works (Lee et al., 2016) use probabilistic methods such as Latent Dirichlet Analysis (LDA) to discover word choice patterns and linguistic characteristics of fake reviews.",
"cite_spans": [
{
"start": 281,
"end": 298,
"text": "(Ott et al., 2013",
"ref_id": "BIBREF16"
},
{
"start": 299,
"end": 318,
"text": "(Ott et al., , 2011",
"ref_id": "BIBREF17"
},
{
"start": 360,
"end": 385,
"text": "(Rayana and Akoglu, 2015)",
"ref_id": "BIBREF19"
},
{
"start": 468,
"end": 490,
"text": "(Adelani et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 525,
"end": 547,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 921,
"end": 939,
"text": "(Lee et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The growth of customer reviews sparked research on its interaction and visualization. Visualization tools like OpinionBlocks (Alper et al., 2011) provide an interactive visualization for better organization of the reviews in bi-polar sentiments. Similar works like OpinionSeer (Wu et al., 2010) and Review Spotlight (Yatani et al., 2011) are also focused on providing an accessible and interactive visualization of reviews. The major drawback of these works is that they naively assume the authenticity of the reviews. From the previous lines of work, we argue the threat of fake reviews and their imminence with the rise of generative models (e.g., GPT-2) necessitate the use of our tool.",
"cite_spans": [
{
"start": 125,
"end": 145,
"text": "(Alper et al., 2011)",
"ref_id": "BIBREF2"
},
{
"start": 277,
"end": 294,
"text": "(Wu et al., 2010)",
"ref_id": "BIBREF25"
},
{
"start": 316,
"end": 337,
"text": "(Yatani et al., 2011)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Review Interaction and Visualization",
"sec_num": "2.2"
},
{
"text": "We build a tool that provides straightforward, intuitive features to guide user decision in fake review detection. The tool is built upon a state-of-the-art NLP model fine-tuned on the OpSpam data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "The tool we propose is a prototype for receiving a review text as an input and returns whether it is \"Gold\" or \"Fake\". YONG is an easy-to-use tool with three features collectively referred to as the gold indicator -namely, (i) Model Decision, (ii) Probability (%) and (iii) Evidence. Model Decision is the model output on the top right corner of Figure 1 as either Gold or Fake. The Probability (%) is the softmax output, which is also the model (Kennedy et al., 2019) 0.891 BERT (Ours) 0.896 confidence for its decision. The word highlights, which we also define as the Evidence, is a visualization of the attention weights from the last layer of BERT to provide an interpretable medium to model's decision for its users.",
"cite_spans": [
{
"start": 446,
"end": 468,
"text": "(Kennedy et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 346,
"end": 354,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "You Only Need Gold (YONG)",
"sec_num": "3.1"
},
{
"text": "To validate the claim made by a previous work (Kennedy et al., 2019) that BERT outperforms other baselines in fake review detection and to employ the most effective existing approach to fake review detection in our tool, we compare the performance of BERT against other classification models on the OpSpam (Ott et al., 2011 (Ott et al., , 2013 data set.",
"cite_spans": [
{
"start": 46,
"end": 68,
"text": "(Kennedy et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 306,
"end": 323,
"text": "(Ott et al., 2011",
"ref_id": "BIBREF17"
},
{
"start": 324,
"end": 343,
"text": "(Ott et al., , 2013",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fake Review Detection Model",
"sec_num": "3.2"
},
{
"text": "In Table 1 , we report the results of non-neural and neural network models under the 5-fold cross validation setting from (Kennedy et al., 2019) . Our implementation of BERT outperforms all the other baseline models, reaching a 90% accuracy. Based on the model performances on OpSpam data set in Table 1 and the renowned language understanding capability of BERT (Devlin et al., 2018) , we decide to employ BERT in our tool. We also finetune our BERT with the HuggingFace (Wolf et al., 2020) version of bert-base-uncased, where the [CLS] embedding is used for binary sequence classification.",
"cite_spans": [
{
"start": 122,
"end": 144,
"text": "(Kennedy et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 363,
"end": 384,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 472,
"end": 491,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 296,
"end": 303,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Fake Review Detection Model",
"sec_num": "3.2"
},
{
"text": "To fine-tune BERT and to choose carefully curated data on fake review detection, we use the Deceptive Opinion Spam corpus, also known as the OpSpam data set (Ott et al., 2011 (Ott et al., , 2013 . This data set consists of a total of 1,600 review instances, which is divided into truthful (i.e., gold) and deceptive reviews. The gold reviews are extracted from the 20 most popular hotels in Chicago (800 gold reviews) and the set of fake reviews are built using Amazon Mechanical Turk (AMT) (800 fake reviews). The gold reviews are not artificially generated, but are carefully chosen based on the criteria of determining deception as an effort to ensure truthfulness in (Ott et al., 2011) . The data set is divided into a training and test set ratio of 8:2 in this work.",
"cite_spans": [
{
"start": 157,
"end": 174,
"text": "(Ott et al., 2011",
"ref_id": "BIBREF17"
},
{
"start": 175,
"end": 194,
"text": "(Ott et al., , 2013",
"ref_id": "BIBREF16"
},
{
"start": 671,
"end": 689,
"text": "(Ott et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training & Dataset",
"sec_num": "3.3"
},
{
"text": "4 Experiment and Result",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training & Dataset",
"sec_num": "3.3"
},
{
"text": "To validate the usefulness of our tool and further the understanding of users in the application of our tool, we define the following research questions (RQ): RQ1. \"How do humans fare against machines on fake review detection?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Question",
"sec_num": "4.1"
},
{
"text": "RQ2. \"Does YONG augment human performance on the task?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Question",
"sec_num": "4.1"
},
{
"text": "RQ3. \"Can we increase the level of human trust on YONG by injecting prior knowledge about human and model performance?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Question",
"sec_num": "4.1"
},
{
"text": "RQ4. \"How much influence does each feature of the gold indicator have on human trust?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Question",
"sec_num": "4.1"
},
{
"text": "Here, we define the term \"trust\" as the level of human reliance on the tool's decision. To be specific, as we evaluate in Section 4.5, the level of trust is calculated on the participant-machine agreement (regardless of the ground-truth label). Through the experiments that correspond to the RQs, we build a concrete understanding of user behaviors in the presence of customer-generated reviews and the gold indicator through human performance evaluation on fake review detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Question",
"sec_num": "4.1"
},
{
"text": "We provided 10 reviews from the data set to a total of 24 participants. The 10 reviews were randomly sampled, resulting in correct-to-wrong ratio of the model prediction to 7:3 (i.e., 7 correct and 3 wrong model predictions; score = 0.70), and the Goldto-Fake ratio to 5:5. From Experiment 1 to 4 we assess the following criteria: human discretion, tool helpfulness, trust level on the tool, and feature-level influence on decisions made: Experiment 1. Users are required to classify fake reviews given 10 reviews without our tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Design",
"sec_num": "4.2"
},
{
"text": "Experiment 2. Users conduct the same task provided our tool (i.e. gold indicator).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Design",
"sec_num": "4.2"
},
{
"text": "Experiment 3. Users are shown their score and the model score for Experiment 1, and conduct the same task as in Experiment 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Design",
"sec_num": "4.2"
},
{
"text": "Experiment 4. Users are asked to score how each of the three features in our tool influences their decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Design",
"sec_num": "4.2"
},
{
"text": "We designed and built a separate Experiment test bed ( Figure 2 ) using React 1 and FastAPI 2 to conduct an extensive quantitative and qualitative experiments on the participants of our study. To alleviate the learning effect, the participants are made unaware of the ground-truth answers and stay oblivious to both their and model scores (until Experiment 3).",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 63,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experimental Design",
"sec_num": "4.2"
},
{
"text": "In Experiment 1, we assess the human performance on fake review detection. The participants are not given any hint -only the raw text. At the end of Experiment 1, we ask the participants enter their expected score. Here, the \"score\" corresponds to the number of correct decisions with respect to the ground-truth. In Table 2 , we see that the average score of the participants are at 0.41. This result contrasts with the high accuracy score of our model (Table 1 ). An interesting observation made from Experiment 1 is the level of confidence participants had. Their expected score was 0.65 while their actual score was, on average, 0.41. They assumed that they got on average of 2.4 more problems (out of 10) correct than their actual score.",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 454,
"end": 462,
"text": "(Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Human Discretion Assessment",
"sec_num": "4.3"
},
{
"text": "To evaluate the helpfulness of our tool in augmenting user performance on fake review detection, we provide the gold indicator as in Figure 2 . For a fair comparison, the participants were given the same reviews as in Experiment 1. In Table 2 , the accuracy increases substantially from 0.41 to 0.54 by simply providing the gold indicator with the review text. Before proceeding to Experiment 3trust level assessment, we provide the participants their own scores on Experiment 1 and the model score to see if the injection of prior bias (i.e., being aware of the performance gap) could influence the participants to more align their answers with those of the model.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 141,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 235,
"end": 242,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Helpfulness Assessment",
"sec_num": "4.4"
},
{
"text": "As previously stated, the participant is shown both scores prior to entering the experiment. The result shows an increase in the average score in Experiment 3 compared to that in Experiment 1. With the user awareness of their and model scores shown to improve the average participant score on the task, we decide calculate the change in the level of user trust on our tool. The trust level is calculated based on the proportion of overlapping answers ( Table 2) . The analysis on the trust level reveals two disparate groups: (i) Those who earned lower score than average in Experiment 1 and (ii) those who earned higher score than average in the same experiment. While the latter shows a drop in trust level from Experiment 2 to 3 (0.79 \u2192 0.71), the former shows an increase in the trust level (0.81 \u2192 0.83).",
"cite_spans": [],
"ref_spans": [
{
"start": 453,
"end": 461,
"text": "Table 2)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Trust Level Assessment",
"sec_num": "4.5"
},
{
"text": "From Experiment 1, 2 to 3, we see the increase in average accuracy from 0.41 to 0.54, and to 0.56, respectively. To evaluate the statistical significance of the changes, we apply one-way repeated ANOVA with a post-hoc test to see that there are significant differences between Experiments 1 and 2 (p <0.005) and Experiments 1 and 3 (p <0.005), and no statistically significant difference between Experiments 2 and 3. The result suggests that there is a large gap between human reasoning and datadriven model decision, and thus providing YONG contributes to an augmented human performance on fake review detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trust Level Assessment",
"sec_num": "4.5"
},
{
"text": "Aside from evaluating performance, we also measure the feature-wise influence on decision. We ask each participant to rate the three features in a 1-to-5 scale for how much influence did each feature have on their decision. Here, the scale at 1 represents \"No Influence\", while scale at 5 means \"Major Influence\". In Table 3 , we show the average rating per gold indicator feature. This result shows that the Probability (%) plays the primary role in convincing users that model prediction is correct. In other words, the higher the probability score, the more \"trustworthy\" the model decision becomes.",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Feature-Level Influence Assessment",
"sec_num": "4.6"
},
{
"text": "There are three major findings in this work. In a fake review detection setting, (i) human capability is unreliable and needs machine assistance, (ii) the interpretable hidden weights are hardly explicable, and (ii) for DNN-powered assistive tools like YONG, it is essential to provide faith-gaining features for its users. A notable finding is the difference between interpretability and explainability. In this work, we distinguish the two terms to ease their equivocal use. Interpretability refers to the property of being able to describe the cause and effect of the model through observation, whereas explainability refers to the human-understandable qualities that can be accepted on the terms of human logic and intuition. Although numerous existing works (Lee et al., 2017; Vig, 2019; Kumar et al., 2020) have repeatedly provided an interpretable look into the model's decision making process through layerwise hidden vector or attention weight visualization, most stop at showing the colored vectors and matrices. In Figure 2 , we can see how the model places much attention on proper and common nouns like \"Chicago,\" and \"morning,\" that fail to disambiguate and explain the reasoning process of our model for a human to understand. We can also observe in Table 3 , where the evidence (i.e., highlighted words based on the attention weights) shows the lowest influence score on the users of our tool. This result implies that users found the feature either obsolete or inexplicable, leading to the low reference to the respective feature. These findings are also supported by a number of previous works on attention weights' explainability (Jain and Wallace, 2019; Kobayashi et al., 2020) . (Jain and Wallace, 2019) show that the much touted attention weights' transparency for model decision is unaccounted for and do not provide meaningful explanations. 
(Kobayashi et al., 2020) , furthermore, shows that focusing only on the parallels between the attention weights and linguistic phenomena within the model is insufficient and thus requires a norm-based analysis. Based on such observation, a possible addition to our tool can be generating textual explanations or providing not only the token-level highlights as in Figure 2 , but also more high-level (e.g., sentence-, paragraph-level information) highlights that show a comprehensible process to model decision.",
"cite_spans": [
{
"start": 763,
"end": 781,
"text": "(Lee et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 782,
"end": 792,
"text": "Vig, 2019;",
"ref_id": "BIBREF22"
},
{
"start": 793,
"end": 812,
"text": "Kumar et al., 2020)",
"ref_id": null
},
{
"start": 1649,
"end": 1673,
"text": "(Jain and Wallace, 2019;",
"ref_id": "BIBREF8"
},
{
"start": 1674,
"end": 1697,
"text": "Kobayashi et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 1700,
"end": 1724,
"text": "(Jain and Wallace, 2019)",
"ref_id": "BIBREF8"
},
{
"start": 1865,
"end": 1889,
"text": "(Kobayashi et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 1026,
"end": 1034,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 1265,
"end": 1272,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 2229,
"end": 2237,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Our work proposed You Only Need Gold (YONG), an assistant tool for fake review detection. From a series of experiments, we deepened our understanding of human capability in fake review detection. We observed that people were generally overconfident with their ability to discern fake reviews from real ones, and we discovered that the model far outperforms its human counterparts, suggesting the need for effective design to convince users to trust the model decision. Furthermore, our work reveals the need to develop more \"explainable\" tools and promotes collaboration of users and the machine for fake review detection. For future work, expanding the scope of our tool to other fields such as products and restaurants would likely contribute to its generalizability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://reactjs.org 2 https://fastapi.tiangolo.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generating sentiment-preserving fake online reviews using neural language models and their human-and machine-based detection",
"authors": [
{
"first": "David",
"middle": [
"Ifeoluwa"
],
"last": "Adelani",
"suffix": ""
},
{
"first": "Haotian",
"middle": [],
"last": "Mai",
"suffix": ""
},
{
"first": "Fuming",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Huy",
"middle": [
"H"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Yamagishi",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Echizen",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Advanced Information Networking and Applications",
"volume": "",
"issue": "",
"pages": "1341--1354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Ifeoluwa Adelani, Haotian Mai, Fuming Fang, Huy H Nguyen, Junichi Yamagishi, and Isao Echizen. 2020. Generating sentiment-preserving fake online reviews using neural language models and their human-and machine-based detection. In International Conference on Advanced Information Networking and Applications, pages 1341-1354. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Detecting deceptive reviews using generative adversarial networks",
"authors": [
{
"first": "Hojjat",
"middle": [],
"last": "Aghakhani",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Machiry",
"suffix": ""
},
{
"first": "Shirin",
"middle": [],
"last": "Nilizadeh",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Kruegel",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Vigna",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Security and Privacy Workshops",
"volume": "",
"issue": "",
"pages": "89--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hojjat Aghakhani, Aravind Machiry, Shirin Nilizadeh, Christopher Kruegel, and Giovanni Vigna. 2018. Detecting deceptive reviews using generative adver- sarial networks. In 2018 IEEE Security and Privacy Workshops (SPW), pages 89-95. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Opinionblocks: Visualizing consumer reviews",
"authors": [
{
"first": "Basak",
"middle": [],
"last": "Alper",
"suffix": ""
},
{
"first": "Huahai",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eben",
"middle": [],
"last": "Haber",
"suffix": ""
},
{
"first": "Eser",
"middle": [],
"last": "Kandogan",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE VisWeek 2011 Workshop on Interactive Visual Text Analytics for Decision Making",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Basak Alper, Huahai Yang, Eben Haber, and Eser Kan- dogan. 2011. Opinionblocks: Visualizing consumer reviews. In IEEE VisWeek 2011 Workshop on Inter- active Visual Text Analytics for Decision Making.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Recent developments in social spam detection and combating techniques: A survey. Information Processing Management",
"authors": [
{
"first": "Manajit",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Sukomal",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Pramanik",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Ravindranath Chowdary",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "52",
"issue": "",
"pages": "1053--1073",
"other_ids": {
"DOI": [
"10.1016/j.ipm.2016.04.009"
]
},
"num": null,
"urls": [],
"raw_text": "Manajit Chakraborty, Sukomal Pal, Rahul Pramanik, and C. Ravindranath Chowdary. 2016. Recent de- velopments in social spam detection and combating techniques: A survey. Information Processing Man- agement, 52(6):1053-1073.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Enabling language models to fill in the blanks",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Mina",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2492--2501",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.225"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Donahue, Mina Lee, and Percy Liang. 2020. En- abling language models to fill in the blanks. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2492- 2501, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "2018. 'a third of tripadvisor reviews are fake' as cheats buy five stars",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Ellson",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Ellson. 2018. 'a third of tripadvisor reviews are fake' as cheats buy five stars.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Amazonqa: A review-based question answering task",
"authors": [
{
"first": "Mansi",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Raghuveer",
"middle": [],
"last": "Chanda",
"suffix": ""
},
{
"first": "Anirudha",
"middle": [],
"last": "Rayasam",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19",
"volume": "",
"issue": "",
"pages": "4996--5002",
"other_ids": {
"DOI": [
"10.24963/ijcai.2019/694"
]
},
"num": null,
"urls": [],
"raw_text": "Mansi Gupta, Nitish Kulkarni, Raghuveer Chanda, Anirudha Rayasam, and Zachary C. Lipton. 2019. Amazonqa: A review-based question answering task. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI- 19, pages 4996-5002. International Joint Confer- ences on Artificial Intelligence Organization.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Attention is not explanation",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.10186"
]
},
"num": null,
"urls": [],
"raw_text": "Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. arXiv preprint arXiv:1902.10186.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fact or factitious? contextualized opinion spam detection",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Niall",
"middle": [],
"last": "Walsh",
"suffix": ""
},
{
"first": "Kirils",
"middle": [],
"last": "Sloka",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccarren",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "344--350",
"other_ids": {
"DOI": [
"10.18653/v1/P19-2048"
]
},
"num": null,
"urls": [],
"raw_text": "Stefan Kennedy, Niall Walsh, Kirils Sloka, Andrew McCarren, and Jennifer Foster. 2019. Fact or fac- titious? contextualized opinion spam detection. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics: Student Re- search Workshop, pages 344-350, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep semantic frame-based deceptive opinion spam analysis",
"authors": [
{
"first": "Seongsoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Hyeokyoon",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Seongwoon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Minhwan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th ACM International on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1131--1140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seongsoon Kim, Hyeokyoon Chang, Seongwoon Lee, Minhwan Yu, and Jaewoo Kang. 2015. Deep se- mantic frame-based deceptive opinion spam analy- sis. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Man- agement, pages 1131-1140.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Attention is not only a weight: Analyzing transformers with vector norms",
"authors": [
{
"first": "Goro",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Tatsuki",
"middle": [],
"last": "Kuribayashi",
"suffix": ""
},
{
"first": "Sho",
"middle": [],
"last": "Yokoi",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7057--7075",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2020. Attention is not only a weight: Analyzing transformers with vector norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7057-7075.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Veerubhotla Aditya Srikanth, Aruna Malapati, and Lalita Bhanu Murthy Neti. 2020. Sarcasm detection using multi-head attention based bidirectional lstm",
"authors": [
{
"first": "Avinash",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": null,
"venue": "Vishnu Teja Narapareddy",
"volume": "8",
"issue": "",
"pages": "6388--6397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avinash Kumar, Vishnu Teja Narapareddy, Veerub- hotla Aditya Srikanth, Aruna Malapati, and Lalita Bhanu Murthy Neti. 2020. Sarcasm detection using multi-head attention based bidirectional lstm. IEEE Access, 8:6388-6397.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Interactive visualization and manipulation of attention-based neural machine translation",
"authors": [
{
"first": "Jaesong",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Joong-Hwi",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Jun-Seok",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "121--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaesong Lee, Joong-Hwi Shin, and Jun-Seok Kim. 2017. Interactive visualization and manipulation of attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 121-126.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Capturing word choice patterns with LDA for fake review detection in sentiment analysis",
"authors": [
{
"first": "Kyungah",
"middle": [],
"last": "Kyungyup Daniel Lee",
"suffix": ""
},
{
"first": "Sung-Hyon",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Myaeng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 6th International Conference on Web Intelligence, Mining and Semantics",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyungyup Daniel Lee, Kyungah Han, and Sung-Hyon Myaeng. 2016. Capturing word choice patterns with LDA for fake review detection in sentiment analysis. In Proceedings of the 6th International Conference on Web Intelligence, Mining and Semantics, pages 1-7.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Towards explainable nlp: A generative explanation framework for text classification",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qingyu",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00196"
]
},
"num": null,
"urls": [],
"raw_text": "Hui Liu, Qingyu Yin, and William Yang Wang. 2018. Towards explainable nlp: A generative explanation framework for text classification. arXiv preprint arXiv:1811.00196.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Negative deceptive opinion spam",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Jeffrey T",
"middle": [],
"last": "Hancock",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference of the north american chapter of the association for computational linguistics: human language technologies",
"volume": "",
"issue": "",
"pages": "497--501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Claire Cardie, and Jeffrey T Hancock. 2013. Negative deceptive opinion spam. In Proceedings of the 2013 conference of the north american chap- ter of the association for computational linguistics: human language technologies, pages 497-501.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Finding deceptive opinion spam by any stretch of the imagination",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"T"
],
"last": "Hancock",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "309--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T. Han- cock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 309-319, Portland, Oregon, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Collective opinion spam detection: Bridging review networks and metadata",
"authors": [
{
"first": "Shebuti",
"middle": [],
"last": "Rayana",
"suffix": ""
},
{
"first": "Leman",
"middle": [],
"last": "Akoglu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 21th acm sigkdd international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "985--994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shebuti Rayana and Leman Akoglu. 2015. Collec- tive opinion spam detection: Bridging review net- works and metadata. In Proceedings of the 21th acm sigkdd international conference on knowledge discovery and data mining, pages 985-994.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A framework for fake review detection: issues and challenges",
"authors": [
{
"first": "Jitendra",
"middle": [],
"last": "Kumar Rout",
"suffix": ""
},
{
"first": "Amiya",
"middle": [],
"last": "Kumar Dash",
"suffix": ""
},
{
"first": "Niranjan Kumar",
"middle": [],
"last": "Ray",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 International Conference on Information Technology (ICIT)",
"volume": "",
"issue": "",
"pages": "7--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jitendra Kumar Rout, Amiya Kumar Dash, and Niran- jan Kumar Ray. 2018. A framework for fake re- view detection: issues and challenges. In 2018 In- ternational Conference on Information Technology (ICIT), pages 7-10. IEEE.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Densealert: Incremental densesubtensor detection in tensor streams",
"authors": [
{
"first": "Kijung",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Hooi",
"suffix": ""
},
{
"first": "Jisu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Faloutsos",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1057--1066",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kijung Shin, Bryan Hooi, Jisu Kim, and Christos Faloutsos. 2017. Densealert: Incremental dense- subtensor detection in tensor streams. In Proceed- ings of the 23rd ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining, pages 1057-1066.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A multiscale visualization of attention in the transformer model",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Vig",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.05714"
]
},
"num": null,
"urls": [],
"raw_text": "Jesse Vig. 2019. A multiscale visualization of at- tention in the transformer model. arXiv preprint arXiv:1906.05714.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Handling cold-start problem in review spam detection by jointly embedding texts and behaviors",
"authors": [
{
"first": "Xuepeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "366--376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuepeng Wang, Kang Liu, and Jun Zhao. 2017. Han- dling cold-start problem in review spam detection by jointly embedding texts and behaviors. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 366-376.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Opinionseer: interactive visualization of hotel customer feedback",
"authors": [
{
"first": "Yingcai",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Shixia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Norman",
"middle": [],
"last": "Au",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Huamin",
"middle": [],
"last": "Qu",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE transactions on visualization and computer graphics",
"volume": "16",
"issue": "6",
"pages": "1109--1118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yingcai Wu, Furu Wei, Shixia Liu, Norman Au, Wei- wei Cui, Hong Zhou, and Huamin Qu. 2010. Opin- ionseer: interactive visualization of hotel customer feedback. IEEE transactions on visualization and computer graphics, 16(6):1109-1118.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Review spotlight: a user interface for summarizing user-generated reviews using adjective-noun word pairs",
"authors": [
{
"first": "Koji",
"middle": [],
"last": "Yatani",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Novati",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Trusty",
"suffix": ""
},
{
"first": "Khai N",
"middle": [],
"last": "Truong",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1541--1550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koji Yatani, Michael Novati, Andrew Trusty, and Khai N Truong. 2011. Review spotlight: a user in- terface for summarizing user-generated reviews us- ing adjective-noun word pairs. In Proceedings of the SIGCHI Conference on Human Factors in Com- puting Systems, pages 1541-1550.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "An attribute enhanced domain adaptive model for coldstart spam review detection",
"authors": [
{
"first": "Zhenni",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Tieyun",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1884--1895",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenni You, Tieyun Qian, and Bing Liu. 2018. An attribute enhanced domain adaptive model for cold- start spam review detection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1884-1895.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Teddy: A system for interactive review analysis",
"authors": [
{
"first": "Xiong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Engel",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Evensen",
"suffix": ""
},
{
"first": "Yuliang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wang-Chiew",
"middle": [],
"last": "Demiralp",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiong Zhang, Jonathan Engel, Sara Evensen, Yuliang Li, \u00c7 agatay Demiralp, and Wang-Chiew Tan. 2020. Teddy: A system for interactive review analysis. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1-13.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Experiment Test Bed for YONG -Experiment #2",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Accuracy performance comparison of conventional classification models against BERT on OpSpam."
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Average scores (i.e., the number of correct decision w.r.t. the ground truth) of participants and the model, and the level of trust (i.e., participant-model agreement). Here, low and high groups are those with below and above average scores in Exp. 1, respectively."
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Influence score for each feature of the gold indicator."
}
}
}
}