{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:10:24.944337Z" }, "title": "Robustness Analysis of Grover for Machine-Generated News Detection", "authors": [ { "first": "Rinaldo", "middle": [], "last": "Gagiano", "suffix": "", "affiliation": { "laboratory": "", "institution": "RMIT University", "location": { "country": "Australia" } }, "email": "" }, { "first": "Maria Myung-Hee", "middle": [], "last": "Kim", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Xiuzhen", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "RMIT University", "location": { "country": "Australia" } }, "email": "xiuzhen.zhang@rmit.edu.au" }, { "first": "Jennifer", "middle": [], "last": "Biggs", "suffix": "", "affiliation": {}, "email": "jennifer.biggs@dst.defence.gov.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Advancements in Natural Language Generation have raised concerns on its potential misuse for deep fake news. Grover is a model for both generation and detection of neural fake news. While its performance on automatically discriminating neural fake news surpassed GPT-2 and BERT, Grover could face a variety of adversarial attacks to deceive detection. In this work, we present an investigation of Grover's susceptibility to adversarial attacks such as characterlevel and word-level perturbations. The experiment results show that even a singular character alteration can cause Grover to fail, affecting up to 97% of target articles with unlimited attack attempts, exposing a lack of robustness. We further analyse these misclassified cases to highlight affected words, identify vulnerability within Grover's encoder, and perform a novel visualisation of cumulative classification scores to assist in interpreting model behaviour.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Advancements in Natural Language Generation have raised concerns on its potential misuse for deep fake news. Grover is a model for both generation and detection of neural fake news. While its performance on automatically discriminating neural fake news surpassed GPT-2 and BERT, Grover could face a variety of adversarial attacks to deceive detection. In this work, we present an investigation of Grover's susceptibility to adversarial attacks such as characterlevel and word-level perturbations. The experiment results show that even a singular character alteration can cause Grover to fail, affecting up to 97% of target articles with unlimited attack attempts, exposing a lack of robustness. We further analyse these misclassified cases to highlight affected words, identify vulnerability within Grover's encoder, and perform a novel visualisation of cumulative classification scores to assist in interpreting model behaviour.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Online disinformation has become a crucial issue in current society and has been the focus of extensive study in recent years (Buning, 2018; Fletcher, 2018; Zerback, 2020) . Fake news, one form of online disinformation, can deceive people with intent of monetary gain, political slander, or entity discreditation (Quandt et al., 2019) . 
While current sources of fake news are mainly human-authored, recent developments in Natural Language Generation (NLG) (Radford et al., 2018, 2019; Brown et al., 2020) have made it possible to produce neural fake news 1 at scale. The key problem with this technology is that it is hard for humans to distinguish machine-generated text from human-produced text (Heaven, 2020; Hao, 2020) .", "cite_spans": [ { "start": 126, "end": 140, "text": "(Buning, 2018;", "ref_id": "BIBREF4" }, { "start": 141, "end": 156, "text": "Fletcher, 2018;", "ref_id": "BIBREF8" }, { "start": 157, "end": 171, "text": "Zerback, 2020)", "ref_id": null }, { "start": 313, "end": 334, "text": "(Quandt et al., 2019)", "ref_id": null }, { "start": 465, "end": 479, "text": "(Radford et al., 2018,", "ref_id": "BIBREF18" }, { "start": 480, "end": 496, "text": "2019;", "ref_id": "BIBREF19" }, { "start": 497, "end": 509, "text": "Brown et al., 2020)", "ref_id": "BIBREF3" }, { "start": 704, "end": 718, "text": "(Heaven, 2020;", "ref_id": "BIBREF11" }, { "start": 719, "end": 729, "text": "Hao, 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To counter the rising threat of neural fake news, an automatic discriminator has been developed that can serve as a defence mechanism. In 2019, Grover (Generating aRticles by Only Viewing mEtadata Records) (Zellers et al., 2019), a neural fake news generator and discriminator, was released to the public. As a generator, it generates formal news articles (including title, domain, authors, and date) from given contextual metadata. As a discriminator, it detects the difference between machine- and human-produced articles. By utilising articles produced by the generator, Grover's discriminator achieved 92% accuracy, while detectors based on deep contextual language models, including GPT-2 and BERT, achieved 73% (Zellers et al., 2019) .", "cite_spans": [ { "start": 151, "end": 172, "text": "(Zellers et al., 2019", "ref_id": "BIBREF26" }, { "start": 708, "end": 730, "text": "(Zellers et al., 2019)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Grover can be misused by adversaries to mass-produce plausible disinformation. For example, Grover-generated propaganda articles were rated by human judges as more trustworthy than human-produced ones of the same context (Zellers et al., 2019) . Given this alarming ability, the capability to automatically detect the differences between machine- and human-produced articles can reduce the risk of neural fake news spreading online.", "cite_spans": [ { "start": 220, "end": 242, "text": "(Zellers et al., 2019)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Following the establishment of text-based perturbations by Jia and Liang (2017) , studies on robustness interpretability through adversarial examples have grown rapidly throughout the Natural Language Processing (NLP) community (Vadillo, 2021; Zafar, 2021; Yuan, 2021) . Since then, there have been several attempts to manipulate NLP models through character-level alterations of their input text. For example, Belinkov and Bisk (2017) demonstrated that synthetic and natural noise can cause state-of-the-art machine translation models to fail. 
Gao (2018) also proposed DeepWordBug, a novel algorithm for small character perturbations that cause drastic classification inaccuracies in tasks such as text classification, sentiment analysis, and spam detection. These studies conducted character-level perturbations to identify a lack of robustness within various mainstream language models.", "cite_spans": [ { "start": 59, "end": 79, "text": "Jia and Liang (2017)", "ref_id": "BIBREF14" }, { "start": 225, "end": 240, "text": "(Vadillo, 2021;", "ref_id": "BIBREF21" }, { "start": 241, "end": 253, "text": "Zafar, 2021;", "ref_id": "BIBREF25" }, { "start": 254, "end": 265, "text": "Yuan, 2021)", "ref_id": "BIBREF24" }, { "start": 401, "end": 425, "text": "Belinkov and Bisk (2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In a similar manner, Grover, when acting as a defence mechanism against neural fake news, can face heavy adversarial scrutiny. Thus, following the direction of recent studies (Belinkov and Bisk, 2017; Gao, 2018) , we conducted analyses through various adversarial attacks including character-level and token-level perturbations.", "cite_spans": [ { "start": 175, "end": 200, "text": "(Belinkov and Bisk, 2017;", "ref_id": "BIBREF1" }, { "start": 201, "end": 211, "text": "Gao, 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper presents an investigation of Grover to examine its performance change under various adversarial attacks. In our assessment, we find that Grover is highly susceptible to adversarial attacks, with around 93% of target articles vulnerable to misclassification after alteration. Analysing the effects of successful perturbations, we identify a weakness within the model's encoding framework that influences Grover's classification scoring, with recorded score variations of 0.74 on average. In this work, we introduce our novel visualisation of cumulative classification scores for various unaltered and altered articles and explore the classification score polarity induced by adversarial attacks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organised as follows. Section 2 reviews related work and Section 3 gives a general summary of Grover. Section 4 presents the adversarial attack experiments. Section 5 reports the results of the experiments along with error analysis. Section 6 presents the cumulative classification score visualisation and an analysis of extreme polarity change. Finally, Section 7 presents our concluding discussion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent studies on adversarial attacks in NLP follow a white-box approach leveraging accessible information from within a model, as surveyed by Zhang et al. (2020). Many studies have utilised a white-box gradient-based approach for various attacks such as character-based alterations (Ebrahimi, 2017, 2018) , word-based alterations (Cheng, 2020; Neekhara, 2018) , and word-based concatenations (Wallace, 2019; Behjati, 2019) . Blohm (2018) used white-box model attention to attack a reading comprehension model as well as a question answering model. Contrary to the white-box approach, Wolff and Wolff (2020) adopted a black-box approach and performed homoglyph and misspelling attacks on a variety of neural text classifiers including GPT-2, GLTR, RoBERTa, and Grover. 
They conducted adversarial attacks on 20 sample Machine articles to draw comparisons between leading neural classifiers and Grover, yet refrained from exploring the results of Grover's classification in detail. Our work adopts the attack concepts from Wolff and Wolff's work (2020) but explores singular applications of the attacks, rather than multiple applications. We also focus our analysis solely on Grover, studying the effect the attacks produce on Grover and its potentially fragile points within the framework.", "cite_spans": [ { "start": 262, "end": 277, "text": "(Ebrahimi, 2017,", "ref_id": "BIBREF6" }, { "start": 278, "end": 295, "text": "2018)", "ref_id": "BIBREF7" }, { "start": 321, "end": 334, "text": "(Cheng, 2020;", "ref_id": "BIBREF5" }, { "start": 335, "end": 350, "text": "Neekhara, 2018)", "ref_id": "BIBREF16" }, { "start": 383, "end": 398, "text": "(Wallace, 2019;", "ref_id": "BIBREF22" }, { "start": 399, "end": 413, "text": "Behjati, 2019)", "ref_id": "BIBREF0" }, { "start": 416, "end": 428, "text": "Blohm (2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Visualising a language model's outcome to increase the model's interpretability is another recent trend in NLP. Gehrmann (2019) introduced GLTR, a visualisation tool (using statistical methods) that can detect generation artifacts across a sample and display its findings through coloured annotation of the input to support human detection of fake text. Stemming from this concept, we propose a novel visualisation approach through the plotting of cumulative classification scores. Our visualisation method aims to help a user interpret how Grover is affected at each word vector and to highlight key alteration artifacts within an article.", "cite_spans": [ { "start": 110, "end": 125, "text": "Gehrmann (2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The generator component of Grover comprises a novel architecture with adapted components of GPT-2. Grover, as shown in Figure 1 , can generate the domain, date, headline, body, or author of a news article, given any subsetted combination of these fields. The generator comes in three versions: Grover-Base, consisting of 12 layers and 124 million parameters; Grover-Large, consisting of 24 layers and 355 million parameters; and Grover-Mega, with 48 layers and 1.5 billion parameters, matching GPT-2's architecture. Each was trained on a successively larger dataset comprising real news articles scraped from Common Crawl 2 .", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 127, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Generator", "sec_num": null }, { "text": "The discriminator component of Grover acts as a detector of neurally generated articles. Utilising articles produced by the generator, the discriminator is trained to differentiate between machine-generated articles and human-produced articles. Articles can be classified on their own or with additional metadata such as domain, date, headline, and author, which aids prediction strength.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminator", "sec_num": null }, { "text": "The functionality of Grover's discriminator, given either machine-generated articles (labelled as Machine) or human-produced articles (labelled as Human), is to produce a classification label of 'Human' or 'Machine' for each article. 
Input articles contain the body of an article, with or without metadata (title, domain, date, or authors). To assess Grover's robustness, we conducted experiments on the discriminator's classification accuracy when classifying altered (adversarially attacked) Machine articles. Minor alterations (altering only one character or one word in a whole news article) were performed on a subset of Machine articles, applying four methods of adversarial attack: (1) upper/lower flip, (2) homoglyph, (3) whitespace, and (4) misspelling. After each attack, the altered articles were submitted to Grover's discriminator for reclassification and the classification results were investigated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For the experiments, the publicly available pre-trained Grover-Mega discriminator was used; the set-up contains the Grover-Mega config file and the necessary checkpoints 3 . We ran the discriminator in its GPU configuration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminator Setup", "sec_num": null }, { "text": "Grover provides a dataset containing 12,000 articles with metadata 4 ; it consists of 8,000 Human articles (from the RealNews dataset 5 ) and 4,000 Machine articles, which were generated using Grover's generator (Grover-Mega). Submitting this dataset to Grover's discriminator, we obtain the predictions seen in Table 1 . From these predictions we obtain an overall accuracy of 0.93, a precision of 0.85, a recall of 0.94, and an F1 score of 0.89.", "cite_spans": [], "ref_spans": [ { "start": 302, "end": 309, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "For our experiments, we sampled the 100 articles with the highest true positive (TP) classification scores produced by the discriminator. This subset will be referred to as the 100 Machine article subset. All selected articles have a classification score over 0.49, where 0.5 is the maximum score an article can be assigned for a 'Machine' classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "4 gs://grovermodels/discrimination/generator=medium~discriminator=grover~discsize=medium~dataset=p=0.96/checkpoint 5 https://github.com/rowanz/grover/tree/master/realnews ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "As news articles are written to a high level of coherency, with minimal punctuation or grammatical errors, an adversary would want to limit article alteration to preserve readability and ensure a human reader does not question the article's credibility. To simulate this mindset, we limit the application of an attack to only a single change, such as a one-character or one-word alteration per article, iterating the attack through the entirety of an article to assess all possible placements of each attack (a minimal sketch of this enumeration follows below). 
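To make the enumeration concrete, below is a minimal sketch of how single-change variants of an article could be generated for three of the four attacks described next. The function names and the three-entry homoglyph table are ours for illustration only; they are not part of the Grover codebase, and the full substitution tables appear in the appendix.

```python
# Illustrative sketch of single-change attack enumeration (not Grover code).
from typing import Iterator

# Tiny sample of Cyrillic look-alikes; the paper uses 19 Greek and 30
# Cyrillic substitutions in total (see the appendix).
HOMOGLYPHS = {"e": "\u0435", "o": "\u043e", "a": "\u0430"}

def upper_lower_flips(text: str) -> Iterator[str]:
    """Yield every variant of the article with exactly one letter's case flipped."""
    for i, ch in enumerate(text):
        if ch.isalpha():
            yield text[:i] + ch.swapcase() + text[i + 1:]

def homoglyph_swaps(text: str) -> Iterator[str]:
    """Yield every variant with exactly one character replaced by a look-alike."""
    for i, ch in enumerate(text):
        if ch in HOMOGLYPHS:
            yield text[:i] + HOMOGLYPHS[ch] + text[i + 1:]

def whitespace_removals(text: str) -> Iterator[str]:
    """Yield every variant with exactly one inter-word space removed."""
    for i, ch in enumerate(text):
        if ch == " ":
            yield text[:i] + text[i + 1:]
```

Each variant differs from the original article in exactly one position; every variant is resubmitted to the discriminator, and an article counts as affected if any single variant flips its label from 'Machine' to 'Human'.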
As demonstrated in Table 2 , the following four types of adversarial attacks were applied in the experiments:", "cite_spans": [], "ref_spans": [ { "start": 559, "end": 566, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Adversarial Attack Parameters", "sec_num": null }, { "text": "(1) Upper/Lower Flip: Uppercasing or lowercasing of a letter originally lowercased or uppercased, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Attack Parameters", "sec_num": null }, { "text": "(2) Homoglyph: Replacement of certain characters with their homoglyph equivalent from either the Greek or Cyrillic alphabet 6 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Attack Parameters", "sec_num": null }, { "text": "(3) Whitespace: Removal of a space between adjacent words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Attack Parameters", "sec_num": null }, { "text": "(4) Misspelling: Replacement of certain words with their common misspellings, drawn from a list of commonly misspelled English words on Wikipedia 7 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Attack Parameters", "sec_num": null }, { "text": "We present the results from our adversarial attack experiments on Grover. As shown in Table 3 , character-level attacks (U/L Flip and Homoglyph) produce a higher number of altered articles than word-level attacks (Whitespace and Misspelling). Relative to the number of alterations, the Misspelling attack achieved the highest misclassification rate (nearly 10%), compared to the relatively lower rates of 2-4% for the other three attacks.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Adversarial Attack Results", "sec_num": null }, { "text": "Surprisingly, across the 100 Machine article subset, the Homoglyph, U/L Flip and Misspelling attacks affected 97%, 96% and 94% of the target articles, respectively. Even the simplest attack, the Whitespace attack, affected 85% of the 100 target Machine articles. This suggests that Grover is highly susceptible to adversarial efforts. Table 4 shows the ten most common words that affected (flipped the classification from 'Machine' to 'Human') Grover's discriminator during adversarial attacks. Table 3 : Classification results of all adversarial examples. Alterations indicate how many iterations of the specified attack were conducted across the dataset. 
Affected Articles indicates how many of the 100 target Machine articles had one or more misclassifications resulting from an alteration.", "cite_spans": [], "ref_spans": [ { "start": 331, "end": 338, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 491, "end": 498, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Adversarial Attack Results", "sec_num": null }, { "text": "Original \"A Romanian hospital will face a fine for leaving a towel in a patient's stomach\u2026\" Whitespace \"A Romanian hospital willface a fine for leaving a towel in a patient's stomach\u2026\" Upper/Lower Flip \"A Romanian hospital will face a fine for leavIng a towel in a patient's stomach\u2026\" Misspelling \"A Romanian hospital will face a fine for leaving a towel in a patient's stomache\u2026\" Homoglyph \"A Romanian hospital will face a fin\u0435* for leaving a towel in a patient's stomach\u2026\" Most misclassifications were caused by altering the words 'that', 'the' and 'to'. Noticeably, the majority of the affected words are stop words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Attack Results", "sec_num": null }, { "text": "We observed in general which words were altered to elicit a misclassification. To assess how character-level perturbations affect Grover, we examined how the model interprets and scores a given input. Grover uses a byte-pair encoder (BPE) to preprocess input data. BPE (Sennrich et al., 2015) splits a given input into its largest subword units based on character co-occurrence frequency distribution and assigns each unit a pre-determined pairing ID. This turns a tokenised input into a vector of numbers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoding", "sec_num": null }, { "text": "Previously, BPEs have been found to lack robustness when facing character-level perturbations (Heigold et al., 2017) . In Table 5 we can see the effect that the upper/lower flip attack has on a particular sequence from one of the articles. The uppercasing of the letter 'i' in 'hospital' changes the subword unit allocation. Originally encoded as [4437], 'hospItal' gets broken down into 'hosp', 'It', 'al' and then encoded as [10497, 1027, 283] .", "cite_spans": [ { "start": 103, "end": 125, "text": "(Heigold et al., 2017)", "ref_id": "BIBREF12" }, { "start": 432, "end": 439, "text": "[10497,", "ref_id": null }, { "start": 440, "end": 445, "text": "1027,", "ref_id": null }, { "start": 446, "end": 449, "text": "283", "ref_id": null } ], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Input Encoding", "sec_num": null }, { "text": "5 https://www.nltk.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Encoding", "sec_num": null }, { "text": "Grover produces a classification score at each word vector as it processes the input from left to right. If we successively and cumulatively feed Grover word vectors in sequential order, we can obtain a classification score at each step, allowing a cumulative classification score to be recorded (a minimal sketch of this procedure follows below). 
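A minimal sketch of the cumulative-scoring procedure follows. Here machine_prob is a hypothetical wrapper that returns the discriminator's 'Machine' probability for a sequence of token IDs (it is not part of Grover's published interface), and the HuggingFace GPT-2 BPE stands in for Grover's encoder, so the exact IDs will differ from Table 5 even though the subword splitting behaves analogously.

```python
# Sketch of cumulative classification scoring under the assumptions above.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# A single case flip changes the subword segmentation, as in Table 5:
print(tokenizer.tokenize(" hospital"))  # typically a single subword unit
print(tokenizer.tokenize(" hospItal"))  # split into several smaller units

def cumulative_scores(text, machine_prob):
    """Score every prefix of the encoded article, one token at a time."""
    ids = tokenizer.encode(text)
    return [machine_prob(ids[: i + 1]) for i in range(len(ids))]
```

Plotting these prefix scores against position in the encoded sequence yields the curves shown in Figures 2 to 5.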
Using the classification scores recorded at each increment as word vectors are appended to the accumulating input, we can visualise how these are perceived by Grover over the course of an entire input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual Analysis", "sec_num": "5" }, { "text": "Human Articles: Figure 2 illustrates the cumulative classification score of five randomly selected Human articles from the original 8,000 Human article dataset. At the initial processing of the sequence, all articles start at a strong 'Machine' classification. As more of the respective input is processed, we see the articles' classification scores increase toward 'Human' over time. It is observed that cumulative classification scores often plateau with greater encoded sequence lengths. Machine Articles: Figure 3 shows the cumulative classification scores of five randomly selected Machine articles from our target dataset. As seen in the visualisation of Human articles, the beginning of each sequence starts at a strong 'Machine' classification. Over the early stages of the sequence, we see high classification score variance due to the limited word vectors processed. Over time, the selected Machine articles tend to return to a strong 'Machine' classification, plateauing toward the end of the encoded sequence.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 24, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 509, "end": 517, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Cumulative Classification Score Visualisation", "sec_num": null }, { "text": "False Negative (FN) Case: Figure 4 presents the cumulative classification score of one of the misclassified articles from our experiments. The red line indicates the location of the adversarial attack within the encoded sequence. In this example, the input word 'that' was transformed into 'thaT' by U/L Flip attack which uppercased the second 't'. At the point where Grover processed the altered word vector, the classification score of the article dropped dramatically, falling a total of 0.98. This large variation in classification score due to alteration will be discussed in terms of 'Extreme Polarity Change' in section 5.2.", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 34, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Affected", "sec_num": null }, { "text": "True Positive (TP) Case: Figure 5 demonstrates the cumulative classification score of a Machine article that had its classification unaffected after an adversarial attack. Again, the red line indicates the location of the attack. In this example, the input word, 'These' was altered to 'these' by the U/L Flip attack which lowercased the first 'T'. This alteration causes a very minimal change in classification score at the site of alteration.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 33, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Affected", "sec_num": null }, { "text": "From visualising a FN case's cumulative classification scores, we observed a large change in classification score at the point of an adversarial attack. To analyse whether all FN cases show a drastic variation in classification score, we took a random sample of 500 FN case articles and 500 TP case articles from each of the four adversarial attacks. In total, we examined the 4,000 articles' classification score at each point of the adversarial attack. 
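To quantify this, the change in cumulative score at the attack site can be measured as sketched below, reusing the cumulative_scores helper from the previous snippet; attack_index, the position of the altered token in the encoded sequence, is assumed known from the attack-generation step, and all names remain illustrative.

```python
# Sketch of the polarity-change measurement at the site of an alteration.
def variation_at_attack(altered_text, attack_index, machine_prob):
    """Absolute jump in the cumulative score at the altered token position."""
    scores = cumulative_scores(altered_text, machine_prob)
    return abs(scores[attack_index] - scores[attack_index - 1])
```

Averaging this quantity over the FN and TP subsets gives the per-attack score variations compared in Tables 6 and 7.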
The average score variation of each subset is shown in Table 6 , which reports the average classification score variation at the point of an attack within an input. The FN cases had a much higher average variation in classification score than the TP cases, as shown in Table 7 . This implies that particular alterations caused Grover's classification score to drop dramatically at the site of an attack, ultimately affecting the final prediction produced by Grover.", "cite_spans": [], "ref_spans": [ { "start": 510, "end": 517, "text": "Table 6", "ref_id": null }, { "start": 630, "end": 637, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Extreme Polarity Change", "sec_num": null }, { "text": "In this study, the robustness of Grover's discriminator was assessed through various adversarial attacks. We found that even a single character change can cause the model to fail. Through analyses of successful perturbations, it was found that Grover's encoder is highly sensitive to selected perturbations, causing downstream effects in classification assignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We conducted a broad implementation of adversarial attacks and identified vulnerabilities to single alterations of certain types of words. These results outline potential dependencies within Grover's language modelling that could be exploited by adversaries, either by applying multiple instances of an adversarial attack across an article or by targeting more than one of the key words outlined in Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 433, "end": 440, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "To the best of our knowledge, the proposed visualisation of cumulative classification scores is novel, allowing interpretation of model behaviour: it gives a user the ability to visually understand the effect that each word vector has at its relative point of inference, as well as the effects that alterations may produce on the classification prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Our findings open various paths for further exploration. Our adversarial attacks were directed exclusively at the body of an article; one path for future work could be to focus adversarial attacks on the metadata of an article, further exploring Grover's robustness. Our visualisation of cumulative classification scores highlighted the effects some character-level alterations had on the classification score of an article. The large score variations noted could enable work in the field of adversarial attack detection. Finally, our assessment was broad and based on a black-box approach; furthering our work, a white-box approach could be undertaken to explore model interpretability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "From here on out, we will use 'neural fake news' and 'machine-generated fake news' interchangeably.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://commoncrawl.org/ 3 https://github.com/rowanz/grover/tree/master/discrimination", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use 19 different Greek substitutions and 30 different Cyrillic substitutions. 
All substitutions can be found in the appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "R. Gagiano is supported by a Defence Science and Technology Group Graduate Industry Placement. X. Zhang is supported by the ARC Discovery Project DP200101441.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Appendix A: Full list of Latin characters with their respective Greek and Cyrillic substitutions and all respective character Unicode values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supplementary Material", "sec_num": null }, { "text": "Latin: Letter ~ Unicode | Greek: Letter ~ Unicode | Cyrillic: Letter ~ Unicode. x ~ U+0078 | \u03a7 ~ U+03A7 | \u0425 ~ U+0425, \u0445 ~ U+0445. Y ~ U+0059, y ~ U+0079 | \u03a5 ~ U+03A5 | \u04ae ~ U+04AE, \u0443 ~ U+0443. Z ~ U+005A, z ~ U+007A | \u0396 ~ U+0396. (Fragment of the full table.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original (Basic Latin)", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Universal adversarial attacks on text classifiers", "authors": [ { "first": "Melika", "middle": [], "last": "Behjati", "suffix": "" }, { "first": "Seyed-Mohsen", "middle": [], "last": "Moosavi-Dezfooli", "suffix": "" }, { "first": "Mahdieh", "middle": [], "last": "Soleymani Baghshah", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Frossard", "suffix": "" } ], "year": 2019, "venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "7345--7349", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melika Behjati, Seyed-Mohsen Moosavi-Dezfooli, Mahdieh Soleymani Baghshah, and Pascal Frossard. \"Universal adversarial attacks on text classifiers.\" In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7345-7349. IEEE, 2019.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Synthetic and natural noise both break neural machine translation", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.02173" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and Yonatan Bisk. \"Synthetic and natural noise both break neural machine translation.\" arXiv preprint arXiv:1711.02173 (2017).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Comparing attention-based convolutional and recurrent neural networks: Success and limitations in machine reading comprehension", "authors": [ { "first": "Matthias", "middle": [], "last": "Blohm", "suffix": "" }, { "first": "Glorianna", "middle": [], "last": "Jagfeld", "suffix": "" }, { "first": "Ekta", "middle": [], "last": "Sood", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.08744" ] }, "num": null, "urls": [], "raw_text": "Matthias Blohm, Glorianna Jagfeld, Ekta Sood, Xiang Yu, and Ngoc Thang Vu. 
\"Comparing attention- based convolutional and recurrent neural networks: Success and limitations in machine reading comprehension.\" arXiv preprint arXiv:1808.08744 (2018).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Language models are few-shot learners", "authors": [ { "first": "Tom", "middle": [ "B" ], "last": "Brown", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.14165" ] }, "num": null, "urls": [], "raw_text": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan et al. \"Language models are few-shot learners.\" arXiv preprint arXiv:2005.14165 (2020).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A multi-dimensional approach to disinformation: Report of the independent High level Group on fake news and online disinformation", "authors": [ { "first": "Madeleine", "middle": [], "last": "De", "suffix": "" }, { "first": "Cock", "middle": [], "last": "Buning", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Madeleine de Cock Buning. \"A multi-dimensional approach to disinformation: Report of the independent High level Group on fake news and online disinformation.\" (2018).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples", "authors": [ { "first": "Minhao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Jinfeng", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Pin-Yu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Cho-Jui", "middle": [], "last": "Hsieh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "3601--3608", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, and Cho-Jui Hsieh. \"Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples.\" In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 04, pp. 3601-3608. 2020.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Hotflip: White-box adversarial examples for Table 6: Average classification score variation at the point of an attack within an input. text classification", "authors": [ { "first": "Javid", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Anyi", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Lowd", "suffix": "" }, { "first": "Dejing", "middle": [], "last": "Dou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1712.06751" ] }, "num": null, "urls": [], "raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 
\"Hotflip: White-box adversarial examples for Table 6: Average classification score variation at the point of an attack within an input. text classification.\" arXiv preprint arXiv:1712.06751 (2017).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "On adversarial examples for character-level neural machine translation", "authors": [ { "first": "Javid", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Lowd", "suffix": "" }, { "first": "Dejing", "middle": [], "last": "Dou", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.09030" ] }, "num": null, "urls": [], "raw_text": "Javid Ebrahimi, Daniel Lowd, and Dejing Dou. \"On adversarial examples for character-level neural machine translation.\" arXiv preprint arXiv:1806.09030 (2018).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Measuring the reach of\" fake news\" and online disinformation in Europe", "authors": [ { "first": "Richard", "middle": [], "last": "Fletcher", "suffix": "" }, { "first": "Alessio", "middle": [], "last": "Cornia", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Rasmus Kleis", "middle": [], "last": "Nielsen", "suffix": "" } ], "year": 2018, "venue": "Australasian Policing", "volume": "10", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Fletcher, Alessio Cornia, Lucas Graves, and Rasmus Kleis Nielsen. \"Measuring the reach of\" fake news\" and online disinformation in Europe.\" Australasian Policing 10, no. 2 (2018).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Black-box generation of adversarial text sequences to evade deep learning classifiers", "authors": [ { "first": "Ji", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Lanchantin", "suffix": "" }, { "first": "Mary", "middle": [ "Lou" ], "last": "Soffa", "suffix": "" }, { "first": "Yanjun", "middle": [], "last": "Qi", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE Security and Privacy Workshops (SPW)", "volume": "", "issue": "", "pages": "50--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. \"Black-box generation of adversarial text sequences to evade deep learning classifiers.\" In 2018 IEEE Security and Privacy Workshops (SPW), pp. 50-56. IEEE, 2018.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "GLTR: Statistical detection and visualization of generated text", "authors": [ { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" }, { "first": "Hendrik", "middle": [], "last": "Strobelt", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.04043" ] }, "num": null, "urls": [], "raw_text": "Sebastian Gehrmann , Hendrik Strobelt, and Alexander M. Rush. \"GLTR: Statistical detection and visualization of generated text.\" arXiv preprint arXiv:1906.04043 (2019).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A GPT-3 Bot Posted Comments on Reddit for a Week and No One Noticed", "authors": [ { "first": "Will", "middle": [], "last": "Douglas Heaven", "suffix": "" } ], "year": 2020, "venue": "MIT TECHNOLOGY REVIEW (blog)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Will Douglas Heaven. 2020. 
\"A GPT-3 Bot Posted Comments on Reddit for a Week and No One Noticed.\" MIT TECHNOLOGY REVIEW (blog). October 8, 2020.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse?", "authors": [ { "first": "Georg", "middle": [], "last": "Heigold", "suffix": "" }, { "first": "G\u00fcnter", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.04441" ] }, "num": null, "urls": [], "raw_text": "Georg Heigold, G\u00fcnter Neumann, and Josef van Genabith. \"How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse?.\" arXiv preprint arXiv:1704.04441 (2017).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A College Kid's Fake, AI-Generated Blog Fooled Tens of Thousands. This Is How He Made It", "authors": [ { "first": "Karen", "middle": [], "last": "Hao", "suffix": "" } ], "year": 2020, "venue": "MIT TECHNOLOGY REVIEW (blog)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karen Hao. 2020. \"A College Kid's Fake, AI- Generated Blog Fooled Tens of Thousands. This Is How He Made It.\" MIT TECHNOLOGY REVIEW (blog). August 14, 2020.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Adversarial examples for evaluating reading comprehension systems", "authors": [ { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1707.07328" ] }, "num": null, "urls": [], "raw_text": "Robin Jia, and Percy Liang. \"Adversarial examples for evaluating reading comprehension systems.\" arXiv preprint arXiv:1707.07328 (2017).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Deep text classification can be fooled", "authors": [ { "first": "Bin", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Hongcheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Miaoqiang", "middle": [], "last": "Su", "suffix": "" }, { "first": "Pan", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Xirong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wenchang", "middle": [], "last": "Shi", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.08006" ] }, "num": null, "urls": [], "raw_text": "Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. \"Deep text classification can be fooled.\" arXiv preprint arXiv:1704.08006 (2017).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adversarial reprogramming of text classification neural networks", "authors": [ { "first": "Paarth", "middle": [], "last": "Neekhara", "suffix": "" }, { "first": "Shehzeen", "middle": [], "last": "Hussain", "suffix": "" }, { "first": "Shlomo", "middle": [], "last": "Dubnov", "suffix": "" }, { "first": "Farinaz", "middle": [], "last": "Koushanfar", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.01829" ] }, "num": null, "urls": [], "raw_text": "Paarth Neekhara, Shehzeen Hussain, Shlomo Dubnov, and Farinaz Koushanfar. 
\"Adversarial reprogramming of text classification neural networks.\" arXiv preprint arXiv:1809.01829 (2018).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. \"Improving language understanding by generative pre-training.\" (2018).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. \"Language models are unsupervised multitask learners.\" OpenAI blog 1, no. 8 (2019): 9.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.07909" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. \"Neural machine translation of rare words with subword units.\" arXiv preprint arXiv:1508.07909 (2015).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "When and How to Fool Explainable Models (and Humans) with Adversarial Examples", "authors": [ { "first": "Jon", "middle": [], "last": "Vadillo", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Santana", "suffix": "" }, { "first": "Jose", "middle": [ "A" ], "last": "Lozano", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2107.01943" ] }, "num": null, "urls": [], "raw_text": "Jon Vadillo, Roberto Santana, and Jose A. Lozano. 
\"When and How to Fool Explainable Models (and Humans) with Adversarial Examples.\" arXiv preprint arXiv:2107.01943 (2021).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Universal adversarial triggers for attacking and analyzing NLP", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Shi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Kandpal", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.07125" ] }, "num": null, "urls": [], "raw_text": "Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. \"Universal adversarial triggers for attacking and analyzing NLP.\" arXiv preprint arXiv:1908.07125 (2019).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Attacking neural text detectors", "authors": [ { "first": "Max", "middle": [], "last": "Wolff", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.11768" ] }, "num": null, "urls": [], "raw_text": "Max Wolff, and Stuart Wolff. \"Attacking neural text detectors.\" arXiv preprint arXiv:2002.11768 (2020).", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The Current Status and progress of Adversarial Examples Attacks", "authors": [ { "first": "Chaoran", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Xiaobin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zhengyuan", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2021, "venue": "2021 International Conference on Communications, Information System and Computer Engineering (CISCE)", "volume": "", "issue": "", "pages": "707--711", "other_ids": { "DOI": [ "10.1109/CISCE52179.2021.9445917" ] }, "num": null, "urls": [], "raw_text": "Chaoran Yuan, Xiaobin Liu and Zhengyuan Zhang, \"The Current Status and progress of Adversarial Examples Attacks.\" 2021 International Conference on Communications, Information System and Computer Engineering (CISCE), 2021, pp. 707-711, doi: 10.1109/CISCE52179.2021.9445917. (2021)", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "On the Lack of Robust Interpretability of Neural Text Classifiers", "authors": [ { "first": "Muhammad", "middle": [], "last": "Bilal Zafar", "suffix": "" }, { "first": "Michele", "middle": [], "last": "Donini", "suffix": "" }, { "first": "Dylan", "middle": [], "last": "Slack", "suffix": "" }, { "first": "C\u00e9dric", "middle": [], "last": "Archambeau", "suffix": "" }, { "first": "Sanjiv", "middle": [], "last": "Das", "suffix": "" }, { "first": "Krishnaram", "middle": [], "last": "Kenthapadi", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2106.04631" ] }, "num": null, "urls": [], "raw_text": "Muhammad Bilal Zafar, Michele Donini, Dylan Slack, C\u00e9dric Archambeau, Sanjiv Das, and Krishnaram Kenthapadi. 
\"On the Lack of Robust Interpretability of Neural Text Classifiers.\" arXiv preprint arXiv:2106.04631 (2021).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Defending against neural fake news", "authors": [ { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Franziska", "middle": [], "last": "Roesner", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.12616" ] }, "num": null, "urls": [], "raw_text": "Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. \"Defending against neural fake news.\" arXiv preprint arXiv:1905.12616 (2019).", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The disconcerting potential of online disinformation: Persuasive effects of astroturfing comments and three strategies for inoculation against them", "authors": [ { "first": "Thomas", "middle": [], "last": "Zerback", "suffix": "" }, { "first": "Florian", "middle": [], "last": "T\u00f6pfl", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Kn\u00f6pfle", "suffix": "" } ], "year": 2021, "venue": "New Media & Society", "volume": "23", "issue": "5", "pages": "1080--1098", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Zerback, Florian T\u00f6pfl, and Maria Kn\u00f6pfle. \"The disconcerting potential of online disinformation: Persuasive effects of astroturfing comments and three strategies for inoculation against them.\" New Media & Society 23, no. 5 (2021): 1080-1098.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Adversarial attacks on deeplearning models in natural language processing: A survey", "authors": [ { "first": "Wei", "middle": [ "Emma" ], "last": "Zhang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Quan", "suffix": "" }, { "first": "Ahoud", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Chenliang", "middle": [], "last": "Alhazmi", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "ACM Transactions on Intelligent Systems and Technology", "volume": "11", "issue": "3", "pages": "1--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Emma Zhang, Quan Z. Sheng, Ahoud Alhazmi, and Chenliang Li. \"Adversarial attacks on deep- learning models in natural language processing: A survey.\" ACM Transactions on Intelligent Systems and Technology (TIST) 11, no. 
3 (2020): 1-41.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Comparison of cumulative classification scores between five Human articles.", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "Comparison of cumulative classification scores between five Machine articles.", "num": null, "type_str": "figure", "uris": null }, "FIGREF3": { "text": "Cumulative classification scores of misclassified altered Machine article after the U/L Flip attack.", "num": null, "type_str": "figure", "uris": null }, "FIGREF4": { "text": "Cumulative classification scores of correctly classified altered Machine article after the U/L Flip attack.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "html": null, "text": "Confusion Matrix of 12,000 articles classified by Grover Mega discriminator. True Positives (TP). False Positives (FP). False Negatives (FN). True Negatives (TN).", "num": null, "content": "", "type_str": "table" }, "TABREF2": { "html": null, "text": "Adversarial attacks and their respective change on an article. *The word 'Fine' in the homoglyph example contains Cyrillic 'e' ~ Unicode: U+x0435 compared to the regular Latin 'e' ~ Unicode: U+0065.", "num": null, "content": "
", "type_str": "table" }, "TABREF4": { "html": null, "text": "", "num": null, "content": "
: Statistics of affected words from all
misclassified inputs. POS is the part-of-speech tag for
that respective word obtained from NLTK 5 . IN ~
Preposition, DT ~ Determiner, TO ~ To, CC ~
Coordinating Conjunction. Note we only take the top
10 most occurring words within the misclassified
subset.
", "type_str": "table" }, "TABREF5": { "html": null, "text": "", "num": null, "content": "
OriginalVector IDsAltered
A33A
Romanian34345Romanian
10497hosp
hospital4437 1027It
283al
will482will
face1987face
a258a
fine3735fine
for330for
: An original encoding sequence compared to
the same encoded sequence after a single character
alteration.
", "type_str": "table" } } } }