{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:43:52.477049Z"
},
"title": "TextAttack: Lessons learned in designing Python frameworks for NLP",
"authors": [
{
"first": "John",
"middle": [
"X"
],
"last": "Morris",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Virginia",
"location": {}
},
"email": ""
},
{
"first": "Jin",
"middle": [
"Yong"
],
"last": "Yoo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Virginia",
"location": {}
},
"email": ""
},
{
"first": "Yanjun",
"middle": [],
"last": "Qi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Virginia",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "TextAttack is an open-source Python toolkit for adversarial attacks, adversarial training, and data augmentation in NLP. TextAttack unites 15+ papers from the NLP adversarial attack literature into a single framework, with many components reused across attacks. This framework allows both researchers and developers to test and study the weaknesses of their NLP models. To build such an open-source NLP toolkit requires solving some common problems: How do we enable users to supply models from different deep learning frameworks? How can we build tools to support as many different datasets as possible? We share our insights into developing a well-written, well-documented NLP Python framework in hope that they can aid future development of similar packages.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "TextAttack is an open-source Python toolkit for adversarial attacks, adversarial training, and data augmentation in NLP. TextAttack unites 15+ papers from the NLP adversarial attack literature into a single framework, with many components reused across attacks. This framework allows both researchers and developers to test and study the weaknesses of their NLP models. To build such an open-source NLP toolkit requires solving some common problems: How do we enable users to supply models from different deep learning frameworks? How can we build tools to support as many different datasets as possible? We share our insights into developing a well-written, well-documented NLP Python framework in hope that they can aid future development of similar packages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Deep neural network (DNN) models have seen dominant use in NLP tasks such text classification, natural language inference, machine translation, and question answering. However, despite their state-of-the-art performance, NLP DNNs are still vulnerable to adversarial attacks . As a result, there have been growing efforts to develop tools that can help researchers and developers better understand the capability of their NLP models. Both Wallace et al. (2019) and Tenney et al. (2020) introduced web-based visual interactive tools that enable users to see model's local explanations. Ribeiro et al. (2020) introduced a behavioral testing framework that runs a suite of tests to sanity check NLP models.",
"cite_spans": [
{
"start": 438,
"end": 459,
"text": "Wallace et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 464,
"end": 484,
"text": "Tenney et al. (2020)",
"ref_id": "BIBREF13"
},
{
"start": 584,
"end": 605,
"text": "Ribeiro et al. (2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the challenges for building such tools is that the tool should be flexible enough to work with many different deep learning frameworks (e.g. PyTorch, Tensorflow, Scikit-learn). Also, the tool CamemBERT and its tokenizer are initialized using HuggingFace transformers (Wolf et al., 2019) and wrapped in TextAttack model wrappers. Adversarial attack is PWWS modified to use WordNet in French (Sagot and Fiser, 2008) instead of English. TextAttack's flexible API makes these customizations possible in just a few lines of code.",
"cite_spans": [
{
"start": 274,
"end": 293,
"text": "(Wolf et al., 2019)",
"ref_id": null
},
{
"start": 397,
"end": 420,
"text": "(Sagot and Fiser, 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "should be able to work with datasets from various sources and in various formats. Lastly, the tools needs to be compatible with different hardware setups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We developed TextAttack, an open-source Python framework for adversarial attacks, adversarial training, and data augmentation. Our modular and extendable design allows us to reuse many components to offer 15+ different adversarial attack methods proposed by literature. Our modelagnostic and dataset-agnostic design allows users to easily run adversarial attacks against their own models built using any deep learning framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes some lessons learned along the path to creating TextAttack. Figure 1 shows our API in action. Our advice is tailored towards researchers developing NLP libraries in Python that support a variety of models and datasets, and use them for downstream applications.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 89,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We provide the following broad advice to help other future developers create user-friendly NLP libraries in Python:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. To become model-agnostic, implement a model wrapper class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. To become data-agnostic, take dataset inputs as (input, output) pairs, where each model input is represented as an OrderedDict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Do not plan for inputs (tensors, lists, etc.) to be a certain size or shape unless explicitly necessary.",
"cite_spans": [
{
"start": 26,
"end": 48,
"text": "(tensors, lists, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. Centralize common text operations, like parsing and string-level operations, in one class. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are growing number of deep learning frameworks and different researchers and groups have preferences about which frameworks to use for different tasks. Unless the library relates to model training or development (and sometimes then), it is possible to build a library that supports deep learning models from any framework. TextAttack supports both black-box and whitebox attacks on NLP models. Black-box attacks can only access the model for inference. In essence, the attack sends lists of text to the model and receives predictions. Model predictions come as lists of floats (for classification), strings, or dictionaries. No other information about the model is required. From the start, we wanted TextAttack to work on models from any framework, without too much headache.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model agnosticism",
"sec_num": "2"
},
{
"text": "Our original approach was to take a model and tokenizer as input to each attack and wrangle data into the correct format behind the scenes. This involved a complex series of decisions based by checking the format of the dataset, testing model and tokenizer superclasses, and handling errors as they arose. In the end, it worked: based on the model, tokenizer, and dataset, as well as based on errors raised by passing different data formats to the model, we could perform inference on PyTorch and TensorFlow models. It was ugly, but it worked. This approach did not scale as there were many edge cases. For example, some TensorFlow Hub models were designed to take strings as predictions, and did not have a tokenizer at all. Some Scikit-learn models took a dataframe as input. We supported both these use cases, but edges cases requiring complex workarounds kept popping up, with no clear end in sight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model agnosticism",
"sec_num": "2"
},
{
"text": "Our long-term solution was to abstract away the tokenizer and require a new model wrapper class for each model. The idea of model wrappers is that each model is wrapped in a model wrapper that implements a single function, call , which takes a list of text inputs and returns a list of predictions. We designed TextAttack to interact exclusively with the model wrapper-not directly with the model, or the tokenizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better approach: model wrappers",
"sec_num": "2.2"
},
{
"text": "Model wrappers allow each model to handle its own internals: including tokenization and batch size. TextAttack does not know or care about how information is tokenized before it's sent to the model. TextAttack sends the model a list of strings and receives a list, numpy.ndarray, or torch.Tensor of predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better approach: model wrappers",
"sec_num": "2.2"
},
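{
"text": "A minimal sketch of this pattern (the class name, batching logic, and HuggingFace-style tokenizer and .logits calls are illustrative assumptions, not TextAttack's actual base class):\n\nimport torch\n\nclass PyTorchModelWrapper:\n    # Adapts a model and its tokenizer to one interface: a callable that\n    # maps a list of strings to a tensor of predictions.\n    def __init__(self, model, tokenizer, batch_size=32):\n        self.model = model\n        self.tokenizer = tokenizer\n        self.batch_size = batch_size\n\n    def __call__(self, text_list):\n        self.model.eval()\n        outputs = []\n        with torch.no_grad():\n            for i in range(0, len(text_list), self.batch_size):\n                batch = text_list[i:i + self.batch_size]\n                # Assumes a HuggingFace-style tokenizer and model output.\n                inputs = self.tokenizer(batch, padding=True, truncation=True, return_tensors=\"pt\")\n                outputs.append(self.model(**inputs).logits)\n        return torch.cat(outputs, dim=0)\n\nBecause TextAttack interacts only with this callable, supporting a TensorFlow or Scikit-learn model requires only a different wrapper body.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better approach: model wrappers",
"sec_num": "2.2"
},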
{
"text": "In this way, TextAttack becomes totally modelagnostic: any user can implement a model wrapper to enable compatibility for a new model or framework. To make the process easier, TextAttack provides model wrappers for common frameworks and patterns. Currently, TextAttack provides model wrappers and example for models implemented with PyTorch (Paszke et al., 2019) , Hug-gingFace transformers (Wolf et al., 2019) , Tensor-Flow (Abadi et al., 2016) , Scikit-learn (Pedregosa et al., 2011) , and AllenNLP (Gardner et al., 2018) .",
"cite_spans": [
{
"start": 341,
"end": 362,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 391,
"end": 410,
"text": "(Wolf et al., 2019)",
"ref_id": null
},
{
"start": 425,
"end": 445,
"text": "(Abadi et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 461,
"end": 485,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF9"
},
{
"start": 501,
"end": 523,
"text": "(Gardner et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Better approach: model wrappers",
"sec_num": "2.2"
},
{
"text": "Another goal of TextAttack was to be able to run the same attack on any dataset. This has obvious benefits: two attacks that report results on different datasets can easily be compared with TextAttack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data agnosticism",
"sec_num": "3"
},
{
"text": "We rely on other libraries for providing default datasets. We provide dataset wrappers for loading datasets from these external libraries. We also allow users to provide their own datasets-via CSV files or Python scripts that load datasets. In essence, each dataset is a list of (input, output) pairs. Each text input is a string (for single-input tasks) or an OrderedDict (for tasks that require more complex input formats).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text inputs as OrderedDict objects",
"sec_num": "3.1"
},
{
"text": "Each input is an OrderedDict for two reasons: (i) to maintain column labels for display purposes and to make column-specific logic possible and (ii) to maintain ordering so that inputs can be provided to the model in the proper order. An individual text input to the model is a tuple of strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text inputs as OrderedDict objects",
"sec_num": "3.1"
},
{
"text": "To create these OrderedDict objects from dictionaries loaded from popular dataset libraries, we maintain a tuple of input columns and a string representing the output column. Then, objects from any dataset can be mapped to a data pair for TextAttack: the input is an OrderedDict created from taking the input values in order of the input columns, and an output is the value corresponding to the dataset's output column.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text inputs as OrderedDict objects",
"sec_num": "3.1"
},
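{
"text": "A sketch of this mapping, with hypothetical column names for an entailment dataset:\n\nfrom collections import OrderedDict\n\n# Hypothetical column configuration.\ninput_columns = (\"premise\", \"hypothesis\")\noutput_column = \"label\"\n\ndef to_textattack_pair(example):\n    # Build the input OrderedDict in the order of input_columns and pull\n    # the output from the designated output column.\n    model_input = OrderedDict((col, example[col]) for col in input_columns)\n    return model_input, example[output_column]\n\nexample = {\"premise\": \"A man inspects a uniform.\", \"hypothesis\": \"The man is sleeping.\", \"label\": 2}\npair = to_textattack_pair(example)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text inputs as OrderedDict objects",
"sec_num": "3.1"
},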
{
"text": "With the proper input and ouput columns and a corresponding model, adversarial attacks can be run on any dataset on any model. Models may have different output formats. For example, a sentiment classifier would produces a list of the probabilities of each class, while a sequence-to-sequence models produce a text output. Task-specific subclasses of the TextAttack GoalFunction class allow adversarial attack goal functions to be defined at a high level, such that the same goal function can be used for any model with the same output type. For example, the MinimizeBleuScore goal function attempts to minimize the BLEU score (Papineni et al., 2002) between the correct output and the output the model produces for a given perturbation. This goal function only assumes that the model output a prediction as a string. Given this design pattern, the MinimizeBleuScore goal function can be applied to attack any sequence-tosequence model. Similar goal functions can be designed for other output formats, like classification models or sentence taggers.",
"cite_spans": [
{
"start": 626,
"end": 649,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model output flexibility with GoalFunction",
"sec_num": "4"
},
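{
"text": "A sketch of this kind of goal function, using NLTK's sentence-level BLEU as the scorer; the class and method names are illustrative, not TextAttack's exact interface:\n\nfrom nltk.translate.bleu_score import sentence_bleu\n\nclass MinimizeBleuScore:\n    # Succeeds when the model's string output drifts far enough, in BLEU,\n    # from the original correct output.\n    def __init__(self, reference_output, threshold=0.3):\n        self.reference = [reference_output.split()]\n        self.threshold = threshold\n\n    def get_score(self, model_output):\n        # Assumes the model outputs its prediction as a string.\n        return 1.0 - sentence_bleu(self.reference, model_output.split())\n\n    def is_goal_complete(self, model_output):\n        return sentence_bleu(self.reference, model_output.split()) <= self.threshold",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model output flexibility with GoalFunction",
"sec_num": "4"
},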
{
"text": "Across TextAttack modules, some functionality is required over and over again. Many transformations want to split text inputs into a list of words. Many constraints require part-of-speech tagging. We want to avoid repeating code in too many places, and also to set a standard as to which tokenization, part-of-speech tagger, etc. is used. Therefore, with the exception of models (which take string inputs), TextAttack modules operate on AttackedText objects -not vanilla Python strings. The AttackedText contains string functionality that performs word replacement, prepares text to input to the model, prints inputs along with their column names, and manages attack-specific context attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Common functions for text inputs with AttackedText",
"sec_num": "5"
},
{
"text": "It is relatively common for NLP libraries to provide some base class that provides additional functionality to what are essentially enhanced string objects. For example, flair (Akbik et al., 2018) performs text-level operations on a Sentence class. TextAttack follows a similar strategy and stores each text input as an AttackedText object.",
"cite_spans": [
{
"start": 176,
"end": 196,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Common functions for text inputs with AttackedText",
"sec_num": "5"
},
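{
"text": "A condensed sketch of the idea (the real AttackedText has a much larger interface):\n\nclass AttackedText:\n    # One shared home for word splitting and replacement, so every\n    # transformation and constraint agrees on tokenization.\n    def __init__(self, text):\n        self.text = text\n\n    @property\n    def words(self):\n        return self.text.split()\n\n    def replace_word_at_index(self, index, new_word):\n        words = self.words\n        words[index] = new_word\n        return AttackedText(\" \".join(words))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Common functions for text inputs with AttackedText",
"sec_num": "5"
},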
{
"text": "A single input may consist of multiple strings. Tex-tAttack transformations apply string-level transformations to inputs -for example, reordering words, or replacing a single word with its synonym. Most transformations are defined in the attack papers to operate on a single string-input. For multi-input classification tasks, adversarial attacks often just choose a single input on which to operate, like the hypothesis in the case of entailment (Jin et al., 2019) .",
"cite_spans": [
{
"start": 447,
"end": 465,
"text": "(Jin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Everything is a single string",
"sec_num": "5.1"
},
{
"text": "TextAttack enables such single-string transformations and constraints without restricting itself to single-input tasks. Transformations and constraints assume the input is a single string. The AttackedText contains a property (AttackedText.text) that joins all text inputs with a space in between. This text value is passed to each transformation & constraint, and then broken up again by column.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Everything is a single string",
"sec_num": "5.1"
},
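{
"text": "Extending the sketch above to multiple columns, under the assumption (true of word-substitution attacks) that perturbations preserve each column's word count:\n\nfrom collections import OrderedDict\n\nclass MultiColumnAttackedText:\n    # Inputs are stored per column, but transformations and constraints\n    # see one joined string.\n    def __init__(self, columns):\n        self.columns = OrderedDict(columns)\n\n    @property\n    def text(self):\n        # The single string passed to transformations and constraints.\n        return \" \".join(self.columns.values())\n\n    def with_text(self, new_text):\n        # Break a perturbed string back up by column, using each original\n        # column's word count.\n        words = new_text.split()\n        new_columns, i = OrderedDict(), 0\n        for name, value in self.columns.items():\n            n = len(value.split())\n            new_columns[name] = \" \".join(words[i:i + n])\n            i += n\n        return MultiColumnAttackedText(new_columns)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Everything is a single string",
"sec_num": "5.1"
},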
{
"text": "Model inference memoization Adversarial attacks in NLP spend most of their time on the GPU. For each text input, the attack must obtain the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Performance",
"sec_num": "6"
},
{
"text": "Queries Cache hits Alzantot et al. (2018) 1029 736 Zang et al. (2020) 3745 3080 Table 1 : \"Queries\" stands for average number of queries to victim model to attack one sample, while \"cache hits\" represents the average number of times a query has resulted in a hit to the model output cache. Each cache hit saves a query to the model, so more cache hits indicates a higher performance boost due to caching.",
"cite_spans": [
{
"start": 19,
"end": 41,
"text": "Alzantot et al. (2018)",
"ref_id": "BIBREF2"
},
{
"start": 51,
"end": 69,
"text": "Zang et al. (2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 80,
"end": 87,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attack",
"sec_num": null
},
{
"text": "model's output, as well as the output of any models used to apply certain linguistic constraints, like a sentence encoder to ensure semantic similarity between adversarial example and the original text. Upon further examination, many of these model inferences appear over and over again during the attack process. For example, the attack needs to compute the model's score for an input that has already been seen. Some population-based stochastic search methods, like the genetic algorithm of Alzantot et al. (2018) , may revisit the same input multiple times during the search process, which increases the number of redundant computations. TextAttack caches model outputs to avoid redundant computations. This is done using a leastrecently-used (LRU) function cache. Since outputs are generally small, TextAttack can maintain a very large LRU cache for each purpose without using an excessive amount of memory. In some cases, this high-level caching can cause a significant performance increase. We experimented with attacking 100 samples for BERT-base model (Devlin et al., 2018 ) trained on SST-2 dataset (Socher et al., 2013) using methods proposed by Alzantot et al. (2018) and Zang et al. (2020) . Table 1 shows that in both cases, significant number of queries to the victim model result in hits to the model output cache, helping us save time by avoiding unnecessary computations.",
"cite_spans": [
{
"start": 493,
"end": 515,
"text": "Alzantot et al. (2018)",
"ref_id": "BIBREF2"
},
{
"start": 1060,
"end": 1080,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF3"
},
{
"start": 1108,
"end": 1129,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 1156,
"end": 1178,
"text": "Alzantot et al. (2018)",
"ref_id": "BIBREF2"
},
{
"start": 1183,
"end": 1201,
"text": "Zang et al. (2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 1204,
"end": 1211,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attack",
"sec_num": null
},
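{
"text": "A minimal sketch of this caching strategy built on Python's functools.lru_cache; per-string caching is shown for simplicity, while a real implementation would also batch uncached inputs:\n\nimport functools\n\ndef with_output_cache(model_wrapper, maxsize=2**18):\n    # Memoize per-string model outputs. Outputs are small (a few floats\n    # or a short string), so a very large LRU cache stays cheap.\n    @functools.lru_cache(maxsize=maxsize)\n    def predict_one(text):\n        return model_wrapper([text])[0]\n\n    def predict(text_list):\n        return [predict_one(t) for t in text_list]\n\n    return predict\n\nBecause the cache wraps the model-wrapper callable, the attack code itself stays unchanged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Performance",
"sec_num": "6"
},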
{
"text": "Multiprocessing strategy Efficient use of GPUs is critical for any deep learning job. If a GPU is available, TextAttack attacks typically use it for victim model inference and for inference on any models required for constraints. These inference times are the main bottleneck for many attacks. On systems with multiple GPUs, running attacks on samples sequentially results in use of only one GPU. We provide multiprocessing feature with the --parallel flag to instead runs attacks in paral-lel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attack",
"sec_num": null
},
{
"text": "TextAttack parallel mode works by starting a new attack worker process for each GPU. Each worker takes dataset samples off of an in-queue, runs an attack on a single sample, puts the attack result on an out-queue, and repeats, until the inqueue is empty. An additional non-GPU worker works to print attack results as they appear on the out queue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attack",
"sec_num": null
},
{
"text": "This multiprocessing paradigm is quite simple, and works nicely with various current deep learning packages. Other libraries that face similar single-GPU-intensive workloads could employ this pattern to parallelize many GPUs. In the future, the additional help of a distributed computing interface like MPI could allow an attack to be run across multiple machines as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attack",
"sec_num": null
},
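{
"text": "A skeleton of this worker pattern; attack_one is a placeholder for the per-sample attack, and sentinels stand in for the in-queue-empty check:\n\nimport torch\nimport torch.multiprocessing as mp\n\ndef attack_worker(gpu_id, in_queue, out_queue):\n    # Pin this worker's inference to one GPU and drain the in-queue.\n    device = torch.device(f\"cuda:{gpu_id}\")\n    while True:\n        sample = in_queue.get()\n        if sample is None:  # sentinel: the queue is empty\n            break\n        out_queue.put(attack_one(sample, device))\n\ndef run_parallel(dataset, num_gpus):\n    in_queue, out_queue = mp.Queue(), mp.Queue()\n    for sample in dataset:\n        in_queue.put(sample)\n    for _ in range(num_gpus):\n        in_queue.put(None)  # one sentinel per worker\n    workers = [mp.Process(target=attack_worker, args=(i, in_queue, out_queue)) for i in range(num_gpus)]\n    for w in workers:\n        w.start()\n    for _ in range(len(dataset)):\n        print(out_queue.get())  # the printing role of the extra non-GPU worker\n    for w in workers:\n        w.join()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Performance",
"sec_num": "6"
},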
{
"text": "Operating system compatibility Different operating systems follow different filesystem conventions. Specifying full file paths explicitly is almost never a good idea. Instead, prefer using absolute paths. TextAttack uses absolute paths and combines filenames using Python's os.path.join utility function. This enables file manipulation on any system (not just Unix).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attack",
"sec_num": null
},
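{
"text": "For example (the cache location below is hypothetical):\n\nimport os\n\n# Built from components rather than hard-coding a Unix-style\n# \"~/.cache/textattack/...\" string.\ncache_dir = os.path.join(os.path.expanduser(\"~\"), \".cache\", \"textattack\")\nmodel_path = os.path.join(cache_dir, \"models\", \"lstm-sst2.bin\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enabling use across different operating systems and devices",
"sec_num": "7"
},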
{
"text": "GPU Hubris Current deep learning frameworks allow explicit device placement of tensors -choosing whether a given tensor is on CPU or a specific GPU. It is easy to design specifically for your system: putting each tensor explicitly on the GPU where it belongs. However, this hurts cross-system compatibility: the code is now only able to run on systems with GPUs. TextAttack checks to see if CUDA is available before putting tensors on the GPU, and puts them on the CPU otherwise. This allows the library to run on machines without GPUs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attack",
"sec_num": null
},
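{
"text": "A minimal example of this check in PyTorch:\n\nimport torch\n\n# Fall back to CPU when no GPU is present, instead of assuming CUDA.\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nlogits = torch.zeros(8, 2).to(device)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enabling use across different operating systems and devices",
"sec_num": "7"
},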
{
"text": "Writing an excellent, well-documented library that is easy to install and run is a good way to get researchers interested in a research topic as it lowers the barriers to entry. Moreover, a well-structured, extendable design empowers newcomers to make their contributions to the field. We hope that our lessons from developing TextAttack will help others create user-friendly open-source NLP libraries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "Thanks to all the TextAttack contributors who helped us solve these tough problems-including Eli Lifland, Jake Grigsby, Di Jin, Kevin Ivey, Alan Zheng, and others. Thanks also to Robin Jia and Paul Michel who provided invaluable feedback toward the development and design of TextAttack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Ten-sorFlow: A system for large-scale machine learning",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barham",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Ghemawat",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Irving",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Manjunath",
"middle": [],
"last": "Kudlur",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Levenberg",
"suffix": ""
},
{
"first": "Rajat",
"middle": [],
"last": "Monga",
"suffix": ""
},
{
"first": "Sherry",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"G"
],
"last": "Murray",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tucker",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Vasudevan",
"suffix": ""
},
{
"first": "Pete",
"middle": [],
"last": "Warden",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wicke",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. Ten- sorFlow: A system for large-scale machine learning.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING 2018, 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING 2018, 27th International Con- ference on Computational Linguistics, pages 1638- 1649.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generating natural language adversarial examples",
"authors": [
{
"first": "Moustafa",
"middle": [],
"last": "Alzantot",
"suffix": ""
},
{
"first": "Yash",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Bo-Jhang",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Mani",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07998"
]
},
"num": null,
"urls": [],
"raw_text": "Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial ex- amples. arXiv preprint arXiv:1804.07998.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "AllenNLP: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language pro- cessing platform.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Is bert really robust? natural language attack on text classification and entailment",
"authors": [
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhijing",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Joey",
"middle": [
"Tianyi"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11932"
]
},
"num": null,
"urls": [],
"raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural lan- guage attack on text classification and entailment. arXiv preprint arXiv:1907. 11932.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "\u00c9ric Villemonte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": 2019,
"venue": "CamemBERT: a tasty french language model",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Ortiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric Ville- monte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2019. CamemBERT: a tasty french language model.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "PyTorch: An imperative style, High-Performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8026--8037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, High-Performance deep learn- ing library. In Advances in Neural Information Pro- cessing Systems 32, pages 8026-8037. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "Duchesnay",
"middle": [],
"last": "And\u00e9douard",
"suffix": ""
}
],
"year": 2011,
"venue": "J. Mach. Learn. Res",
"volume": "12",
"issue": "85",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexan- dre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and\u00c9douard Duchesnay. 2011. Scikit-learn: Machine learning in python. J. Mach. Learn. Res., 12(85):2825-2830.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList",
"authors": [
{
"first": "Tongshuang",
"middle": [],
"last": "Marco Tulio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4902--4912",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.442"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Building a free french wordnet from multilingual resources",
"authors": [
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Darja",
"middle": [],
"last": "Fiser",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beno\u00eet Sagot and Darja Fiser. 2008. Building a free french wordnet from multilingual resources.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The language interpretability tool: Extensible, interactive visualizations and analysis for nlp models",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Wexler",
"suffix": ""
},
{
"first": "Jasmijn",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Coenen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Mahima",
"middle": [],
"last": "Pushkarna",
"suffix": ""
},
{
"first": "Carey",
"middle": [],
"last": "Radebaugh",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Reif",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, and Ann Yuan. 2020. The language in- terpretability tool: Extensible, interactive visualiza- tions and analysis for nlp models.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Allennlp interpret: A framework for explaining predictions of nlp models",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Tuyls",
"suffix": ""
},
{
"first": "Junlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subra- manian, Matt Gardner, and Sameer Singh. 2019. Al- lennlp interpret: A framework for explaining predic- tions of nlp models.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Word-level textual adversarial attacking as combinatorial optimization",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Fanchao",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Chenghao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6066--6080",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combina- torial optimization. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 6066-6080, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Adversarial attacks on deep-learning models in natural language processing: A survey",
"authors": [
{
"first": "Wei",
"middle": [
"Emma"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Quan",
"suffix": ""
},
{
"first": "Ahoud",
"middle": [],
"last": "Sheng",
"suffix": ""
},
{
"first": "Chenliang",
"middle": [],
"last": "Alhazmi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Trans. Intell. Syst. Technol",
"volume": "11",
"issue": "3",
"pages": "1--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Emma Zhang, Quan Z Sheng, Ahoud Alhazmi, and Chenliang Li. 2020. Adversarial attacks on deep-learning models in natural language process- ing: A survey. ACM Trans. Intell. Syst. Technol., 11(3):1-41.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Example usage of the TextAttack API."
}
}
}
}