|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:35:23.561350Z" |
|
}, |
|
"title": "An Interactive Exploratory Tool for the Task of Hate Speech Detection", |
|
"authors": [ |
|
{ |
|
"first": "Angelina", |
|
"middle": [], |
|
"last": "Mcmillan-Major", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Washington", |
|
"location": { |
|
"settlement": "Seattle", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Amandalynne", |
|
"middle": [], |
|
"last": "Paullada", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "paullada@uw.edu" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "yacine@huggingface.co" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "With the growth of Automatic Content Moderation (ACM) on widely used social media platforms, transparency into the design of moderation technology and policy is necessary for online communities to advocate for themselves when harms occur. In this work, we describe a suite of interactive modules to support the exploration of various aspects of this technology, and particularly of those components that rely on English models and datasets for hate speech detection, a subtask within ACM. We intend for this demo to support the various stakeholders of ACM in investigating the definitions and decisions that underpin current technologies such that those with technical knowledge and those with contextual knowledge may both better understand existing systems.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "With the growth of Automatic Content Moderation (ACM) on widely used social media platforms, transparency into the design of moderation technology and policy is necessary for online communities to advocate for themselves when harms occur. In this work, we describe a suite of interactive modules to support the exploration of various aspects of this technology, and particularly of those components that rely on English models and datasets for hate speech detection, a subtask within ACM. We intend for this demo to support the various stakeholders of ACM in investigating the definitions and decisions that underpin current technologies such that those with technical knowledge and those with contextual knowledge may both better understand existing systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The field of natural language processing (NLP) is organized into tasks, definitions of which minimally include the combination of a modeling paradigm and benchmark datasets (Vu et al. (2020) ; Reuver et al. (2021) ; Schlangen (2021) ; see also BIG-bench 1 ). This organization, however, is not necessarily apparent to those outside of NLP research. Making these established tasks outwardly visible is one step towards the recent push for accessible documentation of NLP (Bender and Friedman, 2018; Holland et al., 2018; Mitchell et al., 2019; Arnold et al., 2019; McMillan-Major et al., 2021; Gebru et al., 2021) and promoting the importance of careful data treatment (Paullada et al., 2021; Sambasivan et al., 2021b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 190, |
|
"text": "(Vu et al. (2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 213, |
|
"text": "Reuver et al. (2021)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 232, |
|
"text": "Schlangen (2021)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 497, |
|
"text": "(Bender and Friedman, 2018;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 519, |
|
"text": "Holland et al., 2018;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 542, |
|
"text": "Mitchell et al., 2019;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 563, |
|
"text": "Arnold et al., 2019;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 592, |
|
"text": "McMillan-Major et al., 2021;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 593, |
|
"end": 612, |
|
"text": "Gebru et al., 2021)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 668, |
|
"end": 691, |
|
"text": "(Paullada et al., 2021;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 692, |
|
"end": 717, |
|
"text": "Sambasivan et al., 2021b)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One task that has attracted sustained interest in NLP is the problem of content moderation. While many manual and hybrid paradigms for content moderation exist (Pershan, 2020) , several major platforms have invested heavily in automated methods that they see as necessary to support scaling 1 https://github.com/google/BIG-bench/ up moderation to address their colossal content loads (Gillespie, 2020) . Automatic Content Moderation (ACM) includes strategies that range from keyword-or regular expression-based approaches, to hash-based content recognition, to data-driven machine learning models. These approaches employ different families of algorithms, resulting in various downstream effects and necessitating documentation and algorithmic accountability processes that address the needs of a variety of stakeholders.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 175, |
|
"text": "(Pershan, 2020)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 401, |
|
"text": "(Gillespie, 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Synchronizing research around consistent modeling paradigms and benchmark datasets is an ongoing problem for ACM (Fortuna et al., 2020; Madukwe et al., 2020) , with experts calling for more grounding in related areas in the social sciences, communication studies and psychology (Vidgen and Derczynski, 2020; Kiritchenko et al., 2021) . Without this grounding and without consideration for the contexts into which ACM is integrated, the technology intended to prevent harms ends up magnifying them, especially for vulnerable communities (Dias Oliva et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 135, |
|
"text": "(Fortuna et al., 2020;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 157, |
|
"text": "Madukwe et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 307, |
|
"text": "(Vidgen and Derczynski, 2020;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 333, |
|
"text": "Kiritchenko et al., 2021)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 561, |
|
"text": "(Dias Oliva et al., 2021)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The present paper proposes an interactive tool aimed at allowing a diverse audience to explore examples of NLP data and models used in data-driven ACM, focusing on the subtask of hate speech detection. Our tool outlines various aspects of the social and technical considerations for ACM, provides an overview of the data and modeling landscape for hate speech detection, and enables comparison of different resources and approaches to the task. Our goal is to understand the role of multidisciplinary education and documentation in promoting algorithmic transparency and contestability (Vaccaro et al., 2019) . We provide a brief overview of ACM as well as the interactions between its many stakeholders ( \u00a72) and describe related work in dataset and model exploration ( \u00a73). We then present our demo ( \u00a74), highlighting its constituent sections and describing our rationale for each. We conclude with a summary of limitations and future work ( \u00a75). Content moderation is the process by which online platforms manage which kinds of content, in the form of images, video, or text, that users are allowed to share. Policies for content moderation, which vary across platforms, are often guided by a combination of legal, commercial, and social pressures. Broadly, these policies tend to prohibit explicit sexual content, graphic depictions of violence, hate speech 2 , and harassment or trolling between platform users (Gillespie, 2018) . Platforms take a variety of actions to moderate content, including removal of the offending content, reducing the visibility of the content, adding a flag or warning, and/or suspending accounts that violate content guidelines. Moderation decisions can, however, lead to undesired reactions. For example, removing conspiracy theory content tends to reinforce conspiracy theory claims, and ousting hateful groups from larger platforms can result in these groups flocking to smaller platforms with fewer resources for moderation (Pershan, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 586, |
|
"end": 608, |
|
"text": "(Vaccaro et al., 2019)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1417, |
|
"end": 1434, |
|
"text": "(Gillespie, 2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1963, |
|
"end": 1978, |
|
"text": "(Pershan, 2020)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Conflicts in moderation decisions often arise due to the size and diversity of a platform's community members and a divergence in priorities between community members and platform managers. A report from the Brennan Center for Justice found that 'double standards' pervade in content moderation actions, and that inconsistently applied content policies overwhelmingly silence marginalized voices (D\u00edaz and Hecht-Felella) . For example, Facebook erroneously labeled hashtags referencing Al-Aqsa, a mosque in a predominately Palestinian neighborhood of Jerusalem, as pertaining to a terrorist organization, and was also found to censor deroga-tory speech against white people more frequently than slurs against Black, Jewish, and transgender people (Eidelman et al., 2021) . To address the often stark gap between model performance on intrinsic metrics and performance in real-world, user-facing scenarios for toxic content classifiers, Gordon et al. 2021propose an evaluation paradigm that takes into account inter-annotator disagreements on training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 396, |
|
"end": 420, |
|
"text": "(D\u00edaz and Hecht-Felella)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 770, |
|
"text": "(Eidelman et al., 2021)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Even when moderation rules are applied consistently, they may result in over-moderating communities that use terms that are deemed 'explicit' outside the community but are acceptable to the community members themselves, as often happens for LGBTQ communities online (Dias Oliva et al., 2021) . These kinds of harms show that content moderation algorithms must be developed with transparency, care for the context in which the algorithms will be integrated, and mechanisms for the community to contest moderation decisions. One approach to consulting diverse perspectives on 'toxic' content relies on jury learning, as in a model proposed by Gordon et al. (2022) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 291, |
|
"text": "(Dias Oliva et al., 2021)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 641, |
|
"end": 661, |
|
"text": "Gordon et al. (2022)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In addition to calling for more inclusion by various stakeholders in decision-making processes for each platform, Pershan (2020) advocates for the development of regional policies that consider the moderation styles of smaller platforms as well as larger ones. Regional policies are especially important as the large platforms, primarily located in the US, are ported outside the US with moderation policies that are ill-equipped to support local communities appropriately, for example in India where hate speech may also occur on the basis of caste (Sambasivan et al., 2021a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 550, |
|
"end": 576, |
|
"text": "(Sambasivan et al., 2021a)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Approaches to content moderation commonly involve a hybrid strategy that uses reports from users and algorithmic systems to identify content that may violate platform guidelines, and then relies on human review to determine a course of action (i.e., retain, obscure, or remove the content). This process exposes human moderators to high volumes of violent and hateful content (Roberts, 2014), motivating a push for enhanced automatic methods to alleviate the burden on human moderators. Automated content moderation can rely on analyses of the content itself using NLP or computer vision (CV), features of user dynamics, and hashing to match instances of pre-identified forbidden content. Within the realm of text-based ACM, approaches vary from wordlist-based approaches to data-driven models. When platforms opt not to build their own systems, Perspective API 3 is commonly used to flag various kinds of content for moderation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Forbidding hate speech on online platforms is seen as a way to prevent the proliferation of hateful discourse from leading to hate-driven violence offline 4 . Common datasets used for training and evaluating hate speech detectors can be found at https://hatespeechdata.com/. We refer readers to Kiritchenko et al. (2021) for a comprehensive overview of definitions, resources, and ethical challenges incurred in the development of hate speech detection technologies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 320, |
|
"text": "Kiritchenko et al. (2021)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A variety of methods and tools that enable dataset users to explore and familiarize themselves with the contents of the datasets have been proposed. For example, Know Your Data 5 , provided by Google's PAIR research group, aims to provide users with views of datasets that surface errors or issues with particular instances, systematic gaps in representation, or problematic content that requires human judgment to assess. This tool thus far has focused on image datasets. The Dataset Cartography method, proposed by Swayamdipta et al. (2020) , uses model training dynamics to create maps of dataset instances organized by difficulty or ambiguity, which can surface problematic instances. Recently, Xiao et al. (2022) released a tool for comparing datasets aimed at enabling dataset users to understand potential sources of bias in the data. While much previous work has focused on ex-ploratory tools for dataset users, our tool is meant to cater to an audience who will not necessarily be training machine learning models, but constitute a variety of impacted or interested stakeholders. Wright et al. (2021) tackle the problem of interrogating a toxicity detection model using a tool they call RECAST. They fine-tune a BERT-based Transformer model on the Jigsaw Kaggle dataset of toxic comments from Wikipedia and provide an online text-editing application that visually highlights words that the models detects as toxic, suggesting alternate phrases that may be less toxic using both word embeddings and language modeling predictions. They evaluate the tool using a text-editing task, presenting user study participants with comments drawn from both the Kaggle dataset and Twitter threads, and show that the users in their study are learning about the model behavior by editing toxic comments to be less toxic according to the model prediction scores.", |
|
"cite_spans": [ |
|
{ |
|
"start": 517, |
|
"end": 542, |
|
"text": "Swayamdipta et al. (2020)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work: Interactive Dataset and Model Exploration", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We aim to make the exploration tool as accessible and useful as possible to the many stakeholders involved in ACM. Particularly in light of the closed nature of many contemporary content moderation pipelines that impact people who use social media, our demo familiarizes these stakeholders with the general framework of how such systems might work behind the scenes. In order to conceptualize the breadth of uses that ACM stakeholders may have for such an exploratory tool, we considered the stakeholders and their goals detailed in Pershan (2020) using the framework developed by Suresh et al. (2021) . Rather than identifying stakeholders based on their roles, they propose mapping stakeholders based on the type of knowledge they hold and the context of that knowledge, such as technical, domain, and contextual knowledge.", |
|
"cite_spans": [ |
|
{ |
|
"start": 581, |
|
"end": 601, |
|
"text": "Suresh et al. (2021)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Demo Development and Structure", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In mapping out our envisioned stakeholders, we tried to consider how they might use the tool towards their goals. Policymakers, journalists and impacted communities may use the demo to understand where and how things go wrong in hate speech detection in order to advocate for changes to platform policies. Domain experts may use the tool to understand where their work is used in a pipeline, such as in label definitions, and envision potential locations in the pipeline where additional domain information could be useful. Students and current developers may use the tool to reflect upon their own design decisions in light of the historical and sociotechnical framing we provide for ACM and consider new possibilities for research development. Finally, we imagine that our demo may generally provide common ground for these and other stakeholders in order to facilitate more productive discussions on how to develop ACM technologies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Demo Development and Structure", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Additionally, in order to more fully understand the perspectives of stakeholders outside of the academic context, we discussed our demo and the state of the field of hate speech detection with several experts in the field, particularly those with experience deploying models in the industry context and working with non-technical stakeholders. Following these discussions, we built the interactive, openly available demo using Streamlit 6 , the first page of which is shown in Fig. 1 . We provide screenshots of the other modules in Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 477, |
|
"end": 483, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Demo Development and Structure", |
|
"sec_num": "4" |
|
}, |
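As a rough sketch of how a multi-module Streamlit demo such as the one described above can be structured (a minimal, hypothetical skeleton; the page names follow this paper's section titles, but the actual demo's code, page names, and widget choices are not specified here and may differ):

import streamlit as st

# Hypothetical page list mirroring the demo's modules (placeholder names).
PAGES = [
    "Welcome and Introduction",
    "Context of ACM",
    "Hate Speech Dataset Exploration",
    "Hate Speech Model Exploration",
    "Demo Feedback Questionnaire",
]

page = st.sidebar.radio("Module", PAGES)
st.title(page)

if page == "Hate Speech Model Exploration":
    text = st.text_area("Enter a test input")
    if st.button("Classify") and text:
        # Model inference would be called here; see the model exploration sketch later in this paper.
        st.write("Model predictions would be displayed here.")

Such a script would be launched with "streamlit run app.py"; each branch of the page selector can then render the corresponding module.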
|
{ |
|
"text": "The introduction to the demo is intended to provide common ground for the various stakeholders with key terms and the kinds of data that are subject to moderation. The key terms include hate speech and content moderation, for which we provide the following definitions to help build a shared understanding given the broad audience we identified:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S1. Welcome and Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Hate speech Any kind of communication in speech, writing, or behaviour that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor (United Nations, 2019).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S1. Welcome and Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Content moderation A collection of interventions used by online platforms to partially obscure, or remove entirely from user-facing view, content that is objectionable based on the company's values or community guidelines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S1. Welcome and Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Additionally, we provide a list of the datasets and models that we feature in the tool along with links to further documentation for each resource.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S1. Welcome and Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To contextualize automatic hate speech detection tools, we describe of the kinds of content that moderation is intended to target and how automatic methods are used to support manual approaches to content moderation, as discussed in \u00a72 and \u00a73.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S2. Context of ACM", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "6 https://streamlit.io/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S2. Context of ACM", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also illustrate the ongoing challenges in hate speech detection with links to platforms' content guidelines and press releases in addition to critical works in response to content moderation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S2. Context of ACM", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Meaningfully exploring datasets composed of up to hundreds of thousands of instances constitutes a signifcant difficulty. To address this challenge, we rely on hierarchical clustering to group similar examples at different levels of granularity, using SentenceBERT (Reimers and Gurevych, 2019) embeddings of the example text to evaluate closeness. For each cluster (including the top-level one corresponding to the full dataset), the text of a selection of examplars for that cluster may be viewed along with their labels, as well as the distribution of labels within the entire cluster. This allows users of our system to zoom in on specific regions, and gain insights into what sorts of examples are represented in a dataset and how different topics are labeled. Comparison across datasets also illustrates the different assumptions that are made at the time of dataset creation even within the same established task. For this demo, we pre-selected datasets constructed for hate speech detection in English. These include the FRENK Dataset of Socially Unacceptable Discourse in English (Ljube\u0161i\u0107 et al., 2019) , the Measuring Hate Speech dataset (Kennedy et al., 2020) , and the Twitter Sentiment Analysis dataset (Sharma, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 293, |
|
"text": "(Reimers and Gurevych, 2019)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1088, |
|
"end": 1111, |
|
"text": "(Ljube\u0161i\u0107 et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1148, |
|
"end": 1170, |
|
"text": "(Kennedy et al., 2020)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1216, |
|
"end": 1230, |
|
"text": "(Sharma, 2019)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S3. Hate Speech Dataset Exploration", |
|
"sec_num": null |
|
}, |
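The paper does not spell out the demo's implementation of this step, but a minimal sketch of the general recipe described above (SentenceBERT embeddings, agglomerative clustering, then exemplars and label counts per cluster) could look as follows; the encoder name, toy data, and cluster count are placeholder assumptions rather than the demo's actual configuration:

from collections import Counter
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

# Toy stand-ins for a dataset's text and label columns (placeholders).
texts = ["you people are scum", "have a great day", "I disagree with this policy",
         "they should all leave", "lovely weather today", "this argument is weak"]
labels = ["hateful", "not hateful", "not hateful", "hateful", "not hateful", "not hateful"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any SentenceBERT-style encoder
embeddings = encoder.encode(texts, normalize_embeddings=True)

# Agglomerative clustering yields a hierarchy; the number of clusters controls
# the level of granularity at which examples are grouped.
assignments = AgglomerativeClustering(n_clusters=2, metric="cosine",
                                      linkage="average").fit_predict(embeddings)

for cluster_id in sorted(set(assignments)):
    members = np.where(assignments == cluster_id)[0]
    centroid = embeddings[members].mean(axis=0)
    # Exemplars: the members closest to the cluster centroid.
    order = np.argsort(np.linalg.norm(embeddings[members] - centroid, axis=1))
    exemplars = [texts[i] for i in members[order][:3]]
    print(cluster_id, Counter(labels[i] for i in members), exemplars)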
|
{ |
|
"text": "In the model exploration section, we provide two ways of probing models. The first allows viewers to submit one or more test inputs to a single model. The results are then shown such that viewers may select a label and order the output scores for the inputs based on that label. The second module compares the same input sentence with two different models. The module then returns the label and score given by each model as well as a confidence graphs for each model. The confidence graphs show the model's accuracy, errors, and scores over hateful and non-hateful instances as well as out-of-domain content from the Hateful Memes and Open Subtitles datasets (Kiela et al., 2020; Tiedemann, 2016) . These graphs provide context for an individual instance of model behavior, showing whether the model is likely to be more or less confident when labeling an instance, regardless of the model's overall accuracy. Again, for the demo we pre-selected models including a RoBERTa model trained on the FRENK dataset (Ljube\u0161i\u0107 et al., 2019 ), a RoBERTa model trained on the TweetEval benchmark (Barbieri et al., 2020) , and a DeHateBERT model trained on Twitter and StormFront data (Aluru et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 659, |
|
"end": 679, |
|
"text": "(Kiela et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 680, |
|
"end": 696, |
|
"text": "Tiedemann, 2016)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1008, |
|
"end": 1030, |
|
"text": "(Ljube\u0161i\u0107 et al., 2019", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1085, |
|
"end": 1108, |
|
"text": "(Barbieri et al., 2020)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1173, |
|
"end": 1193, |
|
"text": "(Aluru et al., 2021)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S4. Hate Speech Model Exploration", |
|
"sec_num": null |
|
}, |
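As a rough illustration of the side-by-side comparison described above, one could load two hate speech classifiers through the Hugging Face pipeline API and score the same input with each; this is a sketch under assumptions, the model identifiers below are placeholders rather than the checkpoints used in the demo (those are listed with documentation links in the introduction module), and the confidence graphs would additionally require precomputed evaluation scores:

from transformers import pipeline

# Hypothetical model identifiers (placeholders, not the demo's exact checkpoints).
MODEL_A = "some-org/roberta-base-frenk-hate"
MODEL_B = "some-org/dehatebert-mono-english"

classifier_a = pipeline("text-classification", model=MODEL_A)
classifier_b = pipeline("text-classification", model=MODEL_B)

text = "Statements like '[IDENTITY] are scum' are deeply hurtful."
for name, classifier in [("Model A", classifier_a), ("Model B", classifier_b)]:
    prediction = classifier(text)[0]  # e.g. {'label': 'hate', 'score': 0.87}
    print(f"{name}: {prediction['label']} ({prediction['score']:.2f})")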
|
{ |
|
"text": "To support viewers in probing the models, the demo also supplies test case instances from R\u00f6ttger et al. (2021) 's HateCheck tests. The tests use templates to generate examples of text that are likely to mislabeled by hate speech detection models. The templates are organized into classes based on the linguistic properties of the sentence, such as whether it contains negation or counter speech, and have placeholders for social groups that are frequently targeted by hate speech. The counter speech template class, for example, contains sentences that directly reference or quote hate speech, as in \"Statements like '[IDENTITY] are scum' are deeply hurtful.\" (examples from R\u00f6ttger et al. (2021) , p.45). These kinds of sentences are often labeled as hateful by automatic systems due to the direct reference, but the sentence as a whole provides context that shows that the intent is not hate speech. With 29 different groups of examples over 10 different classes in addition to counter speech, the HateCheck tests encourage users to think of the variety of forms that hate speech can take and how they may be misclassified by automatic systems. While the single model exploration module is similar in function to the RECAST tool (Wright et al., 2021), we note significant differences in the imagined use cases of ours and the RECAST tool. Wright et al. emphasize RECAST's use in real time as a comment-editing tool. Our tool on the other hand is not intended for integrated use, but rather as a self-directed learning tool. While stakeholders could compare several edits of the same comment using our tool, stakeholders are not limited to this method of exploration. We instead encourage stakeholders to consider comparisons, between inputs and between models, as a way to surface expected and unexpected model behavior.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 111, |
|
"text": "R\u00f6ttger et al. (2021)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 676, |
|
"end": 697, |
|
"text": "R\u00f6ttger et al. (2021)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S4. Hate Speech Model Exploration", |
|
"sec_num": null |
|
}, |
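A minimal illustration of the template mechanism described above; the functionality names, templates, and identity terms below are illustrative examples written in the spirit of HateCheck, not the actual released test suite:

# Illustrative HateCheck-style templates with an [IDENTITY] placeholder that is
# filled with frequently targeted groups to generate labeled test cases.
templates = {
    "counter_quote_nh": ("Statements like '[IDENTITY] are scum' are deeply hurtful.", "not hateful"),
    "derog_neg_attrib_h": ("[IDENTITY] are disgusting.", "hateful"),
}
identities = ["women", "immigrants", "trans people"]  # illustrative subset

test_cases = [
    {"functionality": func, "target": group, "gold_label": gold,
     "text": template.replace("[IDENTITY]", group)}
    for func, (template, gold) in templates.items()
    for group in identities
]

for case in test_cases[:3]:
    print(case)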
|
{ |
|
"text": "To end the demo, we ask the user for feedback on their role and experience with the modules. The questions focus on what the user learned from the modules about the sociotechnical aspects of ACM and the resources for hate speech detection. In particular, we are interested in seeing how the modules were more or less informative for different stakeholder groups. See Appendix B for the specific questions asked.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S5. Demo Feedback Questionnaire", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While our tool is aimed at promoting a shared vocabulary and common ground between (1) those who build and design hate speech detection datasets and models, (2) those who are on the receiving end of moderation decisions on social media platforms, and (3) researchers and journalists who are interested in understanding some of the mechanics of automated content moderation, the tool is not designed to be a platform for facilitating connection and engagement between these groups. However, the tool can serve as a foundation for such discussions and could be integrated into a larger system designed for engagement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We plan to update the demo based on feedback from the questionnaire. Once the demo has been finalized, user studies aimed at gathering perspectives from a broader set of stakeholders, including those we did not consider in our initial design process such as content moderation workers, would help to outline how different stakeholders actually use the tool and evaluate the effectiveness of the tool with respect to the participants' use cases and contexts. Following these studies, future versions of the tool could expand to consider more issues within content moderation beyond hate speech detection or be designed to provide context for other kinds of NLP tasks. While this current demo is focused on English resources, future versions could also include resources and contexts for other languages as well as more complex configurations of datasets and models beyond binary labeling schemas.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We began this work with the intention to help provide clarity into the organization of the field of NLP into various tasks. While this demo has focused on the task of ACM, we would expect that similar demos could be developed to contextualize other well-known tasks in NLP such as machine translation, information retrieval, and automatic speech recognition. very thoughtful and helpful suggestions for improving this publication.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A.P. is supported by the National Institutes of Health, National Library of Medicine (NLM) Biomedical and Health Informatics Training Program at the University of Washington (Grant Nr. T15LM007442). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Figure 2 shows the Context of Automatic Content Moderation module (Section 4). By introducing the demo users to some of the relevant context outlined in Section 2 and to selected writings both by content platforms and independent writers on their approach to (automatic) content moderation, we aim to help them better understand the information presented in the following sections. Figure 3 provide a screenshot of the Dataset Exploration Section (4). The top half presents a graphical representation of the dataset hierarchical clustering, summary information about a cluster is provided in a tooltip when the user hovers over the corresponding node. The user can then select a specific cluster for which they want to see more information, and the app shows a selected numbers of exemplars (examples that are closest to the cluster centroid) along with the distribution of labels in the cluster. The first module in this Section (Figure 4 ) allows the user to generate text examples from R\u00f6ttger et al. (2021)'s HateCheck tests. These tests are designed to examine the models' behaviors on cases that are expected to be difficult for Automatic Content Moderation system and allow users to explore their likely failure cases. Figure 5 presents the model comparison module. Models trained on different datasets might behave differently on similar examples. Being able to test them side by side should allow users to assess their fitness for specific use cases. Figure 6 presents the example ranking module. Whereas the model comparison module helps users Figure 6 : The model ranking section of the model exploration module compare model behaviors on similar examples, this one allows them to view a given models' predictions side by side for a set of selected examples, to allow them to explore for example the effect of small variations in the text or the behavior of the model on different categories of tests featured in the HateCheck module. Figure 7 presents the concluding Section (4), which summarizes some key points presented in the demo and asks users to answer a feedback questionnaire, which includes questions such as:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 347, |
|
"end": 355, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 729, |
|
"end": 737, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 1277, |
|
"end": 1286, |
|
"text": "(Figure 4", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 1573, |
|
"end": 1581, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 1807, |
|
"end": 1815, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1901, |
|
"end": 1909, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2293, |
|
"end": 2301, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Limitations and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 How would you describe your role?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Feedback Questions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Why are you interested in content moderation?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Feedback Questions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Which modules did you use the most?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Feedback Questions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Which module did you find most informative?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Feedback Questions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Which application were you most interested in learning more about?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Feedback Questions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 What surprised you most about the datasets?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Feedback Questions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Which models are you most concerned about as a user?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Feedback Questions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Do you have any comments or suggestions? 20", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Feedback Questions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We define hate speech in \u00a74.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://perspectiveapi.com/ 4 Discord Off-Platform Behavior Update 5 https://knowyourdata.withgoogle.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Thank you to Zeerak Talat, Dia Kayyali, Bertie Vidgen, and the Perspective API Team for their insightful comments during the development of the demo, and to the anonymous reviewers for their", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A deep dive into multilingual hate speech classification", |
|
"authors": [ |
|
{
"first": "Sai Saketh",
"middle": [],
"last": "Aluru",
"suffix": ""
},
{
"first": "Binny",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Punyajoy",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
|
], |
|
"year": 2021, |
|
"venue": "Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "423--439", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sai Saketh Aluru, Binny Mathew, Punyajoy Saha, and Animesh Mukherjee. 2021. A deep dive into mul- tilingual hate speech classification. In Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track, pages 423- 439, Cham. Springer International Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Factsheets: Increasing trust in ai services through supplier's declarations of conformity", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Arnold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [ |
|
"K E" |
|
], |
|
"last": "Bellamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Hind", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Houde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameep", |
|
"middle": [], |
|
"last": "Mehta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aleksandra", |
|
"middle": [], |
|
"last": "Mojsilovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ravi", |
|
"middle": [], |
|
"last": "Nair", |
|
"suffix": "" |
|
}, |
|
{
"first": "Karthikeyan Natesan",
"middle": [],
"last": "Ramamurthy",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Olteanu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Piorkowski",
"suffix": ""
},
{
"first": "Darrell",
"middle": [],
"last": "Reimer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richards",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Tsay",
"suffix": ""
},
{
"first": "Kush",
"middle": ["R"],
"last": "Varshney",
"suffix": ""
}
|
], |
|
"year": 2019, |
|
"venue": "IBM Journal of Research and Development", |
|
"volume": "63", |
|
"issue": "4", |
|
"pages": "1--6", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1147/JRD.2019.2942288" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Arnold, Rachel K. E. Bellamy, Michael Hind, Stephanie Houde, Sameep Mehta, Aleksandra Mo- jsilovi\u0107, Ravi Nair, Karthikeyan Natesan Rama- murthy, Alexandra Olteanu, David Piorkowski, Dar- rell Reimer, John Richards, Jason Tsay, and Kush R. Varshney. 2019. Factsheets: Increasing trust in ai services through supplier's declarations of confor- mity. IBM Journal of Research and Development, 63(4/5):6:1-6:13.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "TweetEval: Unified benchmark and comparative evaluation for tweet classification", |
|
"authors": [ |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Barbieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jose", |
|
"middle": [], |
|
"last": "Camacho-Collados", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Espinosa Anke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonardo", |
|
"middle": [], |
|
"last": "Neves", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1644--1650", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.findings-emnlp.148" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francesco Barbieri, Jose Camacho-Collados, Luis Es- pinosa Anke, and Leonardo Neves. 2020. TweetE- val: Unified benchmark and comparative evaluation for tweet classification. In Findings of the Associ- ation for Computational Linguistics: EMNLP 2020, pages 1644-1650, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bender", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Batya", |
|
"middle": [], |
|
"last": "Friedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "587--604", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00041" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Fighting hate speech, silencing drag queens? artificial intelligence in content moderation and risks to lgbtq voices online", |
|
"authors": [ |
|
{
"first": "Thiago",
"middle": [],
"last": "Dias Oliva",
"suffix": ""
},
{
"first": "Dennys Marcelo",
"middle": [],
"last": "Antonialli",
"suffix": ""
},
{
"first": "Alessandra",
"middle": [],
"last": "Gomes",
"suffix": ""
}
|
], |
|
"year": 2021, |
|
"venue": "Sexuality & Culture", |
|
"volume": "25", |
|
"issue": "2", |
|
"pages": "700--732", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s12119-020-09790-w" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thiago Dias Oliva, Dennys Marcelo Antonialli, and Alessandra Gomes. 2021. Fighting hate speech, si- lencing drag queens? artificial intelligence in con- tent moderation and risks to lgbtq voices online. Sex- uality & Culture, 25(2):700-732.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Double standards in social media content moderation", |
|
"authors": [ |
|
{ |
|
"first": "\u00c1ngel", |
|
"middle": [], |
|
"last": "D\u00edaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Hecht-Felella", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "\u00c1ngel D\u00edaz and Laura Hecht-Felella. Double standards in social media content moderation. Brennan Center for Justice at New York University School of Law.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Time and again, social media giants get content moderation wrong: Silencing speech about al-aqsa mosque is just the latest example", |
|
"authors": [ |
|
{
"first": "Vera",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "Adeline",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Fikayo",
"middle": [],
"last": "Walter-Johnson",
"suffix": ""
}
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vera Eidelman, Adeline Lee, and Fikayo Walter- Johnson. 2021. Time and again, social media giants get content moderation wrong: Silencing speech about al-aqsa mosque is just the latest example. American Civil Liberties Union.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Toxic, hateful, offensive or abusive? what are we really classifying? an empirical analysis of hate speech datasets", |
|
"authors": [ |
|
{ |
|
"first": "Paula", |
|
"middle": [], |
|
"last": "Fortuna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juan", |
|
"middle": [], |
|
"last": "Soler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leo", |
|
"middle": [], |
|
"last": "Wanner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6786--6794", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paula Fortuna, Juan Soler, and Leo Wanner. 2020. Toxic, hateful, offensive or abusive? what are we really classifying? an empirical analysis of hate speech datasets. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 6786-6794, Marseille, France. European Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Datasheets for datasets", |
|
"authors": [ |
|
{ |
|
"first": "Timnit", |
|
"middle": [], |
|
"last": "Gebru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Morgenstern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Briana", |
|
"middle": [], |
|
"last": "Vecchione", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [ |
|
"Wortman" |
|
], |
|
"last": "Vaughan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Crawford", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Commun. ACM", |
|
"volume": "64", |
|
"issue": "12", |
|
"pages": "86--92", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3458723" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timnit Gebru, Jamie Morgenstern, Briana Vec- chione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daum\u00e9 III, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM, 64(12):86-92.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media", |
|
"authors": [ |
|
{ |
|
"first": "Tarleton", |
|
"middle": [], |
|
"last": "Gillespie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tarleton Gillespie. 2018. Custodians of the Internet: Platforms, content moderation, and the hidden de- cisions that shape social media. Yale University Press.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Content moderation, ai, and the question of scale", |
|
"authors": [ |
|
{ |
|
"first": "Tarleton", |
|
"middle": [], |
|
"last": "Gillespie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Big Data & Society", |
|
"volume": "7", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tarleton Gillespie. 2020. Content moderation, ai, and the question of scale. Big Data & Society, 7(2):2053951720943234.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Jury learning: Integrating dissenting voices into machine learning models", |
|
"authors": [ |
|
{
"first": "Mitchell",
"middle": ["L"],
"last": "Gordon",
"suffix": ""
},
{
"first": "Michelle",
"middle": ["S"],
"last": "Lam",
"suffix": ""
},
{
"first": "Joon Sung",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Kayur",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Tatsunori",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Michael",
"middle": ["S"],
"last": "Bernstein",
"suffix": ""
}
|
], |
|
"year": 2022, |
|
"venue": "CHI Conference on Human Factors in Computing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--19", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell L Gordon, Michelle S Lam, Joon Sung Park, Kayur Patel, Jeff Hancock, Tatsunori Hashimoto, and Michael S Bernstein. 2022. Jury learning: Inte- grating dissenting voices into machine learning mod- els. In CHI Conference on Human Factors in Com- puting Systems, pages 1-19.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The disagreement deconvolution: Bringing machine learning performance metrics in line with reality", |
|
"authors": [ |
|
{
"first": "Mitchell",
"middle": ["L"],
"last": "Gordon",
"suffix": ""
},
{
"first": "Kaitlyn",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Kayur",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Tatsunori",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Michael",
"middle": ["S"],
"last": "Bernstein",
"suffix": ""
}
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell L Gordon, Kaitlyn Zhou, Kayur Patel, Tat- sunori Hashimoto, and Michael S Bernstein. 2021. The disagreement deconvolution: Bringing machine learning performance metrics in line with reality. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-14.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The dataset nutrition label: A framework to drive higher data quality standards", |
|
"authors": [ |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Holland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Hosny", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Joseph", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kasia", |
|
"middle": [], |
|
"last": "Chmielinski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah Holland, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski. 2018. The dataset nutrition label: A framework to drive higher data quality standards.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Constructing interval variables via faceted rasch measurement and multitask deep learning: a hate speech application", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Kennedy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoff", |
|
"middle": [], |
|
"last": "Bacon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Sahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudia", |
|
"middle": [ |
|
"Von" |
|
], |
|
"last": "Vacano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.48550/ARXIV.2009.10277" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris J. Kennedy, Geoff Bacon, Alexander Sahn, and Claudia von Vacano. 2020. Constructing interval variables via faceted rasch measurement and multi- task deep learning: a hate speech application.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Pratik Ringshia, and Davide Testuggine. 2020. The hateful memes challenge: Detecting hate speech in multimodal memes", |
|
"authors": [ |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hamed", |
|
"middle": [], |
|
"last": "Firooz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Mohan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vedanuj", |
|
"middle": [], |
|
"last": "Goswami", |
|
"suffix": "" |
|
}, |
|
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Ringshia",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Testuggine",
"suffix": ""
}
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.48550/ARXIV.2005.04790" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. The hateful memes chal- lenge: Detecting hate speech in multimodal memes.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Confronting abusive language online: A survey from the ethical and human rights perspective", |
|
"authors": [ |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isar", |
|
"middle": [], |
|
"last": "Nejadgholi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen C", |
|
"middle": [], |
|
"last": "Fraser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "71", |
|
"issue": "", |
|
"pages": "431--478", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Svetlana Kiritchenko, Isar Nejadgholi, and Kathleen C Fraser. 2021. Confronting abusive language online: A survey from the ethical and human rights per- spective. Journal of Artificial Intelligence Research, 71:431-478.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The frenk datasets of socially unacceptable discourse in slovene and english", |
|
"authors": [ |
|
{ |
|
"first": "Nikola", |
|
"middle": [], |
|
"last": "Ljube\u0161i\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Darja", |
|
"middle": [], |
|
"last": "Fi\u0161er", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Toma\u017e", |
|
"middle": [], |
|
"last": "Erjavec", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Text, Speech, and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--114", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikola Ljube\u0161i\u0107, Darja Fi\u0161er, and Toma\u017e Erjavec. 2019. The frenk datasets of socially unacceptable dis- course in slovene and english. In Text, Speech, and Dialogue, pages 103-114, Cham. Springer Interna- tional Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "In data we trust: A critical analysis of hate speech detection datasets", |
|
"authors": [ |
|
{ |
|
"first": "Kosisochukwu", |
|
"middle": [], |
|
"last": "Madukwe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoying", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "150--161", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.alw-1.18" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kosisochukwu Madukwe, Xiaoying Gao, and Bing Xue. 2020. In data we trust: A critical analysis of hate speech detection datasets. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 150-161, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Reusable templates and guides for documenting datasets and models for natural language processing and generation: A case study of the HuggingFace and GEM data and model cards", |
|
"authors": [ |
|
{ |
|
"first": "Angelina", |
|
"middle": [], |
|
"last": "Mcmillan-Major", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salomey", |
|
"middle": [], |
|
"last": "Osei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juan", |
|
"middle": [ |
|
"Diego" |
|
], |
|
"last": "Rodriguez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pawan", |
|
"middle": [], |
|
"last": "Sasanka Ammanamanchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Gehrmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "121--135", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.gem-1.11" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angelina McMillan-Major, Salomey Osei, Juan Diego Rodriguez, Pawan Sasanka Ammanamanchi, Sebas- tian Gehrmann, and Yacine Jernite. 2021. Reusable templates and guides for documenting datasets and models for natural language processing and gener- ation: A case study of the HuggingFace and GEM data and model cards. In Proceedings of the 1st Workshop on Natural Language Generation, Eval- uation, and Metrics (GEM 2021), pages 121-135, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Model cards for model reporting", |
|
"authors": [ |
|
{ |
|
"first": "Margaret", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Zaldivar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Parker", |
|
"middle": [], |
|
"last": "Barnes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vasserman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Hutchinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Spitzer", |
|
"suffix": "" |
|
}, |
|
{

"first": "Inioluwa",

"middle": [

"Deborah"

],

"last": "Raji",

"suffix": ""

},

{

"first": "Timnit",

"middle": [],

"last": "Gebru",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "220--229", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3287560.3287596" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Account- ability, and Transparency, FAT* '19, page 220-229, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Data and its (dis) contents: A survey of dataset development and use in machine learning research", |
|
"authors": [ |
|
{ |
|
"first": "Amandalynne", |
|
"middle": [], |
|
"last": "Paullada", |
|
"suffix": "" |
|
}, |
|
{

"first": "Inioluwa",

"middle": [

"Deborah"

],

"last": "Raji",

"suffix": ""

},

{

"first": "Emily",

"middle": [

"M"

],

"last": "Bender",

"suffix": ""

},

{

"first": "Emily",

"middle": [],

"last": "Denton",

"suffix": ""

},

{

"first": "Alex",

"middle": [],

"last": "Hanna",

"suffix": ""

}
|
], |
|
"year": 2021, |
|
"venue": "Patterns", |
|
"volume": "2", |
|
"issue": "11", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amandalynne Paullada, Inioluwa Deborah Raji, Emily M Bender, Emily Denton, and Alex Hanna. 2021. Data and its (dis) contents: A survey of dataset development and use in machine learning research. Patterns, 2(11):100336.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Moderating our (dis)content: Renewing the regulatory approach", |
|
"authors": [ |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Pershan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.alw-1.14" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claire Pershan. 2020. Moderating our (dis)content: Re- newing the regulatory approach. In Proceedings of the Fourth Workshop on Online Abuse and Harms, page 113, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Sentencebert: Sentence embeddings using siamese bertnetworks", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3980--3990", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1410" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980-3990. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "No NLP task should be an island: Multidisciplinarity for diversity in news recommender systems", |
|
"authors": [ |
|
{ |
|
"first": "Myrthe", |
|
"middle": [], |
|
"last": "Reuver", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antske", |
|
"middle": [], |
|
"last": "Fokkens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suzan", |
|
"middle": [], |
|
"last": "Verberne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myrthe Reuver, Antske Fokkens, and Suzan Verberne. 2021. No NLP task should be an island: Multi- disciplinarity for diversity in news recommender sys- tems. In Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Re- port Generation, pages 45-55, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Behind the screen: The hidden digital labor of commercial content moderation", |
|
"authors": [ |
|
{

"first": "Sarah",

"middle": [

"T"

],

"last": "Roberts",

"suffix": ""

}
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah T Roberts. 2014. Behind the screen: The hid- den digital labor of commercial content moderation. Ph.D. thesis.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "HateCheck: Functional tests for hate speech detection models", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "R\u00f6ttger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bertie", |
|
"middle": [], |
|
"last": "Vidgen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeerak", |
|
"middle": [], |
|
"last": "Waseem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Margetts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janet", |
|
"middle": [], |
|
"last": "Pierrehumbert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "41--58", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.acl-long.4" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul R\u00f6ttger, Bertie Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet Pierrehumbert. 2021. HateCheck: Functional tests for hate speech detection models. In Proceedings of the 59th Annual Meeting of the Association for Computational Lin- guistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 41-58, Online. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Re-imagining algorithmic fairness in india and beyond", |
|
"authors": [ |
|
{ |
|
"first": "Nithya", |
|
"middle": [], |
|
"last": "Sambasivan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erin", |
|
"middle": [], |
|
"last": "Arnesen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Hutchinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tulsee", |
|
"middle": [], |
|
"last": "Doshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vinodkumar", |
|
"middle": [], |
|
"last": "Prabhakaran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "315--328", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3442188.3445896" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, and Vinodkumar Prabhakaran. 2021a. Re-imagining algorithmic fairness in india and be- yond. In Proceedings of the 2021 ACM Confer- ence on Fairness, Accountability, and Transparency, FAccT '21, page 315-328, New York, NY, USA. As- sociation for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Everyone Wants to Do the Model Work, Not the Data Work", |
|
"authors": [ |
|
{ |
|
"first": "Nithya", |
|
"middle": [], |
|
"last": "Sambasivan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shivani", |
|
"middle": [], |
|
"last": "Kapania", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannah", |
|
"middle": [], |
|
"last": "Highfill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Akrong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Praveen", |
|
"middle": [], |
|
"last": "Paritosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lora", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Aroyo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Data Cascades in High-Stakes AI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3411764.3445518" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021b. \"Everyone Wants to Do the Model Work, Not the Data Work\": Data Cascades in High-Stakes AI. Association for Computing Machinery, New York, NY, USA.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Targeting the benchmark: On methodology in current natural language processing research", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Schlangen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "670--674", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.acl-short.85" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Schlangen. 2021. Targeting the benchmark: On methodology in current natural language processing research. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Nat- ural Language Processing (Volume 2: Short Papers), pages 670-674, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Twitter sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Roshan", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roshan Sharma. 2019. Twitter sentiment analysis.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and Their Needs", |
|
"authors": [ |
|
{ |
|
"first": "Harini", |
|
"middle": [], |
|
"last": "Suresh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Nam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Satyanarayan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3411764.3445088" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harini Suresh, Steven R. Gomez, Kevin K. Nam, and Arvind Satyanarayan. 2021. Beyond Expertise and Roles: A Framework to Characterize the Stakehold- ers of Interpretable Machine Learning and Their Needs. Association for Computing Machinery, New York, NY, USA.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Dataset cartography: Mapping and diagnosing datasets with training dynamics", |
|
"authors": [ |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Lourie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yizhong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9275--9293", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.746" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dy- namics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 9275-9293, Online. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Finding alternative translations in a large corpus of movie subtitle", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3518--3522", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann. 2016. Finding alternative translations in a large corpus of movie subtitle. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3518- 3522, Portoro\u017e, Slovenia. European Language Re- sources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "United nations strategy and plan of action on hate speech", |
|
"authors": [], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "United Nations. 2019. United nations strategy and plan of action on hate speech.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Contestability in algorithmic systems", |
|
"authors": [ |
|
{ |
|
"first": "Kristen", |
|
"middle": [], |
|
"last": "Vaccaro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karrie", |
|
"middle": [], |
|
"last": "Karahalios", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deirdre", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Mulligan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Kluttz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tad", |
|
"middle": [], |
|
"last": "Hirsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Conference Companion Publication of the 2019 on Computer Supported Cooperative Work and Social Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "523--527", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristen Vaccaro, Karrie Karahalios, Deirdre K Mulli- gan, Daniel Kluttz, and Tad Hirsch. 2019. Con- testability in algorithmic systems. In Conference Companion Publication of the 2019 on Computer Supported Cooperative Work and Social Computing, pages 523-527.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Directions in abusive language training data, a systematic review: Garbage in, garbage out", |
|
"authors": [ |
|
{ |
|
"first": "Bertie", |
|
"middle": [], |
|
"last": "Vidgen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "PloS one", |
|
"volume": "15", |
|
"issue": "12", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bertie Vidgen and Leon Derczynski. 2020. Direc- tions in abusive language training data, a system- atic review: Garbage in, garbage out. PloS one, 15(12):e0243300.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Introduction page to the demo 2 Background: Content Moderation" |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Context of ACM Module" |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Dataset Exploration Module" |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Examples using the HateCheck templates Figures 4, 5, and 6 correspond to the Model Exploration Section (4)." |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "The model comparison section of the model exploration module" |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "The key takeaways and feedback module" |
|
}, |
|
"TABREF0": { |
|
"text": "Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, and Mohit Iyyer. 2020. Exploring and predicting transferability across NLP tasks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7882-7926, Online. Association for Computational Linguistics.", |
|
"content": "<table><tr><td>Austin P Wright, Omar Shaikh, Haekyu Park, Will Epperson, Muhammed Ahmed, Stephane Pinel, Duen Horng Chau, and Diyi Yang. 2021. Recast: Enabling user recourse and interpretability of toxi-city detection models with interactive visualization. Proceedings of the ACM on Human-Computer Inter-action, 5(CSCW1):1-26.</td></tr><tr><td>Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Gra-ham Neubig, and Pengfei Liu. 2022. Datalab: A platform for data analysis and intervention. arXiv preprint arXiv:2202.12875.</td></tr></table>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |