{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:22:02.371519Z" }, "title": "Applying the Stereotype Content Model to assess disability bias in popular pre-trained NLP models underlying AI-based assistive technologies", "authors": [ { "first": "Brienna", "middle": [], "last": "Herold", "suffix": "", "affiliation": { "laboratory": "", "institution": "Gallaudet University", "location": { "addrLine": "800 Florida Ave NE", "postCode": "20002", "settlement": "Washington", "region": "DC", "country": "USA" } }, "email": "brienna.herold@gmail.com" }, { "first": "James", "middle": [], "last": "Waller", "suffix": "", "affiliation": { "laboratory": "", "institution": "Gallaudet University", "location": { "addrLine": "800 Florida Ave NE", "postCode": "20002", "settlement": "Washington", "region": "DC", "country": "USA" } }, "email": "james.waller@gallaudet.edu" }, { "first": "Raja", "middle": [ "S" ], "last": "Kushalnagar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Gallaudet University", "location": { "addrLine": "800 Florida Ave NE", "postCode": "20002", "settlement": "Washington", "region": "DC", "country": "USA" } }, "email": "raja.kushalnagar@gallaudet.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Stereotypes are a positive or negative, generalized, and often widely shared belief about the attributes of certain groups of people, such as people with sensory disabilities. If stereotypes manifest in assistive technologies used by deaf or blind people, they can harm the user in a number of ways-especially considering the vulnerable nature of the target population. AI models underlying assistive technologies have been shown to contain biased stereotypes, including racial, gender, and disability biases. We build on this work to present a psychologybased stereotype assessment of the representation of disability, deafness, and blindness in BERT using the Stereotype Content Model. We show that BERT contains disability bias, and that this bias differs along established stereotype dimensions.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Stereotypes are a positive or negative, generalized, and often widely shared belief about the attributes of certain groups of people, such as people with sensory disabilities. If stereotypes manifest in assistive technologies used by deaf or blind people, they can harm the user in a number of ways-especially considering the vulnerable nature of the target population. AI models underlying assistive technologies have been shown to contain biased stereotypes, including racial, gender, and disability biases. We build on this work to present a psychologybased stereotype assessment of the representation of disability, deafness, and blindness in BERT using the Stereotype Content Model. We show that BERT contains disability bias, and that this bias differs along established stereotype dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Pre-trained natural language processing (NLP) models are becoming more commonly deployed in pipelines for consumer tools, including those that fall under the umbrella of assistive technologies. Models such as BERT are used in tools that utilize automatic text simplification (ATS) for reading assistance (Lauscher et al., 2020) , where complex words get replaced with simpler alternatives. 
BERT is also used in natural language understanding tools such as automatic speech recognition (Chuang et al., 2020) .", "cite_spans": [ { "start": 304, "end": 327, "text": "(Lauscher et al., 2020)", "ref_id": "BIBREF24" }, { "start": 485, "end": 506, "text": "(Chuang et al., 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition to a continuing increase in the use cases and complexity of AI-based assistive technologies, there is also growing interest in using them. Alonzo et al. (2020) found that the deaf community expressed strong interest in ATS-based reading assistance tools. To achieve fair and inclusive experiences for deaf and blind people, it is important to understand how they may be represented by the models underlying the assistive technologies that are designed for them (Kafle et al., 2019) .", "cite_spans": [ { "start": 151, "end": 171, "text": "Alonzo et al. (2020)", "ref_id": "BIBREF1" }, { "start": 473, "end": 493, "text": "(Kafle et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "If an AI-based consumer tool perpetuates existing biases and stereotypes in society, it can inadvertently cause and reinforce structural stigma, or \"societal level conditions, cultural norms, and institutional policies that constrain the opportunities, resources, and well-being of the stigmatized\" (Hatzenbuehler, 2016) . The bias against deafness-or audism-is prevalent in both mainstream society (Humphries, 1977) and in the deaf community (Gertz, 2003) . Audism has been linked to discrimination in multiple real-world scenarios, including the job application process (Task Force Members and Contributors, 2012) . In Szymanski (2010) , 100% of highly qualified psychology internship applications that mentioned deafness were rejected, whereas 100% of those that didn't mention deafness were invited for an interview.", "cite_spans": [ { "start": 299, "end": 320, "text": "(Hatzenbuehler, 2016)", "ref_id": "BIBREF18" }, { "start": 399, "end": 416, "text": "(Humphries, 1977)", "ref_id": "BIBREF19" }, { "start": 443, "end": 456, "text": "(Gertz, 2003)", "ref_id": "BIBREF15" }, { "start": 572, "end": 615, "text": "(Task Force Members and Contributors, 2012)", "ref_id": null }, { "start": 621, "end": 637, "text": "Szymanski (2010)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Causing or reinforcing structural stigma can lead to allocational and representational harms (Blodgett et al., 2020) . Allocational harms arise if assistive technologies distribute resources or opportunities unfairly to disabled people. With representational harms, if assistive technologies represent these people unfairly, disabled people may experience alienation, decreased quality of service, stereotypes, denigration and stigmatization, erasure, and/or decreased public participation.", "cite_spans": [ { "start": 93, "end": 116, "text": "(Blodgett et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite recent ballooning of research in NLP fairness (Sheng et al., 2020; Blodgett, 2021) , there has been little investigation into how AI models represent disabled people, who comprise at least 12.5% of the global population (WHO, 2021). 
There has been even less of a focus on how people with sensory disabilities are represented in NLP models. Hutchinson et al. (2020) provided preliminary evidence that disability-mentioning text may be accidentally flagged as toxic. Hassan et al. (2021) detected signs of disability bias in BERT using sentiment analysis, and they investigated how this bias might shift when applying an intersectional lens to the analysis.", "cite_spans": [ { "start": 54, "end": 74, "text": "(Sheng et al., 2020;", "ref_id": null }, { "start": 75, "end": 90, "text": "Blodgett, 2021)", "ref_id": "BIBREF5" }, { "start": 348, "end": 372, "text": "Hutchinson et al. (2020)", "ref_id": "BIBREF20" }, { "start": 473, "end": 493, "text": "Hassan et al. (2021)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To further investigate sensory disability bias in NLP models, we build upon prior work in association bias in BERT. Our contributions include adapting Kurita et al. (2019) 's sentence templates to examine associations between disability qualifiers and stereotype traits, drawing from the Stereotype Content Model (SCM), an established approach in social psychology to defining stereotyped bias (Fiske et al., 2002) .", "cite_spans": [ { "start": 151, "end": 171, "text": "Kurita et al. (2019)", "ref_id": "BIBREF23" }, { "start": 394, "end": 414, "text": "(Fiske et al., 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, we answer these research questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 RQ1. In BERT, is there evidence of bias in how the model perceives disability, compared to ability?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 RQ2. Do BERT's representations of ability and disability differ across various stereotype dimensions?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We review previous work in examining stereotypes in NLP models, and then we briefly describe the SCM and its relevance to measuring bias.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Bolukbasi et al. (2016) first observed that gender stereotypes are present in static word embeddings (e.g. word2vec and GloVe) using subspace analysis. Caliskan et al. (2017) found that word embeddings capture a spectrum of implicit biases, using lexicons developed for the Implicit Association Test, or the IAT (Greenwald et al., 1998) , and calculated associations within static word embeddings. Kurita et al. (2019) extended this approach to work with contextualized embedding models such as BERT. However, using word lists pulled from the IAT is limiting when it comes to assessing disability bias, since the relevant tests incorporate images instead of words. For this reason, there has been more work in downstream tasks such as sentiment analysis and topic modelling (Hutchinson et al., 2020; Hassan et al., 2021) , and less in direct association analysis.", "cite_spans": [ { "start": 152, "end": 174, "text": "Caliskan et al. (2017)", "ref_id": "BIBREF8" }, { "start": 308, "end": 336, "text": "IAT (Greenwald et al., 1998)", "ref_id": null }, { "start": 398, "end": 418, "text": "Kurita et al. 
(2019)", "ref_id": "BIBREF23" }, { "start": 774, "end": 799, "text": "(Hutchinson et al., 2020;", "ref_id": "BIBREF20" }, { "start": 800, "end": 820, "text": "Hassan et al., 2021)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Stereotypes in NLP models", "sec_num": "2.1" }, { "text": "Stereotypes have been studied in social psychology for decades (Asch, 1946; Greenwald et al., 1998; Fiske et al., 2007) . To concisely summarize the current knowledge about stereotypes, Fiske et al. (2002) proposed the SCM, which postulates that stereotypes can be aligned along two dimensions: competence and warmth. When we meet someone new, our first psychological response is to subconsciously evaluate whether they are a friend or a foe. This is a judgement along the warmth dimension. Immediately after we make this evaluation, we go on to evaluate how well they may be able to act in accordance to our perception of their warmth. Abele et al. (2016) ; Nicolas et al. (2021) suggested that these dimensions can be further split into two subdimensions. Warmth is comprised of Morality and Sociability, and competence is comprised of Agency and Ability.", "cite_spans": [ { "start": 63, "end": 75, "text": "(Asch, 1946;", "ref_id": "BIBREF2" }, { "start": 76, "end": 99, "text": "Greenwald et al., 1998;", "ref_id": "BIBREF16" }, { "start": 100, "end": 119, "text": "Fiske et al., 2007)", "ref_id": "BIBREF13" }, { "start": 186, "end": 205, "text": "Fiske et al. (2002)", "ref_id": "BIBREF12" }, { "start": 637, "end": 656, "text": "Abele et al. (2016)", "ref_id": "BIBREF0" }, { "start": 659, "end": 680, "text": "Nicolas et al. (2021)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Stereotype Content Model (SCM)", "sec_num": "2.2" }, { "text": "Researchers working under the SCM framework also propose a causal link between stereotypes and structural stigma (Fiske et al., 2007) . People perceived as warm and competent evoke feelings of pride and admiration, whereas people perceived as cold and incompetent evoke feelings of disgust and contempt. Ambivalent perceptions involving warmth and incompetence typically elicit pity and sympathy. Coldness and competence evokes envy and jealousy. These biases, whether explicit or implicit, can lead to harms if they are perpetuated in AI-based assistive technologies.", "cite_spans": [ { "start": 113, "end": 133, "text": "(Fiske et al., 2007)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Stereotype Content Model (SCM)", "sec_num": "2.2" }, { "text": "To the best of our knowledge, Fraser et al. 2021is the only work to date that has applied the SCM to analyze stereotypes in text. The SCM has not yet been used to investigate stereotypes in NLP models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stereotype Content Model (SCM)", "sec_num": "2.2" }, { "text": "Following Kurita et al. (2019) and Bartl et al. (2020) , we measured association bias in BERT using a fill-in-the-blank task, and synthetic, semantically bleached sentence templates. Our goal was to directly examine representations in the model, without potential interference from unexpected context or downstream input, which may occur when using natural sentence templates or with tasks such as sentiment analysis and topic modelling. Table 1 displays the targets, stereotype attribute dimensions, and sentence templates used in our study. 
For the targets, we used three abled/disabled antonym pairs to represent the concepts of ability and disability for general ability, deafness, and blindness. We recognize that some words such as \"hearing\" may not be commonly used in mainstream society, and in turn may not appear often as a person-describing qualifier in the Wikipedia and Books Corpus, which BERT was pre-trained on. However, this word represents how members of the deaf community describe those who hear. It is important to explore how a model may represent a word that has different usage in certain communities, if the model is used in end-applications by those communities.", "cite_spans": [ { "start": 10, "end": 30, "text": "Kurita et al. (2019)", "ref_id": "BIBREF23" }, { "start": 35, "end": 54, "text": "Bartl et al. (2020)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 438, "end": 445, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Methods", "sec_num": "3" }, { "text": "Taking inspiration from Fraser et al. (2021), we constructed the stereotype subdimensions using the extended lexicon created by Nicolas et al. (2021) , with the four subdimensions of Morality, Sociability, Agency, and Ability. In this lexicon, words are annotated with either +1 or -1 to indicate a positive or negative association with the given subdimension. We removed words that were not labelled with either valence value. We represent each valence pole of these subdimensions as its own subdimension, e.g. words with a negative association to Morality represent the Immoral subdimension. We expect these 8 subdimensions to provide a more granular understanding of stereotyped representations in BERT.", "cite_spans": [ { "start": 128, "end": 149, "text": "Nicolas et al. (2021)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "We used four semantically bleached sentence templates, which are shown in Table 1 . We adapted them from Kurita et al. (2019) and Hutchinson et al. (2020) . The first two templates use identity-first language, in which [TARGET] precedes \"person.\" Despite removing context, the syntactic structure of the sentence itself is known to carry cultural connotations (Beukeboom and Burgers, 2019; Shakespeare, 2016). Members of the deaf community often prefer identity-first language, whereas person-first language typically reflects a medical lens. To get a general picture of associations, we also include two templates that use person-first language, in which [TARGET] follows \"person.\"", "cite_spans": [ { "start": 105, "end": 125, "text": "Kurita et al. (2019)", "ref_id": "BIBREF23" }, { "start": 130, "end": 154, "text": "Hutchinson et al. (2020)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 74, "end": 81, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "We removed words that would not fit the grammar of our selected templates. We kept adjectives, as identified by WordNet part-of-speech labelling. This leaves 1,256 unique words in this lexicon. Most belong to one subdimension, while 87 words belong to two subdimensions (e.g.
\"negligent\" belongs to both the Immoral and Unable subdimensions), and 3 words belong to three subdimensions (e.g., \"ingenuous\" belongs to the Sociable, Immoral, and Unable subdimensions).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "To further reduce possible causes of variation, we also removed all multi-word attributes. Although we are able to mask a couple of words in a sentence when feeding it to BERT, as done in Bartl et al. (2020) , it is not possible to predict the probability of a multi-word phrase, only a single subtoken. Most of our targets are whole tokens, except for \"abled,\" which is a multi-token word: \"able\" + \"ed\". We multiplied the probabilities for the subtokens that make up this word, since it is implicit that these subtokens are associated.", "cite_spans": [ { "start": 188, "end": 207, "text": "Bartl et al. (2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "The final dataset consisted of 30,144 combinations of targets, attributes, and templates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "We used the PyTorch implementation of the transformers library from HuggingFace, a widely used hub for the distribution of pre-trained Transformer models (Wolf et al., 2020) . We downloaded bert-base-uncased, the most popular version of BERT according to download count, along with a language modeling head on top and its tokenizer.", "cite_spans": [ { "start": 154, "end": 173, "text": "(Wolf et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Measuring Bias in BERT", "sec_num": "3.2" }, { "text": "Below we outline our methodology to measure bias in BERT, which we adapted from Kurita et al. (2019) . Figure 1: Bias scores for pairs of targets, when the target is predicted in the presence of the attribute. Each bias score is annotated with statistical significance where n.s. means the bias is not significant at p > 0.05, * is p \u2264 0.05, * * means p \u2264 0.01, and * * * is highly significant at p \u2264 0.001. The further the score gets from zero, the more unequal the representations of ability and disability. Scores above zero indicate that BERT more closely associates the abled target with the corresponding stereotype subdimension, whereas scores below zero indicate a bias where the model prefers the disabled target more, given the stereotype context. These results show evidence of significant, nuanced bias in how BERT represents disability, compared to ability. If the association is negative, this means that the target's probability is lower than its prior probability. In other words, the attribute's context decreased the probability that BERT predicts the target. Likewise, if the association is positive, the context increased the target's probability of being predicted.", "cite_spans": [ { "start": 80, "end": 100, "text": "Kurita et al. (2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Measuring Bias in BERT", "sec_num": "3.2" }, { "text": "In all bias calculations, the minuend is the abled target's association score, and the subtrahend is the disabled target's association score. Thus, if the bias is positive, the association between the abled target and the attribute subdimension is stronger. If the bias is negative, the disabled target is more strongly associated to the attribute subdimension. 
If the bias is zero, there is no difference in the probability of predicting either target, given the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring Bias in BERT", "sec_num": "3.2" }, { "text": "We measured statistical significance via a paired-attribute permutation test over A_y,M and A_x,M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring Bias in BERT", "sec_num": "3.2" }, { "text": "We also performed the inverse analysis, where we explored the representation of stereotype content given the presence of ability or disability. To carry out this analysis, we essentially treated attributes as targets, meaning that we masked the attribute and computed its probability, given the context provided by the target. Aside from this swap, the overall methodology remains the same. Figure 1 displays the bias score between each pair of targets (abled/disabled antonyms, e.g. \"hearing\" and \"deaf\") for each stereotype subdimension in the SCM. Here we can see certain patterns in how disability is represented in BERT, compared to ability.", "cite_spans": [], "ref_spans": [ { "start": 391, "end": 399, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Measuring Bias in BERT", "sec_num": "3.2" }, { "text": "The first takeaway from this figure is that there is a bias, or a difference, in the representations, confirming RQ1. The bias is significant at varying levels across all subdimensions except the Unable subdimension. Correlation in language usage may have contributed to the lack of bias in the Unable subdimension. Mentions of disability are often accompanied by words referring to ability, and often in a negative, medical context where disability is framed as a problem on the body, rather than on society (Shakespeare, 2016).", "cite_spans": [ { "start": 510, "end": 529, "text": "(Shakespeare, 2016)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "Figure 2: Mean association scores for each combination of target and stereotype subdimension. The further the score is from zero, the stronger the association is in BERT. If the score is above zero, this means that BERT positively associates the target with the stereotype subdimension. Conversely, if the score is below zero, BERT negatively associates the target with the stereotype subdimension. These results reveal patterns in how BERT's representations of ability and disability align with known stereotype subdimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "The second takeaway is that BERT is generally more likely to associate the abled target with all stereotype subdimensions, except the Unable subdimension for all three pairs of targets, and the Immoral and Unsociable subdimensions for blindness. This partiality toward ability may have been caused by higher frequencies of abled targets in the training data (Schick and Sch\u00fctze, 2020) . People with disabilities are an underrepresented population and are thus mentioned less in mainstream text; there is an ongoing project to improve one of the training datasets to create more text related to disability (Wikipedia contributors, 2022).
It is also less common to use an abled target to describe a person without a disability (Beukeboom and Burgers, 2019), and this, in addition to these words' increased frequency, may have led BERT to \"understand\" them better but in different contexts.", "cite_spans": [ { "start": 351, "end": 377, "text": "(Schick and Sch\u00fctze, 2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "The third takeaway is that the bias is stronger if the sentence includes a positive warmth (Moral, Sociable) or competence (Able, Independent) context, presenting a high-level insight into RQ2. Given a positive stereotype context, BERT is more likely to predict the abled target than the disabled target in the fill-in-the-blank task. In other words, BERT is less likely to associate disability with warmth and competence. This bias is significant for ability, deafness, and blindness at p \u2264 0.001.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "On the other hand (or the other side of the figure), the bias between abled/disabled antonym target pairs is weaker if the sentence includes a negative warmth (Immoral, Unsociable) or competence (Dependent) context. This smaller difference in representation is still significant for deafness at p \u2264 0.001, significant for general ability at varying levels, and significant for blindness with only the Dependent subdimension at p \u2264 0.01.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "To investigate RQ2 in more depth, we show in Figure 2 the mean association scores for each combination of target (an abled or disabled antonym) and stereotype subdimension. This figure reveals more nuanced patterns in BERT's representation of disability and how this representation aligns with stereotype subdimensions from the SCM.", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 53, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "One pattern that stands out is that almost all of the mean association scores are negative, regardless of target or subdimension. A negative association score indicates that BERT is less likely to predict the target given the stereotype content and the syntactic structure of the sentence template. These negative association scores provide further support for BERT having limited knowledge about abled targets' range of usage, and/or the under-representation of disabled targets in the model. Figure 2 also sheds additional light on the weaker bias shown in Figure 1 for negative subdimensions. Although BERT may have an overall preference for abled targets, the disabled targets' associations with these negative subdimensions are strong enough to appear nearly on par with the abled targets' associations with the same subdimensions.", "cite_spans": [], "ref_spans": [ { "start": 494, "end": 502, "text": "Figure 2", "ref_id": null }, { "start": 559, "end": 567, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "Figure caption: These results show evidence that BERT is less likely to predict any attribute given an accompanying disability context.
BERT contains significantly stronger associations between all stereotype attribute subdimensions and the abled target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "A third takeaway from Figure 2 is that disabled targets are less associated with Able, Independent, Moral, and Sociable contexts, compared to all other associations. This is especially pronounced with \"disabled\" and \"deaf\".", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 30, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "In Figure 4 , the bias scores from the inverse analysis present evidence that predicting different attributes given the same target does not lead to different biases. No stereotype subdimension is more closely tied to a particular target when the target context is already present in the sentence. However, BERT shows a general preference for predicting any attribute in the presence of abled targets, since the bias scores are all significantly positive, especially for ability.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "We want to note that, despite semantic bleaching, syntactic differences in the sentence templates affected the strength of the association scores, but not the patterns. When using identity-first templates to predict a target given stereotype content, BERT more strongly associated \"abled\" and \"hearing\" with all subdimensions, whereas \"sighted\", \"disabled\", \"blind\", and \"deaf\" had stronger associations with all attribute subdimensions using person-first templates. This is interesting, because identity-first and person-first language are known to carry cultural connotations. Furthermore, some common identity-first disability qualifiers, such as \"disabled\", \"deaf\", and \"blind\", are used in contexts outside of social identity categories, e.g. as metaphors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "\"deaf as a post,\" \"deaf and blind to [insert situation]\". This may have impacted how they were understood by the model, and subsequently how they are predicted in identity-first or person-first language contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "Regardless of how biases manifest, the first step toward ensuring harmless use of AI-based assistive technologies is to understand how target users are represented in the underlying models.
By applying the Stereotype Content Model to evaluate representational differences, we present evidence of disability association bias in a popular pre-trained NLP model that is used in state-of-the-art AI-based assistive technologies such as text simplification and speech recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "We also present a breakdown of this bias along stereotype dimensions, which uncovers nuanced patterns in undesirable associations between disability and stereotypes, the most notable being that disabled people are significantly less likely to be associated with warmth and competence. Our results emphasize the need to work toward more fair and inclusive assistive technologies, especially since disabled people are the target population for these tools.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "There are a number of limitations to our study. First, we explored these associations through a broad lens, looking at only ability versus disability. It is important to recognize that disability is not a siloed, unitary concept (Pe\u00f1a et al., 2016) . Future work should investigate the associations through an intersectional lens (Crenshaw, 1989) , to better understand how disability bias is affected by the interconnected nature of social categorizations.", "cite_spans": [ { "start": 231, "end": 250, "text": "(Pe\u00f1a et al., 2016)", "ref_id": null }, { "start": 332, "end": 348, "text": "(Crenshaw, 1989)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "A second limitation of our study is our use of sentence templates. Despite attempts to semantically strip a sentence to provide a neutral context, BERT still draws on the syntactic structure of the sentence itself to help make its predictions (Devlin et al., 2019) . We took this into consideration by varying the structure. However, we observed that association strengths appear to be influenced to a degree by syntactic differences. Future work can investigate stabilizing the bias evaluation metrics by including more templates and a wider range of sentence structures, or by randomly sampling a natural sentence dataset. It would also be interesting to further differentiate between identity-first and person-first language, as well as to explore question-answering templates.", "cite_spans": [ { "start": 245, "end": 266, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "Third, we examined a limited number of targets and only in one model, BERT. Future work can extend our approach to evaluate additional disabled targets in additional models, such as GPT (Radford et al., 2018) and GPT-2 (Radford et al., 2019) , to get a fuller picture of disability representation in a wider range of popular pre-trained NLP models underlying AI-based assistive technologies.", "cite_spans": [ { "start": 182, "end": 208, "text": "GPT (Radford et al., 2018)", "ref_id": null }, { "start": 213, "end": 241, "text": "GPT-2 (Radford et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "Future work can also draw on debiasing approaches to mitigate bias in these models.
We want to note that it is important in this work to also take into consideration the specific model deployment context, because enforcing fairness in an inappropriate context can result in the unintended erasure of a marginalized population (Blodgett, 2021). We provided an array of possible causes of the stereotype patterns that we observed, and these can be avenues for exploring debiasing solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Facets of the Fundamental Content Dimensions: Agency with Competence and Assertiveness-Communion with Warmth and Morality. Frontiers in Psychology", "authors": [ { "first": "Andrea", "middle": [ "E" ], "last": "Abele", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Hauke", "suffix": "" }, { "first": "Kim", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Louvet", "suffix": "" }, { "first": "Aleksandra", "middle": [], "last": "Szymkow", "suffix": "" }, { "first": "Yanping", "middle": [], "last": "Duan", "suffix": "" } ], "year": 2016, "venue": "", "volume": "7", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3389/fpsyg.2016.01810" ] }, "num": null, "urls": [], "raw_text": "Andrea E. Abele, Nicole Hauke, Kim Peters, Eva Louvet, Aleksandra Szymkow, and Yanping Duan. 2016. Facets of the Fundamental Content Dimen- sions: Agency with Competence and Assertive- ness-Communion with Warmth and Morality. Fron- tiers in Psychology, 7.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Reading Experiences and Interest in Reading-Assistance Tools Among Deaf and Hard-of-Hearing Computing Professionals", "authors": [ { "first": "Oliver", "middle": [], "last": "Alonzo", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Elliot", "suffix": "" }, { "first": "Becca", "middle": [], "last": "Dingman", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Huenerfauth", "suffix": "" } ], "year": 2020, "venue": "The 22nd International ACM SIGACCESS Conference on Computers and Accessibility", "volume": "", "issue": "", "pages": "1--13", "other_ids": { "DOI": [ "10.1145/3373625.3416992" ] }, "num": null, "urls": [], "raw_text": "Oliver Alonzo, Lisa Elliot, Becca Dingman, and Matt Huenerfauth. 2020. Reading Experiences and Inter- est in Reading-Assistance Tools Among Deaf and Hard-of-Hearing Computing Professionals. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility, pages 1-13, Virtual Event Greece. ACM.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Forming impressions of personality", "authors": [ { "first": "S", "middle": [ "E" ], "last": "Asch", "suffix": "" } ], "year": 1946, "venue": "The Journal of Abnormal and Social Psychology", "volume": "41", "issue": "3", "pages": "258--290", "other_ids": { "DOI": [ "10.1037/h0055756" ] }, "num": null, "urls": [], "raw_text": "S. E. Asch. 1946. Forming impressions of personal- ity. 
The Journal of Abnormal and Social Psychology, 41(3):258-290.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias", "authors": [ { "first": "Marion", "middle": [], "last": "Bartl", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.14534" ] }, "num": null, "urls": [], "raw_text": "Marion Bartl, Malvina Nissim, and Albert Gatt. 2020. Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias. arXiv:2010.14534 [cs].", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "How Stereotypes Are Shared Through Language: A Review and Introduction of the Social Categories and Stereotypes Communication (SCSC) Framework", "authors": [ { "first": "J", "middle": [], "last": "Camiel", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Beukeboom", "suffix": "" }, { "first": "", "middle": [], "last": "Burgers", "suffix": "" } ], "year": 2019, "venue": "Review of Communication Research", "volume": "7", "issue": "1", "pages": "1--37", "other_ids": { "DOI": [ "10.12840/issn.2255-4165.017" ] }, "num": null, "urls": [], "raw_text": "Camiel J. Beukeboom and Christian Burgers. 2019. How Stereotypes Are Shared Through Language: A Review and Introduction of the Social Categories and Stereotypes Communication (SCSC) Framework. Review of Communication Research, 7(1):1-37.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Sociolinguistically Driven Approaches for Just Natural Language Processing", "authors": [ { "first": "", "middle": [], "last": "Su Lin", "suffix": "" }, { "first": "", "middle": [], "last": "Blodgett", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.7275/20410631" ] }, "num": null, "urls": [], "raw_text": "Su Lin Blodgett. 2021. Sociolinguistically Driven Approaches for Just Natural Language Processing. Ph.D. thesis, University of Massachusetts Amherst.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Language (Technology) is Power: A Critical Survey of", "authors": [ { "first": "", "middle": [], "last": "Su Lin", "suffix": "" }, { "first": "Solon", "middle": [], "last": "Blodgett", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Barocas", "suffix": "" }, { "first": "Iii", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" } ], "year": 2020, "venue": "Bias\" in NLP", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.14050" ] }, "num": null, "urls": [], "raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (Technology) is Power: A Critical Survey of \"Bias\" in NLP. arXiv:2005.14050 [cs].", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Man is to Computer Programmer as Woman is to Homemaker? 
Debiasing Word Embeddings", "authors": [ { "first": "Tolga", "middle": [], "last": "Bolukbasi", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "James", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Venkatesh", "middle": [], "last": "Saligrama", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Kalai", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.06520" ] }, "num": null, "urls": [], "raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. arXiv:1607.06520 [cs, stat].", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Semantics derived automatically from language corpora contain human-like biases", "authors": [ { "first": "Aylin", "middle": [], "last": "Caliskan", "suffix": "" }, { "first": "Joanna", "middle": [ "J" ], "last": "Bryson", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2017, "venue": "Science", "volume": "356", "issue": "6334", "pages": "183--186", "other_ids": { "DOI": [ "10.1126/science.aal4230" ] }, "num": null, "urls": [], "raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "SpeechBERT: An Audio-and-Text Jointly Learned Language Model for End-to-End Spoken Question Answering", "authors": [ { "first": "Yung-Sung", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Chi-Liang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Lin-Shan", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "Interspeech 2020", "volume": "", "issue": "", "pages": "4168--4172", "other_ids": { "DOI": [ "10.21437/Interspeech.2020-1570" ] }, "num": null, "urls": [], "raw_text": "Yung-Sung Chuang, Chi-Liang Liu, Hung-yi Lee, and Lin-shan Lee. 2020. SpeechBERT: An Audio-and- Text Jointly Learned Language Model for End-to- End Spoken Question Answering. In Interspeech 2020, pages 4168-4172. ISCA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics", "authors": [ { "first": "Kimberl\u00e9", "middle": [], "last": "Crenshaw", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kimberl\u00e9 Crenshaw. 1989. Demarginalizing the Inter- section of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. 
page 31.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition", "authors": [ { "first": "Susan", "middle": [ "T" ], "last": "Fiske", "suffix": "" }, { "first": "Amy", "middle": [ "J C" ], "last": "Cuddy", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Glick", "suffix": "" } ], "year": 2002, "venue": "Journal of Personality and Social Psychology", "volume": "82", "issue": "6", "pages": "878--902", "other_ids": { "DOI": [ "10.1037/0022-3514.82.6.878" ] }, "num": null, "urls": [], "raw_text": "Susan T. Fiske, Amy J. C. Cuddy, Peter Glick, and Jun Xu. 2002. A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6):878-902.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Universal dimensions of social cognition: Warmth and competence", "authors": [ { "first": "Susan", "middle": [ "T" ], "last": "Fiske", "suffix": "" }, { "first": "Amy", "middle": [ "J C" ], "last": "Cuddy", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Glick", "suffix": "" } ], "year": 2007, "venue": "Trends in Cognitive Sciences", "volume": "11", "issue": "2", "pages": "77--83", "other_ids": { "DOI": [ "10.1016/j.tics.2006.11.005" ] }, "num": null, "urls": [], "raw_text": "Susan T. Fiske, Amy J.C. Cuddy, and Peter Glick. 2007. Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sci- ences, 11(2):77-83.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model", "authors": [ { "first": "Kathleen", "middle": [ "C" ], "last": "Fraser", "suffix": "" }, { "first": "Isar", "middle": [], "last": "Nejadgholi", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "600--616", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.50" ] }, "num": null, "urls": [], "raw_text": "Kathleen C. 
Fraser, Isar Nejadgholi, and Svetlana Kiritchenko. 2021. Understanding and Counter- ing Stereotypes: A Computational Approach to the Stereotype Content Model. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 600-616, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Dysconscious Audism and Critical Deaf Studies: Deaf Crit's Analysis of Unconscious Internalization of Hegemony within the Deaf Community", "authors": [ { "first": "Eugenie", "middle": [ "Nicole" ], "last": "Gertz", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugenie Nicole Gertz. 2003. Dysconscious Audism and Critical Deaf Studies: Deaf Crit's Analysis of Unconscious Internalization of Hegemony within the Deaf Community. Ph.D. thesis.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Measuring Individual Differences in Implicit Cognition: The Implicit Association Test", "authors": [ { "first": "Debbie", "middle": [ "E" ], "last": "Anthony G Greenwald", "suffix": "" }, { "first": "Jordan L K", "middle": [], "last": "Mcghee", "suffix": "" }, { "first": "", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony G Greenwald, Debbie E McGhee, and Jordan L K Schwartz. 1998. Measuring Individual Differ- ences in Implicit Cognition: The Implicit Association Test. page 17.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens", "authors": [ { "first": "Saad", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Huenerfauth", "suffix": "" }, { "first": "Cecilia", "middle": [], "last": "Ovesdotter Alm", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2110.00521" ] }, "num": null, "urls": [], "raw_text": "Saad Hassan, Matt Huenerfauth, and Cecilia Ovesdot- ter Alm. 2021. Unpacking the Interdependent Sys- tems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens. arXiv:2110.00521 [cs].", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Structural stigma: Research evidence and implications for psychological science", "authors": [ { "first": "L", "middle": [], "last": "Mark", "suffix": "" }, { "first": "", "middle": [], "last": "Hatzenbuehler", "suffix": "" } ], "year": 2016, "venue": "American Psychologist", "volume": "71", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark L Hatzenbuehler. 2016. Structural stigma: Re- search evidence and implications for psychological science. American Psychologist, 71(8):742.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Communicating across cultures (deaf-hearing) and language learning", "authors": [ { "first": "Tom", "middle": [], "last": "Humphries", "suffix": "" } ], "year": 1977, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Humphries. 1977. Communicating across cultures (deaf-hearing) and language learning. 
Union Insti- tute and University.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Social Biases in NLP Models as Barriers for Persons with Disabilities", "authors": [ { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" }, { "first": "Vinodkumar", "middle": [], "last": "Prabhakaran", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Denton", "suffix": "" }, { "first": "Kellie", "middle": [], "last": "Webster", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Denuyl", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.487" ] }, "num": null, "urls": [], "raw_text": "Ben Hutchinson, Vinodkumar Prabhakaran, Emily Den- ton, Kellie Webster, Yu Zhong, and Stephen Denuyl. 2020. Social Biases in NLP Models as Barriers for Persons with Disabilities. In Proceedings of the 58th", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "5491--5501", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 5491-5501, Online. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Artificial intelligence fairness in the context of accessibility research on intelligent systems for people who are deaf or hard of hearing", "authors": [ { "first": "Sushant", "middle": [], "last": "Kafle", "suffix": "" }, { "first": "Abraham", "middle": [], "last": "Glasser", "suffix": "" }, { "first": "Sedeeq", "middle": [], "last": "Al-Khazraji", "suffix": "" }, { "first": "Larwan", "middle": [], "last": "Berke", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Seita", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Huenerfauth", "suffix": "" } ], "year": 2019, "venue": "SIG ACCESS", "volume": "", "issue": "125", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sushant Kafle, Abraham Glasser, Sedeeq Al-khazraji, Larwan Berke, Matthew Seita, and Matt Huenerfauth. 2019. Artificial intelligence fairness in the context of accessibility research on intelligent systems for people who are deaf or hard of hearing. SIG ACCESS, (125).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Measuring Bias in Contextualized Word Representations", "authors": [ { "first": "Keita", "middle": [], "last": "Kurita", "suffix": "" }, { "first": "Nidhi", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "Ayush", "middle": [], "last": "Pareek", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing", "volume": "", "issue": "", "pages": "166--172", "other_ids": { "DOI": [ "10.18653/v1/W19-3823" ] }, "num": null, "urls": [], "raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring Bias in Con- textualized Word Representations. In Proceedings of the First Workshop on Gender Bias in Natural Lan- guage Processing, pages 166-172, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity", "authors": [ { "first": "Anne", "middle": [], "last": "Lauscher", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Edoardo", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Ponti", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "", "middle": [], "last": "Glava\u0161", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1371--1383", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.118" ] }, "num": null, "urls": [], "raw_text": "Anne Lauscher, Ivan Vuli\u0107, Edoardo Maria Ponti, Anna Korhonen, and Goran Glava\u0161. 2020. Specializing Unsupervised Pretraining Models for Word-Level Se- mantic Similarity. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 1371-1383, Barcelona, Spain (Online). Inter- national Committee on Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Comprehensive stereotype content dictionaries using a semi-automated method", "authors": [ { "first": "Gandalf", "middle": [], "last": "Nicolas", "suffix": "" }, { "first": "Xuechunzi", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Susan", "middle": [ "T" ], "last": "Fiske", "suffix": "" } ], "year": 2021, "venue": "European Journal of Social Psychology", "volume": "51", "issue": "1", "pages": "178--196", "other_ids": { "DOI": [ "10.1002/ejsp.2724" ] }, "num": null, "urls": [], "raw_text": "Gandalf Nicolas, Xuechunzi Bai, and Susan T. Fiske. 2021. Comprehensive stereotype content dictionaries using a semi-automated method. European Journal of Social Psychology, 51(1):178-196.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Improving Language Understanding by Generative Pre-Training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Under- standing by Generative Pre-Training. page 12.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Language Models are Unsupervised Multitask Learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Lan- guage Models are Unsupervised Multitask Learners. 
page 24.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Rare Words: A Major Problem for Contextualized Embeddings and How to Fix it by Attentive Mimicking", "authors": [ { "first": "Timo", "middle": [], "last": "Schick", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "8766--8774", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6403" ] }, "num": null, "urls": [], "raw_text": "Timo Schick and Hinrich Sch\u00fctze. 2020. Rare Words: A Major Problem for Contextualized Embeddings and How to Fix it by Attentive Mimicking. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8766-8774.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The social model of disability", "authors": [ { "first": "Tom", "middle": [], "last": "Shakespeare", "suffix": "" } ], "year": 2016, "venue": "The Disability Studies Reader", "volume": "13", "issue": "", "pages": "190--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Shakespeare. 2016. The social model of disabil- ity. In Lennard J Davis, editor, The Disability Stud- ies Reader, fifth edition, chapter 13, pages 190-199. Routledge.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Premkumar Natarajan, and Nanyun Peng. 2020. Towards controllable biases in language generation", "authors": [ { "first": "Emily", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.00268" ] }, "num": null, "urls": [], "raw_text": "Emily Sheng, Kai-Wei Chang, Premkumar Natara- jan, and Nanyun Peng. 2020. Towards control- lable biases in language generation. arXiv preprint arXiv:2005.00268.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "An open letter to training directors regarding accommodations for deaf interns", "authors": [ { "first": "Christen", "middle": [], "last": "Szymanski", "suffix": "" } ], "year": 2010, "venue": "AAPIC-E Newsletter", "volume": "3", "issue": "2", "pages": "16--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christen Szymanski. 2010. An open letter to training directors regarding accommodations for deaf interns. AAPIC-E Newsletter, 3(2):16-17.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Final report of the task force on health care careers for the deaf and hard-of-hearing community", "authors": [], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Task Force Members and Contributors. 2012. Final report of the task force on health care careers for the deaf and hard-of-hearing community.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Disability and health", "authors": [], "year": 2022, "venue": "Online; accessed 03", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "WHO. 2021. Disability and health. [Online; accessed 03-March-2022].", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Wikipedia:wikiproject disability", "authors": [], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wikipedia contributors. 2022. Wikipedia:wikiproject disability. 
"BIBREF36": { "ref_id": "b36", "title": "HuggingFace's Transformers: State-of-the-art Natural Language Processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.03771" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv:1910.03771 [cs].", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "1. Prepare semantically bleached template sentences. For example, A [TARGET] person is [ATTRIBUTE]. 2. For each combination of target, attribute, and template: (a) Fill in the template: \"A deaf person is eligible.\" (b) Mask the target: \"A [MASK] person is eligible.\" (c) Compute the target's probability, given the context provided by the attribute." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "p_x = P([MASK] = \"deaf\" | sentence)" }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "(d) Mask both the target and the attribute: \"A [MASK] person is [MASK].\" (e) Compute the target's prior probability, given no context: p_prior = P([MASK] = \"deaf\" | masked_sentence) (f) Compute the association a between the target x and the attribute m: a_{x,m} = log(p_x / p_prior) (g) Compute the mean association score A between the target x and the attribute subdimension M: A_{x,M} = mean_{m \u2208 M} a_{x,m} (h) Compute the bias score for the attribute subdimension M as the difference between the mean association scores for two targets: bias_M = A_{y,M} \u2212 A_{x,M}" }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Bias scores for pairs of targets, when the attribute is predicted in the presence of the target.
For interpretation details, please refer to" }, "TABREF0": { "type_str": "table", "num": null, "html": null, "text": "Templates: 1. A [TARGET] person is [ATTRIBUTE]. 2. [TARGET] people are [ATTRIBUTE]. 3. A person who is [TARGET] is [ATTRIBUTE]. 4. People who are [TARGET] are [ATTRIBUTE].", "content": "
Targets: disabled / abled, deaf / hearing, blind / sighted
Stereotype Dimension | Subdimension | # Attributes
Warmth | Sociable | 155
Warmth | Unsociable | 156
Warmth | Moral | 159
Warmth | Immoral | 334
Competence | Able | 153
Competence | Unable | 127
Competence | Independent | 156
Competence | Dependent | 109
" }, "TABREF1": { "type_str": "table", "num": null, "html": null, "text": "Targets, stereotype attribute dimensions, and semantically bleached templates. The syntactic structure of templates 1 and 2 is typical of identity-first language, whereas templates 3 and 4 use person-first language.", "content": "" } } } }