{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:52:38.736423Z" }, "title": "Measuring Biases of Word Embeddings: What Similarity Measures and Descriptive Statistics to Use?", "authors": [ { "first": "Hossein", "middle": [], "last": "Azarpanah", "suffix": "", "affiliation": { "laboratory": "", "institution": "Concordia University Montreal", "location": { "region": "QC, CA" } }, "email": "hossein.azarpanah@concordia.ca" }, { "first": "Mohsen", "middle": [], "last": "Farhadloo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Concordia University Montreal", "location": { "region": "QC, CA" } }, "email": "mohsen.farhadloo@concordia.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word embeddings are widely used in Natural Language Processing (NLP) for a vast range of applications. However, it has been consistently proven that these embeddings reflect the same human biases that exist in the data used to train them. Most of the introduced bias indicators to reveal word embeddings' bias are average-based indicators based on the cosine similarity measure. In this study, we examine the impacts of different similarity measures as well as other descriptive techniques than averaging in measuring the biases of contextual and non-contextual word embeddings. We show that the extent of revealed biases in word embeddings depends on the descriptive statistics and similarity measures used to measure the bias. We found that over the ten categories of word embedding association tests, Mahalanobis distance reveals the smallest bias, and Euclidean distance reveals the largest bias in word embeddings. In addition, the contextual models reveal less severe biases than the noncontextual word embedding models with GPT showing the fewest number of WEAT biases.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Word embeddings are widely used in Natural Language Processing (NLP) for a vast range of applications. However, it has been consistently proven that these embeddings reflect the same human biases that exist in the data used to train them. Most of the introduced bias indicators to reveal word embeddings' bias are average-based indicators based on the cosine similarity measure. In this study, we examine the impacts of different similarity measures as well as other descriptive techniques than averaging in measuring the biases of contextual and non-contextual word embeddings. We show that the extent of revealed biases in word embeddings depends on the descriptive statistics and similarity measures used to measure the bias. We found that over the ten categories of word embedding association tests, Mahalanobis distance reveals the smallest bias, and Euclidean distance reveals the largest bias in word embeddings. In addition, the contextual models reveal less severe biases than the noncontextual word embedding models with GPT showing the fewest number of WEAT biases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word embedding models including Word2Vec (Mikolov et al., 2013) , GloVe (Pennington et al., 2014) , BERT (Devlin et al., 2018) , ELMo (Peters et al., 2018) , and GPT (Radford et al., 2018) have become popular components of many NLP frameworks and are vastly used for many downstream tasks. 
However, these word representations preserve not only the statistical properties of human language but also the human-like biases that exist in the data used to train them (Bolukbasi et al., 2016; Caliskan et al., 2017; Kurita et al., 2019; Basta et al., 2019; Gonen and Goldberg, 2019). It has also been shown that such biases propagate to downstream NLP tasks and have negative impacts on their performance (May et al., 2019; Leino et al., 2018). There are studies investigating how to mitigate the biases of word embeddings (Liang et al., 2020; Ravfogel et al., 2020).", "cite_spans": [ { "start": 41, "end": 63, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF18" }, { "start": 72, "end": 97, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF20" }, { "start": 105, "end": 126, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 129, "end": 155, "text": "ELMo (Peters et al., 2018)", "ref_id": null }, { "start": 166, "end": 188, "text": "(Radford et al., 2018)", "ref_id": "BIBREF22" }, { "start": 458, "end": 482, "text": "(Bolukbasi et al., 2016;", "ref_id": "BIBREF2" }, { "start": 483, "end": 505, "text": "Caliskan et al., 2017;", "ref_id": "BIBREF4" }, { "start": 506, "end": 526, "text": "Kurita et al., 2019;", "ref_id": "BIBREF13" }, { "start": 527, "end": 546, "text": "Basta et al., 2019;", "ref_id": "BIBREF0" }, { "start": 547, "end": 572, "text": "Gonen and Goldberg, 2019)", "ref_id": "BIBREF8" }, { "start": 700, "end": 718, "text": "(May et al., 2019;", "ref_id": "BIBREF17" }, { "start": 719, "end": 738, "text": "Leino et al., 2018)", "ref_id": "BIBREF14" }, { "start": 816, "end": 836, "text": "(Liang et al., 2020;", "ref_id": "BIBREF15" }, { "start": 837, "end": 859, "text": "Ravfogel et al., 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different approaches have been used to present and quantify corpus-level biases of word embeddings. Bolukbasi et al. (2016) proposed to measure the gender bias of word representations in Word2Vec and GloVe by calculating the projections onto the principal components of the differences of embeddings of a list of male-female word pairs. Basta et al. (2019) adapted the idea of the "gender direction" of (Bolukbasi et al., 2016) to be applicable to contextual word embeddings such as ELMo. In (Basta et al., 2019), the gender subspace of the ELMo vector representations is first calculated, and then the presence of gender bias in ELMo is identified. Gonen and Goldberg (2019) introduced a new gender bias indicator based on the percentage of socially-biased terms among the k-nearest neighbors of a target term and demonstrated its correlation with the gender-direction indicator. Caliskan et al. (2017) developed the Word Embedding Association Test (WEAT) to measure bias by comparing two sets of target words with two sets of attribute words and documented that Word2Vec and GloVe contain human-like biases such as gender and racial biases. May et al. (2019) generalized the WEAT test to phrases and sentences by inserting individual words from the WEAT tests into simple sentence templates and used them for contextual word embeddings. Kurita et al. (2019) proposed a method to quantify bias in BERT embeddings based on its masked language model objective, using simple template sentences. For each attribute word, the normalized probability that BERT assigns to a simple template sentence is calculated for each of the target words, and the difference between these probabilities is taken as the measure of bias. Kurita et al. (2019) demonstrated that this probability-based method for quantifying bias in BERT was more effective than the cosine-based method.", "cite_spans": [ { "start": 100, "end": 123, "text": "Bolukbasi et al. (2016)", "ref_id": "BIBREF2" }, { "start": 328, "end": 347, "text": "Basta et al. (2019)", "ref_id": "BIBREF0" }, { "start": 390, "end": 414, "text": "(Bolukbasi et al., 2016)", "ref_id": "BIBREF2" }, { "start": 479, "end": 499, "text": "(Basta et al., 2019)", "ref_id": "BIBREF0" }, { "start": 863, "end": 885, "text": "Caliskan et al. (2017)", "ref_id": "BIBREF4" }, { "start": 1121, "end": 1138, "text": "May et al. (2019)", "ref_id": "BIBREF17" }, { "start": 1313, "end": 1333, "text": "Kurita et al. (2019)", "ref_id": "BIBREF13" }, { "start": 1693, "end": 1713, "text": "Kurita et al. (2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
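{ "text": "To make the probe concrete, the following is a minimal sketch of the normalized log-probability association of Kurita et al. (2019). It is an illustration under stated assumptions, not the authors' exact implementation: it assumes the HuggingFace transformers library with the bert-base-uncased checkpoint, and the template and the example words ('he', 'she', 'a programmer') are hypothetical choices for illustration only.

import math
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

def mask_prob(sentence, target):
    # Probability that BERT assigns to `target` at the first [MASK] slot.
    inputs = tokenizer(sentence, return_tensors='pt')
    mask_pos = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(-1)
    return probs[0, tokenizer.convert_tokens_to_ids(target)].item()

def increased_log_prob(target, attribute):
    # P(target | attribute present), normalized by the prior
    # P(target | attribute masked as well), as in Kurita et al. (2019).
    p_target = mask_prob('[MASK] is ' + attribute, target)
    p_prior = mask_prob('[MASK] is [MASK]', target)
    return math.log(p_target / p_prior)

# The bias measure is the gap between the two target words:
bias = increased_log_prob('he', 'a programmer') - increased_log_prob('she', 'a programmer')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },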
{ "text": "Motivated by these recent studies, we comprehensively investigate different methods of exposing bias in word embeddings. In particular, we investigate the impacts of different similarity measures and descriptive statistics used to quantify the degree of association between the target sets and attribute sets in the WEAT. First, besides cosine similarity, we study the Euclidean, Manhattan, and Mahalanobis distances for measuring the degree of association between a single target word and a single attribute word. Second, besides averaging, we investigate the minimum, maximum, median, and a discrete (grid-based) optimization approach for finding the minimum possible association between a single target word and the two attribute sets in each of the WEAT tests. We consistently compare these bias measures for different types of word embeddings, including non-contextual (Word2Vec, GloVe) and contextual ones (BERT, ELMo, GPT, GPT2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Implicit Association Test (IAT) was first introduced by Greenwald et al. (1998a) in psychology to demonstrate the enormous differences in response time when participants are asked to pair two concepts they deem similar, in contrast to two concepts they find less similar. For example, when subjects are encouraged to work as quickly as possible, they are much more likely to label flowers as pleasant and insects as unpleasant. In the IAT, being able to pair a concept to an attribute quickly indicates that the concept and the attribute are linked together in the participants' minds. The IAT has been widely used to measure and quantify the strength of a range of implicit biases and other phenomena, including attitudes and stereotype threat (Karpinski and Hilton, 2001; Kiefer and Sekaquaptewa, 2007; Stanley et al., 2011).", "cite_spans": [ { "start": 56, "end": 80, "text": "Greenwald et al. (1998a)", "ref_id": "BIBREF9" }, { "start": 732, "end": 760, "text": "(Karpinski and Hilton, 2001;", "ref_id": "BIBREF11" }, { "start": 761, "end": 791, "text": "Kiefer and Sekaquaptewa, 2007;", "ref_id": "BIBREF12" }, { "start": 792, "end": 813, "text": "Stanley et al., 2011)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" },
{ "text": "Inspired by IAT, Caliskan et al. (2017) introduced WEAT to measure the associations between two sets of target concepts and two sets of attributes in word embeddings learned from large text corpora. A hypothesis test is conducted to demonstrate and quantify the bias. The null hypothesis states that there is no difference between the two sets of target words in terms of their relative distance/similarity to the two sets of attribute words. A permutation test is performed to measure the likelihood of the null hypothesis: it computes the probability that random permutations of the target words would produce a difference greater than the observed one. Let X and Y be two sets of target word embeddings and A and B be two sets of attribute embeddings. The test statistic is defined as:", "cite_spans": [ { "start": 17, "end": 39, "text": "Caliskan et al. (2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "$s(X, Y, A, B) = \left| \sum_{x \in X} s(x, A, B) - \sum_{y \in Y} s(y, A, B) \right|$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "where:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "$s(w, A, B) = f_{a \in A}\big(s(\vec{w}, \vec{a})\big) - f_{b \in B}\big(s(\vec{w}, \vec{b})\big) \quad (1)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "In other words, s(w, A, B) quantifies the association of a single word w with the two sets of attributes, and s(X, Y, A, B) measures the differential association of the two sets of targets with the two sets of attributes. Denoting all the equal-size partitions of $X \cup Y$ by $(X_i, Y_i)_i$, the one-sided p-value of the permutation test is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "$\Pr_i\big[ s(X_i, Y_i, A, B) > s(X, Y, A, B) \big]$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "The magnitude of the association of the two target sets with the two attribute sets can be measured with the effect size:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "$d = \dfrac{\left| \mathrm{mean}_{x \in X}\, s(x, A, B) - \mathrm{mean}_{y \in Y}\, s(y, A, B) \right|}{\mathrm{std\text{-}dev}_{w \in X \cup Y}\, s(w, A, B)}$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "It is worth mentioning that d measures how separated the two distributions are; it is essentially the standardized difference of the means of the two distributions (Cohen, 2013). Controlling for significance, a larger effect size reflects a more severe bias. WEAT and almost all the studies it has inspired (Garg et al., 2018; Brunet et al., 2018; Gonen and Goldberg, 2019; May et al., 2019) use the following approach to measure the association of a single target word with the two sets of attributes (Equation (1)). First, they use cosine similarity to measure the target word's similarity to each word in the attribute sets. Then they calculate the average of the similarities over each attribute set.", "cite_spans": [ { "start": 180, "end": 193, "text": "(Cohen, 2013)", "ref_id": "BIBREF5" }, { "start": 333, "end": 352, "text": "(Garg et al., 2018;", "ref_id": "BIBREF7" }, { "start": 353, "end": 373, "text": "Brunet et al., 2018;", "ref_id": "BIBREF3" }, { "start": 374, "end": 399, "text": "Gonen and Goldberg, 2019;", "ref_id": "BIBREF8" }, { "start": 400, "end": 417, "text": "May et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" },
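{ "text": "A minimal sketch of these quantities, assuming numpy and embeddings given as rows of the arrays X, Y, A, B; this illustrates the definitions above rather than reproducing the authors' code. Note that the exact permutation test enumerates all equal-size partitions, which is feasible only for small target sets; in practice one would sample random permutations instead.

import numpy as np
from itertools import combinations

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def s_word(w, A, B, sim=cosine, f=np.mean):
    # Eq. (1): association of one target word with the two attribute sets.
    return f([sim(w, a) for a in A]) - f([sim(w, b) for b in B])

def s_test(X, Y, A, B):
    # The test statistic s(X, Y, A, B).
    return abs(sum(s_word(x, A, B) for x in X) - sum(s_word(y, A, B) for y in Y))

def effect_size(X, Y, A, B):
    sx = [s_word(x, A, B) for x in X]
    sy = [s_word(y, A, B) for y in Y]
    # Sample standard deviation; conventions for ddof differ across reimplementations.
    return abs(np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

def p_value(X, Y, A, B):
    # One-sided permutation test over the equal-size partitions of the pooled targets.
    observed = s_test(X, Y, A, B)
    pool = np.vstack([X, Y])
    splits = list(combinations(range(len(pool)), len(X)))
    hits = 0
    for xi in splits:
        yi = [i for i in range(len(pool)) if i not in xi]
        hits += s_test(pool[list(xi)], pool[yi], A, B) > observed
    return hits / len(splits)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" },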
{ "text": "In this paper we investigate the impacts of other functions, such as min(\u2022), mean(\u2022), median(\u2022), or max(\u2022), for the function f(\u2022) in Equation (1) (originally, only mean(\u2022) has been used). In addition to cosine similarity, we also consider the Euclidean and Manhattan distances, as well as the following measures, for $s(\vec{w}, \vec{a})$ in Equation (1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "Mahalanobis distance: introduced by P. C. Mahalanobis (Mahalanobis, 1936), this distance measures the distance of a point from a distribution: $s(\vec{w}, \vec{a}) = \big( (\vec{w} - \vec{a})^T \Sigma_A^{-1} (\vec{w} - \vec{a}) \big)^{1/2}$. It is worth noting that the Mahalanobis distance takes into account the distribution of the set of attributes while measuring the association of the target word w with an attribute vector.", "cite_spans": [ { "start": 54, "end": 73, "text": "(Mahalanobis, 1936)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "Discrete optimization of the association measure: In Equation (1), s(w, A, B) quantifies the association of a single target word w with the two sets of attributes. To quantify the minimum possible association of a target word w with the two sets of attributes, we first calculate the distance of w from all attribute words in A and B, then calculate all possible differences and take the minimum:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "$s(w, A, B) = \min_{a \in A,\, b \in B} \left| s(\vec{w}, \vec{a}) - s(\vec{w}, \vec{b}) \right| \quad (2)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" },
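{ "text": "The following sketch shows the association measure with the two pluggable pieces studied here, the distance s and the statistic f, plus the discretely optimized minimum of Equation (2). It assumes numpy; Sigma_inv is an (assumed) inverse covariance matrix for the corresponding attribute set, whose estimation is described in the next section.

import numpy as np
from functools import partial

def euclidean(w, a):
    return np.linalg.norm(w - a)

def manhattan(w, a):
    return np.abs(w - a).sum()

def mahalanobis(w, a, Sigma_inv):
    # Distance of w from attribute vector a under the attribute set's distribution.
    d = w - a
    return np.sqrt(d @ Sigma_inv @ d)

def association(w, A, B, dist_A, dist_B, f):
    # Eq. (1) with f in {np.min, np.mean, np.median, np.max}.
    # For cosine/Euclidean/Manhattan, pass the same function for dist_A and dist_B;
    # for Mahalanobis, bind each set's inverse covariance, e.g.
    # dist_A = partial(mahalanobis, Sigma_inv=Sigma_inv_A).
    return f([dist_A(w, a) for a in A]) - f([dist_B(w, b) for b in B])

def min_association(w, A, B, dist_A, dist_B):
    # Eq. (2): the smallest attainable difference over all attribute pairs.
    return min(abs(dist_A(w, a) - dist_B(w, b)) for a in A for b in B)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" },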
{ "text": "3 Biases studied", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "We studied all ten bias categories introduced in the IAT (Greenwald et al., 1998a) and replicated in WEAT to measure the biases in word embeddings. The ten WEAT categories are briefly introduced in Table 1. For more details and examples of target and attribute words, please see Appendix A. Although WEAT 3 to 5 have the same names, they have different target and attribute words.", "cite_spans": [], "ref_spans": [ { "start": 194, "end": 201, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "Table 1: The associations studied in the WEAT. 1: Flowers vs. insects with pleasant vs. unpleasant. 2: Instruments vs. weapons with pleasant vs. unpleasant. 3: Eur.-American vs. Afr.-American names with pleasant vs. unpleasant (Greenwald et al., 1998b). 4: Eur.-American vs. Afr.-American names (Bertrand and Mullainathan, 2004) with pleasant vs. unpleasant (Greenwald et al., 1998b). 5: Eur.-American vs. Afr.-American names (Bertrand and Mullainathan, 2004) with pleasant vs. unpleasant (Nosek et al., 2002). 6: Male vs. female names with career vs. family. 7: Math vs. arts with male vs. female terms. 8: Science vs. arts with male vs. female terms. 9: Mental vs. physical disease with temporary vs. permanent. 10: Young vs. old people's names with pleasant vs. unpleasant.", "cite_spans": [ { "start": 378, "end": 411, "text": "(Bertrand and Mullainathan, 2004)", "ref_id": "BIBREF1" }, { "start": 417, "end": 460, "text": "Pleasant vs unpleasant (Nosek et al., 2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "As described in Section 2, we need each attribute set's covariance matrix to compute the Mahalanobis distance. To obtain stable covariance estimates despite the high dimension of the embeddings, we first created larger attribute sets by adding synonym terms. Next, we estimated sparse covariance matrices, since the number of samples in each attribute set is smaller than the number of features. To enforce sparsity, we selected the l1 penalty using k-fold cross-validation with k = 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" },
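{ "text": "One way to implement this estimation step, assuming scikit-learn (the library choice is ours, not stated in the paper): GraphicalLassoCV fits an l1-penalized inverse covariance with the penalty chosen by cross-validation. Here A is an assumed (n_words x dim) array of attribute embeddings, already enlarged with synonyms as described above.

import numpy as np
from sklearn.covariance import GraphicalLassoCV

def attribute_precision(A):
    # Sparse (graphical lasso) estimate; the l1 penalty is picked by 3-fold CV.
    est = GraphicalLassoCV(cv=3).fit(A)
    return est.precision_  # inverse covariance, as used by the Mahalanobis distance

# Sigma_inv_A = attribute_precision(A_embeddings)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" },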
{ "text": "We examined the 10 types of biases in WEAT (Table 1) for the word embedding models listed in Table 2. We used publicly available pre-trained models. For contextual word embeddings, we used single-word sentences as input instead of the simple template sentences used in other studies (May et al., 2019; Kurita et al., 2019). Simple template sentences such as "this is TARGET" or "TARGET is ATTRIBUTE" do not really provide any context that exercises the contextual capability of embeddings such as BERT or ELMo. This way, the comparisons between the contextual and non-contextual embeddings are fairer, as both only receive the target or attribute terms as input. For each model, we performed the WEAT tests using the four similarity metrics mentioned in Section 2: cosine, Euclidean, Manhattan, and Mahalanobis. For each similarity metric, we also used min(\u2022), mean(\u2022), median(\u2022), or max(\u2022) as the f(\u2022) in Equation (1). In addition, as explained in Section 2, we discretely optimized the association measure in Equation (1) to find the minimum association (Eq. (2)). In these experiments (Table 3 and Table 4), larger and more significant effect sizes imply more severe biases. Impacts of different descriptive statistics: Our first goal was to report how the measured biases change when the descriptive statistic changes. The effect sizes ranged from 0.00 to 1.89 (\u00b5 = 0.65, \u03c3 = 0.5). Our findings show that the mean has the best capability to reveal biases, as it yields the most cases of significant effect sizes (\u00b5 = 0.8, \u03c3 = 0.52) across models and distance measures. The median is close to the mean (\u00b5 = 0.74, \u03c3 = 0.48) over all its effect sizes. The effect sizes for the minimum (\u00b5 = 0.68, \u03c3 = 0.48) and the maximum (\u00b5 = 0.65, \u03c3 = 0.48) are close to each other but smaller than those for the mean and median. The discretely optimized association measure (Eq. (2)) yields the smallest effect sizes (\u00b5 = 0.39, \u03c3 = 0.3) and reveals the smallest number of implicit biases. These differences resulting from applying different descriptive statistics in the association measure (Eq. (1)) show that the revealed biases depend on the statistics used to measure the bias.", "cite_spans": [ { "start": 292, "end": 310, "text": "(May et al., 2019;", "ref_id": "BIBREF17" }, { "start": 311, "end": 331, "text": "Kurita et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 53, "end": 62, "text": "(Table 1)", "ref_id": null }, { "start": 99, "end": 106, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1107, "end": 1128, "text": "(Table 3 and Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results of experiments", "sec_num": "4" }, { "text": "For example, for the cosine distance with Word2Vec, if we change the descriptive statistic from mean to minimum, the biases for WEAT 3 and WEAT 4 become insignificant (no bias is reported). As another example, for the GPT model, while the results for the mean of cosine are not significant for WEAT 3 and WEAT 4, they become significant for the median of cosine. Moreover, for almost all models, the effect size of the discretely optimized minimum distance is not significant. Our intention in considering this statistic was to report the minimum possible association of a target word with the attribute sets. If this measure were used for reporting biases, one could misleadingly claim that there is no bias.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of experiments", "sec_num": "4" }, { "text": "Impacts of different similarity measures: The effect sizes for cosine, Manhattan, and Euclidean are close to each other and greater than those for the Mahalanobis distance (cosine: \u00b5 = 0.72, \u03c3 = 0.49; Euclidean: \u00b5 = 0.67, \u03c3 = 0.5; Manhattan: \u00b5 = 0.63, \u03c3 = 0.48; Mahalanobis: \u00b5 = 0.58, \u03c3 = 0.45). The Mahalanobis distance also detects the fewest significant bias types across all models. As an example, while the mean and median effect sizes for WEAT 3 and WEAT 5 in GloVe and Word2Vec are mostly significant for cosine, Euclidean, and Manhattan, the same results are not significant for the Mahalanobis distance. That means that with the Mahalanobis distance as the measure of bias, no bias would be reported for the WEAT 3 and WEAT 5 tests. This emphasizes the importance of the chosen similarity measure in detecting the biases of word embeddings. More importantly, as the Mahalanobis distance considers the distribution of the attributes in measuring the distance, it may be a better choice than the other similarity measures for measuring and revealing biases; under this measure, GPT shows the fewest biases. Biases in different word embedding models: Using any combination of descriptive statistics and similarity measures, all the contextualized models have fewer significant biases than GloVe and Word2Vec. Table 3 reports the number of tests with significant implicit biases out of the 10 WEAT tests, along with the mean and standard deviation of the effect sizes, for all embedding models. The complete list of effect sizes along with their p-values is provided in Table 4. Following our findings in the previous sections, we choose the mean of Euclidean to reveal biases. By doing so, GloVe and Word2Vec show the largest numbers of significant biases, with 9 and 7 significant biases over the 10 WEAT categories (Table 3). Using the mean of Euclidean, our results confirm all the results of Caliskan et al. (2017), who used the mean of cosine in all WEAT tests. The difference is that with the mean of Euclidean measure, the biases are revealed as more severe (smaller p-values). 
Using the mean of Euclidean, GPT and ELMo show the fewest implicit biases. The GPT model shows biases in WEAT 2, 3, and 5. ELMo's significant biases are in WEAT 1, 3, and 6. Using the mean of Euclidean, almost all models (except ELMo) confirm the existence of a bias in WEAT 3 to 5. Moreover, none of the contextualized models found a bias in associating female terms with arts and male terms with math (WEAT 7), mental diseases with temporary attributes and physical diseases with permanent attributes (WEAT 9), or young people's names with pleasant attributes and old people's names with unpleasant attributes (WEAT 10).", "cite_spans": [ { "start": 1871, "end": 1893, "text": "Caliskan et al. (2017)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 1292, "end": 1299, "text": "Table 3", "ref_id": null }, { "start": 1561, "end": 1568, "text": "Table 4", "ref_id": null }, { "start": 1795, "end": 1804, "text": "(Table 3)", "ref_id": null }, { "start": 2668, "end": 2675, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results of experiments", "sec_num": "4" }, { "text": "Table 3: Number of revealed biases out of the 10 WEAT bias types for the studied word embeddings, along with the (\u00b5, \u03c3) of their effect sizes. The larger the effect size, the more severe the bias.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of experiments", "sec_num": "4" }, { "text": "We studied the impacts of different descriptive statistics and similarity measures on association tests for measuring biases in contextualized and non-contextualized word embeddings. Our findings demonstrate that the detected biases depend on the choice of association measure. Based on our experiments, the mean reveals more severe biases, and the discretely optimized version reveals fewer severe biases. In addition, the cosine distance reveals more severe biases, and the Mahalanobis distance reveals less severe ones: reporting biases with the mean of Euclidean distances identifies more severe biases in the models, while the mean of Mahalanobis distances identifies less severe ones. Furthermore, the contextual models show fewer biases than the non-contextual ones across all 10 WEAT tests, with GPT showing the fewest biases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Table 4: WEAT effect sizes. *: significance at 0.01, **: significance at 0.001, ***: significance at 0.0001, ****: significance at 0.00001.", "cite_spans": [], "ref_spans": [ { "start": 783, "end": 790, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Conclusions", "sec_num": "5" } ], "back_matter": [], "bib_entries": {
"BIBREF0": { "ref_id": "b0", "title": "Evaluating the underlying gender bias in contextualized word embeddings", "authors": [ { "first": "Christine", "middle": [], "last": "Basta", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Noe", "middle": [], "last": "Casas", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.08783" ] }, "num": null, "urls": [], "raw_text": "Christine Basta, Marta R Costa-juss\u00e0, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. arXiv preprint arXiv:1904.08783.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Are emily and greg more employable than lakisha and jamal? a field experiment on labor market discrimination", "authors": [ { "first": "Marianne", "middle": [], "last": "Bertrand", "suffix": "" }, { "first": "Sendhil", "middle": [], "last": "Mullainathan", "suffix": "" } ], "year": 2004, "venue": "American economic review", "volume": "94", "issue": "4", "pages": "991--1013", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marianne Bertrand and Sendhil Mullainathan. 2004. Are emily and greg more employable than lakisha and jamal? a field experiment on labor market discrimination. American economic review, 94(4):991-1013.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", "authors": [ { "first": "Tolga", "middle": [], "last": "Bolukbasi", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "James", "middle": [ "Y" ], "last": "Zou", "suffix": "" }, { "first": "Venkatesh", "middle": [], "last": "Saligrama", "suffix": "" }, { "first": "Adam", "middle": [ "T" ], "last": "Kalai", "suffix": "" } ], "year": 2016, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "4349--4357", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in neural information processing systems, pages 4349-4357.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Understanding the origins of bias in word embeddings", "authors": [ { "first": "Marc-Etienne", "middle": [], "last": "Brunet", "suffix": "" }, { "first": "Colleen", "middle": [], "last": "Alkalay-Houlihan", "suffix": "" }, { "first": "Ashton", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.03611" ] }, "num": null, "urls": [], "raw_text": "Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. 2018. Understanding the origins of bias in word embeddings. arXiv preprint arXiv:1810.03611.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Semantics derived automatically from language corpora contain human-like biases", "authors": [ { "first": "Aylin", "middle": [], "last": "Caliskan", "suffix": "" }, { "first": "Joanna", "middle": [ "J" ], "last": "Bryson", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2017, "venue": "Science", "volume": "356", "issue": "6334", "pages": "183--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical power analysis for the behavioral sciences", "authors": [ { "first": "Jacob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Cohen. 2013. Statistical power analysis for the behavioral sciences. Academic press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Word embeddings quantify 100 years of gender and ethnic stereotypes", "authors": [ { "first": "Nikhil", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Londa", "middle": [], "last": "Schiebinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "James", "middle": [], "last": "Zou", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the National Academy of Sciences", "volume": "115", "issue": "16", "pages": "E3635--E3644", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635-E3644.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them", "authors": [ { "first": "Hila", "middle": [], "last": "Gonen", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.03862" ] }, "num": null, "urls": [], "raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Measuring individual differences in implicit cognition: the implicit association test", "authors": [ { "first": "Anthony", "middle": [ "G" ], "last": "Greenwald", "suffix": "" }, { "first": "Debbie", "middle": [ "E" ], "last": "McGhee", "suffix": "" }, { "first": "Jordan", "middle": [ "L", "K" ], "last": "Schwartz", "suffix": "" } ], "year": 1998, "venue": "Journal of personality and social psychology", "volume": "74", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony G Greenwald, Debbie E McGhee, and Jordan LK Schwartz. 1998a. Measuring individual differences in implicit cognition: the implicit association test. Journal of personality and social psychology, 74(6):1464.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Measuring individual differences in implicit cognition: the implicit association test", "authors": [ { "first": "Anthony", "middle": [ "G" ], "last": "Greenwald", "suffix": "" }, { "first": "Debbie", "middle": [ "E" ], "last": "McGhee", "suffix": "" }, { "first": "Jordan", "middle": [ "L", "K" ], "last": "Schwartz", "suffix": "" } ], "year": 1998, "venue": "Journal of personality and social psychology", "volume": "74", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony G Greenwald, Debbie E McGhee, and Jordan LK Schwartz. 1998b. Measuring individual differences in implicit cognition: the implicit association test. Journal of personality and social psychology, 74(6):1464.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Attitudes and the implicit association test", "authors": [ { "first": "Andrew", "middle": [], "last": "Karpinski", "suffix": "" }, { "first": "James", "middle": [ "L" ], "last": "Hilton", "suffix": "" } ], "year": 2001, "venue": "Journal of personality and social psychology", "volume": "81", "issue": "5", "pages": "774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Karpinski and James L Hilton. 2001. Attitudes and the implicit association test. Journal of personality and social psychology, 81(5):774.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Implicit stereotypes and women's math performance: How implicit gender-math stereotypes influence women's susceptibility to stereotype threat", "authors": [ { "first": "Amy", "middle": [ "K" ], "last": "Kiefer", "suffix": "" }, { "first": "Denise", "middle": [], "last": "Sekaquaptewa", "suffix": "" } ], "year": 2007, "venue": "Journal of experimental social psychology", "volume": "43", "issue": "5", "pages": "825--832", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amy K Kiefer and Denise Sekaquaptewa. 2007. Implicit stereotypes and women's math performance: How implicit gender-math stereotypes influence women's susceptibility to stereotype threat. Journal of experimental social psychology, 43(5):825-832.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Measuring bias in contextualized word representations", "authors": [ { "first": "Keita", "middle": [], "last": "Kurita", "suffix": "" }, { "first": "Nidhi", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "Ayush", "middle": [], "last": "Pareek", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.07337" ] }, "num": null, "urls": [], "raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. arXiv preprint arXiv:1906.07337.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Feature-wise bias amplification", "authors": [ { "first": "Klas", "middle": [], "last": "Leino", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Black", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Fredrikson", "suffix": "" }, { "first": "Shayak", "middle": [], "last": "Sen", "suffix": "" }, { "first": "Anupam", "middle": [], "last": "Datta", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1812.08999" ] }, "num": null, "urls": [], "raw_text": "Klas Leino, Emily Black, Matt Fredrikson, Shayak Sen, and Anupam Datta. 2018. Feature-wise bias amplification. arXiv preprint arXiv:1812.08999.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards debiasing sentence representations", "authors": [ { "first": "Paul", "middle": [ "Pu" ], "last": "Liang", "suffix": "" }, { "first": "Irene", "middle": [ "Mengze" ], "last": "Li", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Yao", "middle": [ "Chong" ], "last": "Lim", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Louis-Philippe", "middle": [], "last": "Morency", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.08100" ] }, "num": null, "urls": [], "raw_text": "Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2020. Towards debiasing sentence representations. arXiv preprint arXiv:2007.08100.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "On the generalized distance in statistics", "authors": [ { "first": "Prasanta", "middle": [ "Chandra" ], "last": "Mahalanobis", "suffix": "" } ], "year": 1936, "venue": "Proceedings of the National Institute of Sciences of India", "volume": "", "issue": "", "pages": "49--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prasanta Chandra Mahalanobis. 1936. On the generalized distance in statistics. Proceedings of the National Institute of Sciences of India, pages 49-55.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "On measuring social biases in sentence encoders", "authors": [ { "first": "Chandler", "middle": [], "last": "May", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shikha", "middle": [], "last": "Bordia", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.10561" ] }, "num": null, "urls": [], "raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel R Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. arXiv preprint arXiv:1903.10561.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Harvesting implicit group attitudes and beliefs from a demonstration web site", "authors": [ { "first": "Brian", "middle": [ "A" ], "last": "Nosek", "suffix": "" }, { "first": "Mahzarin", "middle": [ "R" ], "last": "Banaji", "suffix": "" }, { "first": "Anthony", "middle": [ "G" ], "last": "Greenwald", "suffix": "" } ], "year": 2002, "venue": "Group Dynamics: Theory, Research, and Practice", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian A Nosek, Mahzarin R Banaji, and Anthony G Greenwald. 2002. Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics: Theory, Research, and Practice, 6(1):101.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.05365" ] }, "num": null, "urls": [], "raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Null it out: Guarding protected attributes by iterative nullspace projection", "authors": [ { "first": "Shauli", "middle": [], "last": "Ravfogel", "suffix": "" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Hila", "middle": [], "last": "Gonen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Twiton", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.07667" ] }, "num": null, "urls": [], "raw_text": "Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. arXiv preprint arXiv:2004.07667.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Implicit race attitudes predict trustworthiness judgments and economic trust decisions", "authors": [ { "first": "Damian", "middle": [ "A" ], "last": "Stanley", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Sokol-Hessner", "suffix": "" }, { "first": "Mahzarin", "middle": [ "R" ], "last": "Banaji", "suffix": "" }, { "first": "Elizabeth", "middle": [ "A" ], "last": "Phelps", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the National Academy of Sciences", "volume": "108", "issue": "19", "pages": "7710--7715", "other_ids": {}, "num": null, "urls": [], "raw_text": "Damian A Stanley, Peter Sokol-Hessner, Mahzarin R Banaji, and Elizabeth A Phelps. 2011. Implicit race attitudes predict trustworthiness judgments and economic trust decisions. Proceedings of the National Academy of Sciences, 108(19):7710-7715.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Table 1, rows 1-5 (reconstructed in Section 3).", "type_str": "figure", "num": null }, "TABREF1": { "type_str": "table", "content":