{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:54:11.170348Z"
},
"title": "Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings",
"authors": [
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Delhi Technological University",
"location": {
"settlement": "New Delhi",
"country": "India"
}
},
"email": "kumar.vaibhav1o1@gmail.com"
},
{
"first": "Tenzin",
"middle": [],
"last": "Singhay Bhotia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Delhi Technological University",
"location": {
"settlement": "New Delhi",
"country": "India"
}
},
"email": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Delhi Technological University",
"location": {
"settlement": "New Delhi",
"country": "India"
}
},
"email": "tanmoy@iiitd.ac.in"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embeddings are the standard model for semantic and syntactic representations of words. Unfortunately, these models have been shown to exhibit undesirable word associations resulting from gender, racial, and religious biases. Existing post-processing methods for debiasing word embeddings are unable to mitigate gender bias hidden in the spatial arrangement of word vectors. In this paper, we propose RAN-Debias, a novel gender debiasing methodology that not only eliminates the bias present in a word vector but also alters the spatial distribution of its neighboring vectors, achieving a bias-free setting while maintaining minimal semantic offset. We also propose a new bias evaluation metric, Gender-based Illicit Proximity Estimate (GIPE), which measures the extent of undue proximity in word vectors resulting from the presence of gender-based predilections. Experiments based on a suite of evaluation metrics show that RAN-Debias significantly outperforms the state-of-the-art in reducing proximity bias (GIPE) by at least 42.02%. It also reduces direct bias, adding minimal semantic disturbance, and achieves the best performance in a downstream application task (coreference resolution).",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embeddings are the standard model for semantic and syntactic representations of words. Unfortunately, these models have been shown to exhibit undesirable word associations resulting from gender, racial, and religious biases. Existing post-processing methods for debiasing word embeddings are unable to mitigate gender bias hidden in the spatial arrangement of word vectors. In this paper, we propose RAN-Debias, a novel gender debiasing methodology that not only eliminates the bias present in a word vector but also alters the spatial distribution of its neighboring vectors, achieving a bias-free setting while maintaining minimal semantic offset. We also propose a new bias evaluation metric, Gender-based Illicit Proximity Estimate (GIPE), which measures the extent of undue proximity in word vectors resulting from the presence of gender-based predilections. Experiments based on a suite of evaluation metrics show that RAN-Debias significantly outperforms the state-of-the-art in reducing proximity bias (GIPE) by at least 42.02%. It also reduces direct bias, adding minimal semantic disturbance, and achieves the best performance in a downstream application task (coreference resolution).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embedding methods (Devlin et al., 2019; Mikolov et al., 2013a; Pennington et al., 2014) have been staggeringly successful in mapping the semantic space of words to a space of real-valued vectors, capturing both semantic and syntactic relationships. However, as recent research has shown, word embeddings also possess a spectrum of biases related to gender (Bolukbasi et al., 2016; Hoyle et al., 2019) , race, and religion (Manzini et al., 2019; Otterbacher et al., 2017) . Bolukbasi et al. (2016) showed that there is a disparity in the association of professions with gender. For instance, while women are associated more closely with ''receptionist'' and ''nurse'', men are associated more closely with ''doctor'' and ''engineer''. Similarly, a word embedding model trained on data from a popular social media platform generates analogies such as ''Muslim is to terrorist as Christian is to civilian'' (Manzini et al., 2019) . Therefore, given the large scale use of word embeddings, it becomes cardinal to remove the manifestation of biases. In this work, we focus on mitigating gender bias from pre-trained word embeddings.",
"cite_spans": [
{
"start": 23,
"end": 44,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 45,
"end": 67,
"text": "Mikolov et al., 2013a;",
"ref_id": "BIBREF22"
},
{
"start": 68,
"end": 92,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 361,
"end": 385,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 386,
"end": 405,
"text": "Hoyle et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 427,
"end": 449,
"text": "(Manzini et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 450,
"end": 475,
"text": "Otterbacher et al., 2017)",
"ref_id": "BIBREF29"
},
{
"start": 478,
"end": 501,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF1"
},
{
"start": 909,
"end": 931,
"text": "(Manzini et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As shown in Table 1 , the high degree of similarity between gender-biased words largely results from their individual proclivity towards a particular notion (gender in this case) rather than from empirical utility; we refer to such proximities as ''illicit proximities''. Existing debiasing methods (Bolukbasi et al., 2016; Kaneko and Bollegala, 2019) are primarily concerned with debiasing a word vector by minimising its projection on the gender direction. Although they successfully mitigate direct bias for a word, they tend to ignore the relationship between a gender-neutral word vector and its neighbors, thus failing to remove the gender bias encoded as illicit proximities between words (Gonen and Goldberg, 2019; Williams et al., 2019) . For the sake of brevity, we refer to ''genderbased illicit proximities'' as ''illicit proximities'' in the rest of the paper. Neighbors nurse mother 12 , woman 24 , filipina 31 receptionist housekeeper 9 , hairdresser 15 , prostitute 69 prostitute housekeeper 19 , hairdresser 41 , babysitter 44 schoolteacher homemaker 2 , housewife 4 , waitress 8 Table 1 : Words and their neighbors extracted using GloVe (Pennington et al., 2014) . Subscript indicates the rank of the neighbor.",
"cite_spans": [
{
"start": 299,
"end": 323,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 324,
"end": 351,
"text": "Kaneko and Bollegala, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 696,
"end": 722,
"text": "(Gonen and Goldberg, 2019;",
"ref_id": "BIBREF8"
},
{
"start": 723,
"end": 745,
"text": "Williams et al., 2019)",
"ref_id": "BIBREF38"
},
{
"start": 1165,
"end": 1190,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": null
},
{
"start": 874,
"end": 1114,
"text": "Neighbors nurse mother 12 , woman 24 , filipina 31 receptionist housekeeper 9 , hairdresser 15 , prostitute 69 prostitute housekeeper 19 , hairdresser 41 , babysitter 44 schoolteacher homemaker 2 , housewife 4 , waitress 8 Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To account for these problems, we propose a post-processing based debiasing scheme for noncontextual word embeddings, called RAN-Debias (Repulsion, Attraction, and Neutralization based Debiasing). RAN-Debias not only minimizes the projection of gender-biased word vectors on the gender direction but also reduces the semantic similarity with neighboring word vectors having illicit proximities. We also propose KBC (Knowledge Based Classifier), a word classification algorithm for selecting the set of words to be debiased. KBC utilizes a set of existing lexical knowledge bases to maximize classification accuracy. Additionally, we propose a metric, Gender-based Illicit Proximity Estimate (GIPE), which quantifies gender bias in the embedding space resulting from the presence of illicit proximities between word vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word",
"sec_num": null
},
{
"text": "We evaluate debiasing efficacy on various evaluation metrics. For the gender relational analogy test on the SemBias dataset (Zhao et al., 2018b) , RAN-GloVe (RAN-Debias applied to GloVe word embedding) outperforms the next best baseline GN-GloVe (debiasing method proposed by Zhao et al. [2018b] ) by 21.4% in genderstereotype type. RAN-Debias also outperforms the best baseline by at least 42.02% in terms of GIPE. Furthermore, the performance of RAN-GloVe on word similarity and analogy tasks on a number of benchmark datasets indicates the addition of minimal semantic disturbance. In short, our major contributions 1 can be summarized as follows:",
"cite_spans": [
{
"start": 124,
"end": 144,
"text": "(Zhao et al., 2018b)",
"ref_id": "BIBREF41"
},
{
"start": 276,
"end": 295,
"text": "Zhao et al. [2018b]",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word",
"sec_num": null
},
{
"text": "\u2022 We provide a knowledge-based method (KBC) for classifying words to be debiased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word",
"sec_num": null
},
{
"text": "\u2022 We introduce RAN-Debias, a novel approach to reduce both direct and gender-based proximity biases in word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word",
"sec_num": null
},
{
"text": "\u2022 We propose GIPE, a novel metric to measure the extent of undue proximities in word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word",
"sec_num": null
},
{
"text": "2 Related Work 2.1 Gender Bias in Word Embedding Models Caliskan et al. (2017) highlighted that humanlike semantic biases are reflected through word embeddings (such as GloVe [Pennington et al., 2014] ) of ordinary language. They also introduced the Word Embedding Association Test (WEAT) for measuring bias in word embeddings. The authors showed a strong presence of biases in pre-trained word vectors. In addition to gender, they also identified bias related to race. For instance, European-American names are more associated with pleasant terms as compared to African-American names. In the following subsections, we discuss existing gender debiasing methods based on their mode of operation. Methods that operate on pre-trained word embeddings are known as post-processing methods, and those which aim to retrain word embeddings by either introducing corpus-level changes or modifying the training objective are known as learning-based methods.",
"cite_spans": [
{
"start": 56,
"end": 78,
"text": "Caliskan et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 175,
"end": 200,
"text": "[Pennington et al., 2014]",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word",
"sec_num": null
},
{
"text": "Bolukbasi et al. (2016) extensively studied gender bias in word embeddings and proposed two debiasing strategies-''hard debias'' and ''soft debias''. Hard debias algorithm first determines the direction that captures the gender information in the word embedding space using the difference vectors (e.g., he \u2212 she). It then transforms each word vector w to be debiased such that it becomes perpendicular to the gender direction (neutralization). Further, for a given set of word pairs (equalization set), it modifies each pair such that w becomes equidistant to each word in the pair (equalization). On the other hand, the soft debias algorithm applies a linear transformation to word vectors, which preserves pairwise inner products among all the word vectors while limiting the projection of gender-neutral words on the gender direction. The authors showed that the former performs better for debiasing than the latter. However, to determine the set of words for debiasing, a support vector machine (SVM) classifier is used, which is trained on a small set of seed words. This makes the accuracy of the approach highly dependent on the generalization of the classifier to all remaining words in the vocabulary. Kaneko and Bollegala (2019) proposed a postprocessing step in which the given vocabulary is split into four classes-non-discriminative female-biased words (e.g., ''bikini'', ''lipstick''), non-discriminative male-biased words (e.g., ''beard'', ''moustache''), gender-neutral words (e.g., ''meal'', ''memory'') , and stereotypical words (e.g., ''librarian'', ''doctor'') . A set of seed words is then used for each of the categories to train an embedding using an encoder in a denoising autoencoder, such that gender-related biases from stereotypical words are removed, while preserving feminine information for nondiscriminative female-biased words, masculine information for non-discriminative male-biased words, and neutrality of the gender-neutral words. The use of the correct set of seed words is critical for the approach. Moreover, inappropriate associations between words (such as ''nurse'' and ''receptionist'') may persist. Gonen and Goldberg (2019) showed that current approaches (Bolukbasi et al., 2016; Zhao et al., 2018b) , which depend on gender direction for the definition of gender bias and directly target it for the mitigation process, end up hiding the bias rather than reduce it. The relative spatial distribution of word vectors before and after debiasing is similar, and bias-related information can still be recovered. Ethayarajh et al. (2019) provided theoretical proof for hard debias (Bolukbasi et al., 2016) and discussed the theoretical flaws in WEAT by showing that it systematically overestimates gender bias in word embeddings. The authors presented an alternate gender bias measure, called RIPA (Relational Inner Product Association), that quantifies gender bias using gender direction. Further, they illustrated that vocabulary selection for gender debiasing is as crucial as the debiasing procedure. Zhou et al. (2019) investigated the presence of gender bias in bilingual word embeddings and languages which have grammatical gender (such as Spanish and French). Further, they defined semantic gender direction and grammatical gender direction used for quantifying and mitigating gender bias. In this paper, we only focus on languages that have non-gendered grammar (e.g., English). Our method can be applied to any such language.",
"cite_spans": [
{
"start": 1212,
"end": 1239,
"text": "Kaneko and Bollegala (2019)",
"ref_id": "BIBREF13"
},
{
"start": 1500,
"end": 1521,
"text": "''meal'', ''memory'')",
"ref_id": null
},
{
"start": 1555,
"end": 1581,
"text": "''librarian'', ''doctor'')",
"ref_id": null
},
{
"start": 2146,
"end": 2171,
"text": "Gonen and Goldberg (2019)",
"ref_id": "BIBREF8"
},
{
"start": 2203,
"end": 2227,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 2228,
"end": 2247,
"text": "Zhao et al., 2018b)",
"ref_id": "BIBREF41"
},
{
"start": 2556,
"end": 2580,
"text": "Ethayarajh et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 2624,
"end": 2648,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 3048,
"end": 3066,
"text": "Zhou et al. (2019)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Debiasing Methods (Post-processing)",
"sec_num": "2.2"
},
{
"text": "Zhao et al. (2018b) developed a word vector training approach, called Gender-Neutral Global Vectors (GN-GloVe) based on the modification of GloVe. They proposed a modified objective function that aims to confine gender-related information to a sub-vector. During the optimization process, the objective function of GloVe is minimized while simultaneously, the square of Euclidean distance between the gender-related sub-vectors is maximized. Further, it is emphasized that the representation of gender-neutral words is perpendicular to the gender direction. Being a retraining approach, this method cannot be used on pre-trained word embeddings. Lu et al. (2018) proposed a counterfactual dataaugmentation (CDA) approach to show that gender bias in language modeling and coreference resolution can be mitigated through balancing the corpus by exchanging gender pairs like ''she '' and ''he'' or ''mother'' and ''father''. Similarly, Hall Maudslay et al. (2019) proposed a learningbased approach with two enhancements to CDA-a counterfactual data substitution method which makes substitutions with a probability of 0.5 and a method for processing first names based upon bipartite graph matching.",
"cite_spans": [
{
"start": 646,
"end": 662,
"text": "Lu et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 878,
"end": 960,
"text": "'' and ''he'' or ''mother'' and ''father''. Similarly, Hall Maudslay et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Debiasing Methods (Learning-based)",
"sec_num": "2.3"
},
{
"text": "Bordia and Bowman (2019) proposed a genderbias reduction method for word-level language models. They introduced a regularization term that penalizes the projection of word embeddings on the gender direction. Further, they proposed metrics to measure bias at embedding and corpus level. Their study revealed considerable gender bias in Penn Treebank (Marcus et al., 1993) and WikiText-2 (Merity et al., 2018) .",
"cite_spans": [
{
"start": 349,
"end": 370,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF20"
},
{
"start": 375,
"end": 407,
"text": "WikiText-2 (Merity et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Debiasing Methods (Learning-based)",
"sec_num": "2.3"
},
{
"text": "Mrk\u0161i\u0107 et al. (2017) defined semantic specialization as the process of refining word vectors to improve the semantic content. Similar to the debiasing procedures, semantic specialization procedures can also be divided into postprocessing (Ono et al., 2015; Faruqui and Dyer, 2014) and learning-based (Rothe and Sch\u00fctze, 2015; Mrk\u0161i\u0107 et al., 2016; Nguyen et al., 2016) approaches. The performance of post-processing based approaches is shown to be better than learning-based approaches (Mrk\u0161i\u0107 et al., 2017) .",
"cite_spans": [
{
"start": 238,
"end": 256,
"text": "(Ono et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 257,
"end": 280,
"text": "Faruqui and Dyer, 2014)",
"ref_id": "BIBREF7"
},
{
"start": 300,
"end": 325,
"text": "(Rothe and Sch\u00fctze, 2015;",
"ref_id": "BIBREF33"
},
{
"start": 326,
"end": 346,
"text": "Mrk\u0161i\u0107 et al., 2016;",
"ref_id": "BIBREF25"
},
{
"start": 347,
"end": 367,
"text": "Nguyen et al., 2016)",
"ref_id": "BIBREF27"
},
{
"start": 485,
"end": 506,
"text": "(Mrk\u0161i\u0107 et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings Specialization",
"sec_num": "2.4"
},
{
"text": "Similar to the ''repulsion'' and ''attraction'' terminologies used in RAN-Debias, Mrk\u0161i\u0107 et al. (2017) defined ATTRACT-REPEL algorithm, a post-processing semantic specialization process which uses antonymy and synonymy constraints drawn from lexical resources. Although it is superficially similar to RAN-Debias, there are a number of differences between the two approaches. Firstly, the ATTRACT-REPEL algorithm operates over mini-batches of synonym and antonym pairs, while RAN-Debias operates on a set containing gender-neutral and gender-biased words. Secondly, the ''attract'' and ''repel'' terms carry different meanings with respect to the algorithms. In ATTRACT-REPEL, for each of the pairs in the mini-batches of synonyms and antonyms, negative examples are chosen. The algorithm then forces synonymous pairs to be closer to each other (attract) than from their negative examples and antonymous pairs further away from each other (repel) than from their negative examples. On the other hand, for a given word vector, RAN-Debias forces it away from its neighboring word vectors (repel) which have a high indirect bias while simultaneously forcing the post-processed word vector and the original word vector together (attract) to preserve its semantic properties.",
"cite_spans": [
{
"start": 70,
"end": 102,
"text": "RAN-Debias, Mrk\u0161i\u0107 et al. (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings Specialization",
"sec_num": "2.4"
},
{
"text": "Given a set of pre-trained word vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "{ w i } |V | i=1 over a vocabulary set V, we aim to create a transformation { w i } |V | i=1 \u2192 { w \u2032 i } |V | i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "such that the stereotypical gender information present in the resulting embedding set are minimized with minimal semantic offset. We first define the categories into which each word w \u2208 V is classified in a mutually exclusive manner. Table 2 summarizes important notations used throughout the paper.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "\u2022 Preserve set (V p ): This set consists of words for which gender carries semantic importance; such as names, gendered pronouns and words like ''beard'' and ''bikini'' that have a meaning closely associated with gender. In addition, words that are non-alphabetic are also included, as debiasing them will be of no practical utility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "Table 2: Important notations used throughout the paper. w: vector corresponding to a word w; w_d: debiased version of w; V: vocabulary set; V_p: the set of words which are preserved during the debiasing procedure; V_d: the set of words which are subjected to the debiasing procedure; D: set of dictionaries; d_i: a particular dictionary from the set D; g: gender direction; D_b(w): direct bias of a word w; \u03b2(w_1, w_2): indirect bias between a pair of words w_1 and w_2; \u03b7(w): gender-based proximity bias of a word w; N_w: set of neighboring words of a word w; F_r(w_d): repulsion objective function; F_a(w_d): attraction objective function; F_n(w_d): neutralization objective function; F(w_d): multi-objective optimization function; KBC: Knowledge Based Classifier; BBN: Bias Based Network; GIPE: Gender-based Illicit Proximity Estimate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "\u2022 Debias set (V d ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Set of dictionaries d i",
"sec_num": null
},
{
"text": "This set consists of all the words in the vocabulary that are not present in V p . These words are expected to be gender-neutral in nature and hence subjected to debiasing procedure. Note that V d not only consists of gender-stereotypical words (''nurse'', ''warrior'', ''receptionist'', etc.), but also gender-neutral words (''sky '', ''table'', ''keyboard'', etc.) .",
"cite_spans": [
{
"start": 332,
"end": 366,
"text": "'', ''table'', ''keyboard'', etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Set of dictionaries d i",
"sec_num": null
},
{
"text": "Prior to the explanation of our method, we present the limitations of previous approaches for word classification. Bolukbasi et al. (2016) trained a linear SVM using a set of gender-specific seed words, which is then generalized on the whole embedding set to identify other gender-specific Table 3 : Comparison between our proposed method (KBC), RIPA- (Ethayarajh et al., 2019) , and SVM- (Bolukbasi et al., 2016) based word classification methods via precision (Prec), recall (Rec), F1-score (F1), AUC-ROC, and accuracy (Acc).",
"cite_spans": [
{
"start": 115,
"end": 138,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF1"
},
{
"start": 352,
"end": 377,
"text": "(Ethayarajh et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 389,
"end": 413,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 290,
"end": 297,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Classification Methodology",
"sec_num": "3.1"
},
{
"text": "words. However, such methods rely on training a supervised classifier on word vectors, which are themselves gender-biased. Because such classifiers are trained on biased data, they catch onto the underlying gender-bias cues and often misclassify words. For instance, the SVM classifier trained by Bolukbasi et al. (2016) misclassifies the word ''blondes'' as gender-specific, among others. Further, we empirically show the inability of a supervised classifier (SVM) to generalize over the whole embedding using various metrics in Table 3 . Taking into consideration this limitation, we propose the Knowledge Based Classifier (KBC) that relies on knowledge bases instead of word embeddings, thereby circumventing the addition of bias in the classification procedure. Moreover, unlike RIPA (Ethayarajh et al., 2019) , our approach does not rely on creating a biased direction that may be difficult to determine. Essentially, KBC relies on the following assumption.",
"cite_spans": [
{
"start": 297,
"end": 320,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF1"
},
{
"start": 788,
"end": 813,
"text": "(Ethayarajh et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 530,
"end": 537,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Classification Methodology",
"sec_num": "3.1"
},
{
"text": "Assumption 1 If there exists a dictionary d such that it stores a definition d[w] corresponding to a word w, then w can be defined as gender-specific or not based on the existence or absence of a gender-specific reference s \u2208 seed in the definition d [w] , where the set seed consists of genderspecific references such as {''man'', ''woman '', ''boy'', ''girl''}. Algorithm 1 formally explains KBC. We denote each if condition as a stage and explain it below: ",
"cite_spans": [
{
"start": 251,
"end": 254,
"text": "[w]",
"ref_id": null
},
{
"start": 340,
"end": 363,
"text": "'', ''boy'', ''girl''}.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Classification Methodology",
"sec_num": "3.1"
},
{
"text": "Algorithm 1: Knowledge Based Classifier (KBC)\nInput: V: vocabulary set; stw: set of gender-specific words; names: set of names; seed: set of gender-specific reference terms; D: set of dictionaries, where for each d i \u2208 D, d i [w] represents the definition of a word w.\nOutput: V p : set of words that will be preserved, V d : set of words that will be debiased\n1 V p = {}, V d = {}\n2 for w \u2208 V do\n3   if w \u2208 stw or isnonalphabetic(w) then\n4     V p \u2190 V p \u222a {w}\n5   else if w \u2208 names \u222a seed then\n6     V p \u2190 V p \u222a {w}\n7   else if |{d i : d i \u2208 D & w \u2208 d i & \u2203s : s \u2208 seed \u2229 d i [w]}| > |D|/2 then\n8     V p \u2190 V p \u222a {w}\n9 V d \u2190 V d \u222a {w : w \u2208 V \\ V p }\n10 return V p , V d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Classification Methodology",
"sec_num": "3.1"
},
{
"text": "source knowledge base. 2 Set seed consists of gender-specific reference terms. We preserve names, as they hold important gender information (Pilcher, 2017) .",
"cite_spans": [
{
"start": 140,
"end": 155,
"text": "(Pilcher, 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Classification Methodology",
"sec_num": "3.1"
},
{
"text": "\u2022 Stage 3: This stage uses a collection of dictionaries to determine whether a word is gender-specific using Assumption 1. To counter the effect of biased definitions arising from any particular dictionary, we make a decision based upon the consensus of all dictionaries. A word is classified as genderspecific and added to V p if and only if more than half of the dictionaries classify it as gender-specific. In our experiments, we employ WordNet (Miller, 1995) and the Oxford dictionary. As pointed out by Bolukbasi et al. (2016) , WordNet consists of few definitions that are gender-biased such as the definition of ''vest''; therefore, by utilizing our approach, we counter such cases as the final decision is based upon consensus.",
"cite_spans": [
{
"start": 448,
"end": 462,
"text": "(Miller, 1995)",
"ref_id": "BIBREF24"
},
{
"start": 508,
"end": 531,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Classification Methodology",
"sec_num": "3.1"
},
{
"text": "The remaining words that are not preserved by KBC are categorized into V d . It is the set of words that are debiased by RAN-Debias later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Classification Methodology",
"sec_num": "3.1"
},
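{
"text": "As an illustrative sketch of how the consensus rule of Assumption 1 and the stages of Algorithm 1 could be realized (the function names, the dictionary interface, and the whitespace tokenization below are our own assumptions, not a reference implementation), consider the following Python fragment:\nSEED = {'man', 'woman', 'boy', 'girl'}\n\ndef is_gender_specific(word, dictionaries, stw, names):\n    # Stage 1: preserve known gender-specific words and non-alphabetic tokens.\n    if word in stw or not word.isalpha():\n        return True\n    # Stage 2: preserve names and gender-specific reference terms.\n    if word in names or word in SEED:\n        return True\n    # Stage 3: majority vote over dictionaries whose definition of the word\n    # mentions a gender-specific reference (Assumption 1).\n    votes = sum(1 for d in dictionaries\n                if word in d and SEED & set(d[word].lower().split()))\n    return votes > len(dictionaries) / 2\n\ndef kbc(vocab, dictionaries, stw, names):\n    preserve = {w for w in vocab if is_gender_specific(w, dictionaries, stw, names)}\n    return preserve, set(vocab) - preserve",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Classification Methodology",
"sec_num": "3.1"
},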
{
"text": "First, we briefly explain two types of gender bias as defined by Bolukbasi et al. (2016) and then introduce a new type of gender bias resulting from illicit proximities in word embedding space.",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
{
"text": "\u2022 Direct Bias (D b ): For a word w, the direct bias is defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
{
"text": "D b ( w, g) = |cos( w, g)| c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
{
"text": "where, g is the gender direction measured by taking the first principal component from the principal component analysis of ten gender pair difference vectors, such as ( he \u2212 she) as mentioned in (Bolukbasi et al., 2016) , and c represents the strictness of measuring bias.",
"cite_spans": [
{
"start": 195,
"end": 219,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
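{
"text": "For concreteness, a minimal sketch of estimating the gender direction g and the direct bias (the embedding lookup 'emb' and the particular gender pairs passed in are assumptions for illustration, not part of the original formulation) could be:\nimport numpy as np\n\ndef gender_direction(emb, pairs):\n    # emb: dict mapping word -> vector; pairs: list of (male, female) word pairs.\n    diffs = np.stack([emb[m] - emb[f] for m, f in pairs])\n    diffs = diffs - diffs.mean(axis=0)\n    # First principal component of the difference vectors (via SVD).\n    _, _, vt = np.linalg.svd(diffs, full_matrices=False)\n    return vt[0]\n\ndef direct_bias(w, g, c=1.0):\n    # D_b(w, g) = |cos(w, g)|^c, where c controls the strictness of the measure.\n    cos = np.dot(w, g) / (np.linalg.norm(w) * np.linalg.norm(g))\n    return abs(cos) ** c\n\n# Hypothetical usage: g = gender_direction(emb, [('he', 'she'), ('man', 'woman')])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},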
{
"text": "\u2022 Indirect Bias (\u03b2): The indirect bias between a given pair of words w and v is defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
{
"text": "\u03b2( w, v) = ( w. v \u2212 cos( w \u22a5 , v \u22a5 )) w. v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
{
"text": "Here, w and v are normalized. w \u22a5 is orthogonal to the gender direction g: w \u22a5 = w \u2212 w g , and w g is the contribution from gender: w g = ( w. g) g. Indirect bias measures the change in the inner product of two word vectors as a proportion of the earlier inner product after projecting out the gender direction from both the vectors. A higher indirect bias between two words indicates a strong association due to gender.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
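{
"text": "A small sketch of the same computation (assuming unit-normalized input vectors and a unit gender direction g, with names of our own choosing) might be:\nimport numpy as np\n\ndef indirect_bias(w, v, g):\n    # w, v: unit-normalized word vectors; g: unit gender direction.\n    w_perp = w - np.dot(w, g) * g   # component of w orthogonal to g\n    v_perp = v - np.dot(v, g) * g\n    cos_perp = np.dot(w_perp, v_perp) / (np.linalg.norm(w_perp) * np.linalg.norm(v_perp))\n    wv = np.dot(w, v)\n    # Change in the inner product, as a proportion of the original inner product,\n    # after projecting out the gender direction.\n    return (wv - cos_perp) / wv",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},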
{
"text": "\u2022 Gender-based Proximity Bias (\u03b7): Gonen and Goldberg (2019) observed that the existing debiasing methods are unable to completely debias word embeddings because the relative spatial distribution of word embeddings after the debiasing process still encapsulates bias-related information. Therefore, we propose gender-based proximity bias that aims to capture the illicit proximities arising between a word and its closest k neighbors due to gender-based constructs. For a given word w i \u2208 V d , the gender-based proximity bias \u03b7 w i is defined as:",
"cite_spans": [
{
"start": 35,
"end": 60,
"text": "Gonen and Goldberg (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
{
"text": "\u03b7 w i = |N b w i | |N w i | (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
{
"text": "N w i = argmax V \u2032 :|V \u2032 |=k (cos( w i , w k ) : w k \u2208 V \u2032 \u2286 V ), N b w i = {w i : \u03b2( w i , w k ) > \u03b8 s , w k \u2208 N w i }, and \u03b8 s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
{
"text": "is a threshold for indirect bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
{
"text": "The intuition behind this is as follows. The set $N_{w_i}$ consists of the top k neighbors of $w_i$, calculated by finding the word vectors having the maximum cosine similarity with $w_i$. Further, $N^b_{w_i} \\subseteq N_{w_i}$ is the set of neighbors having indirect bias \u03b2 greater than a threshold $\\theta_s$, which is a hyperparameter that controls neighbor deselection on the basis of indirect bias. The lower the value of $\\theta_s$, the higher the cardinality of the set $N^b_{w_i}$. A high value of $|N^b_{w_i}|$ compared to $|N_{w_i}|$ indicates that the neighborhood of the word is gender-biased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},
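{
"text": "As a sketch (reusing the indirect_bias helper above; the embedding matrix E, the vocabulary list, and the default values of k and \u03b8_s are illustrative assumptions), \u03b7 for a single word could be estimated as:\nimport numpy as np\n\ndef proximity_bias(word, vocab, E, g, k=100, theta_s=0.05):\n    # E: |V| x h matrix of unit-normalized word vectors, rows aligned with vocab.\n    idx = vocab.index(word)\n    sims = E @ E[idx]\n    order = [j for j in np.argsort(-sims) if j != idx][:k]   # top-k neighbors N_w\n    # N^b_w: neighbors whose indirect bias with the word exceeds theta_s.\n    biased = [j for j in order if indirect_bias(E[idx], E[j], g) > theta_s]\n    return len(biased) / len(order)   # eta = |N^b_w| / |N_w|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Gender Bias",
"sec_num": "3.2"
},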
{
"text": "We propose a multi-objective optimization based solution to mitigate both direct 3 and gender-based proximity bias while adding minimal impact to the semantic and analogical properties of the word embedding. For each word w \u2208 V d and its vector w \u2208 R h , where h is the embedding dimension, we find its debiased counterpart w d \u2208 R h by solving the following multi-objective optimization problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method-RAN-Debias",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "argmin w d F r ( w d ), F a ( w d ), F n ( w d )",
"eq_num": "(2)"
}
],
"section": "Proposed Method-RAN-Debias",
"sec_num": "3.3"
},
{
"text": "We solve this by formulating a single objective F ( w d ) and scalarizing the set of objectives using the weighted sum method as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method-RAN-Debias",
"sec_num": "3.3"
},
{
"text": "F ( w d ) = \u03bb 1 .F r ( w d ) + \u03bb 2 .F a ( w d ) + \u03bb 3 .F n ( w d ) such that \u03bb i \u2208 [0, 1] and i \u03bb i = 1 (3) F ( w d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method-RAN-Debias",
"sec_num": "3.3"
},
{
"text": "is minimized using the Adam (Kingma and Ba, 2015) optimized gradient descent to obtain the optimal debiased embedding w d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method-RAN-Debias",
"sec_num": "3.3"
},
{
"text": "As shown in the subsequent sections, the range of objective functions F r , F a , F n (defined later) is [0, 1]; thus we use the weights \u03bb i for determining the relative importance of one objective function over another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method-RAN-Debias",
"sec_num": "3.3"
},
{
"text": "For any word w \u2208 V d , we aim to minimize the gender bias based illicit associations. Therefore, our objective function aims to ''repel'' w d from the neighboring word vectors which have a high value of indirect bias (\u03b2) with it. Consequently, we name it ''repulsion'' (F r ) and primarily define the repulsion set S r to be used in F r as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repulsion",
"sec_num": "3.3.1"
},
{
"text": "Definition 1 For a given word w, the repulsion set S r is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repulsion",
"sec_num": "3.3.1"
},
{
"text": "S r = {n i : n i \u2208 N w and \u03b2( w, n i ) > \u03b8 r },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repulsion",
"sec_num": "3.3.1"
},
{
"text": "where N w is the set of top 100 neighbors obtained from the original word vector w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repulsion",
"sec_num": "3.3.1"
},
{
"text": "Because we aim to reduce the unwanted semantic similarity between w d and the set of vectors S r , we define the objective function F r as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repulsion",
"sec_num": "3.3.1"
},
{
"text": "F r ( w d ) = \uf8eb \uf8ed n i \u01ebS r cos( w d , n i ) \uf8f6 \uf8f8 |S r | , F r ( w d ) \u2208 [0, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repulsion",
"sec_num": "3.3.1"
},
{
"text": "For our experiments, we find that \u03b8 r = 0.05 is the appropriate threshold to repel majority of gender-biased neighbors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repulsion",
"sec_num": "3.3.1"
},
{
"text": "For any word w \u2208 V d , we aim to minimize the loss of semantic and analogical properties for its debiased counterpart w d . Therefore, our objective function aims to attract w d towards w in the word embedding space. Consequently, we name it ''attraction'' (F a ) and define it as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attraction",
"sec_num": "3.3.2"
},
{
"text": "F a ( w d ) = | cos( w d , w) \u2212 cos( w, w)|/2 = | cos( w d , w) \u2212 1|/2, F a ( w d ) \u2208 [0, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attraction",
"sec_num": "3.3.2"
},
{
"text": "For any word w \u2208 V d , we aim to minimize its bias towards any particular gender. Therefore, the objective function F n represents the absolute value of dot product of word vector w d with the gender direction g (as defined by Bolukbasi et al., 2016) .",
"cite_spans": [
{
"start": 227,
"end": 250,
"text": "Bolukbasi et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neutralization",
"sec_num": "3.3.3"
},
{
"text": "Consequently, we name it ''neutralization'' (F n ) and define it as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neutralization",
"sec_num": "3.3.3"
},
{
"text": "F n ( w d ) = |cos( w d , g)|, F n \u2208 [0, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neutralization",
"sec_num": "3.3.3"
},
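{
"text": "Putting the three objectives together, a minimal sketch of the weighted-sum minimization in Equation 3 (using PyTorch and its Adam optimizer purely for illustration; the repulsion vectors, the gender direction, and the default hyperparameters are taken as given and are our own assumptions) could look like:\nimport torch\nimport torch.nn.functional as F\n\ndef ran_debias_word(w, repel_vecs, g, lambdas=(1/8, 6/8, 1/8), steps=300, lr=0.01):\n    # w: original word vector (1-D tensor); repel_vecs: tensor of neighbor vectors in S_r;\n    # g: gender direction. Returns w_d minimizing F = l1*F_r + l2*F_a + l3*F_n.\n    w_d = w.clone().detach().requires_grad_(True)\n    opt = torch.optim.Adam([w_d], lr=lr)\n    l1, l2, l3 = lambdas\n    for _ in range(steps):\n        opt.zero_grad()\n        # F_r: mean cosine similarity with neighbors having illicit proximities.\n        f_r = F.cosine_similarity(w_d.unsqueeze(0).expand_as(repel_vecs), repel_vecs).mean()\n        # F_a: keep w_d close to the original vector w.\n        f_a = (F.cosine_similarity(w_d, w, dim=0) - 1.0).abs() / 2\n        # F_n: minimize the projection of w_d on the gender direction.\n        f_n = F.cosine_similarity(w_d, g, dim=0).abs()\n        loss = l1 * f_r + l2 * f_a + l3 * f_n\n        loss.backward()\n        opt.step()\n    return w_d.detach()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neutralization",
"sec_num": "3.3.3"
},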
{
"text": "Computationally, there are two major components of RAN-Debias:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time Complexity of RAN-Debias",
"sec_num": "3.3.4"
},
{
"text": "1. Calculate neighbors for each word w \u2208 V d and store them in a hash table. This has a time complexity of O(n 2 ) where n = |V d |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time Complexity of RAN-Debias",
"sec_num": "3.3.4"
},
{
"text": "2. Debias each word using gradient descent, whose time complexity is O(n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time Complexity of RAN-Debias",
"sec_num": "3.3.4"
},
{
"text": "The overall complexity of RAN-Debias is O(n 2 ), that is, quadratic with respect to the cardinality of debias set V d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time Complexity of RAN-Debias",
"sec_num": "3.3.4"
},
{
"text": "In Section 3.2, we defined the gender proximity bias (\u03b7). In this section, we extend it to the embedding level for generating a global estimate. Intuitively, an estimate can be generated by simply taking the mean of \u03b7 w , \u2200w \u2208 V d . However, this computation assigns equal importance to all \u03b7 w values, which is an oversimplification. A word w may itself be in the proximity of another word w \u2032 \u2208 V d through gender-biased associations, thereby increasing \u03b7 w \u2032 . Such cases in which w increases \u03b7 w \u2032 for other words should also be taken into account. Therefore, we use a weighted average of \u03b7 w , \u2200w \u2208 V for determining a global estimate. We first define a weighted directed network, called Bias Based Network (BBN). The use of a graph data structure makes it easier to understand the intuition behind GIPE. For each word w i in W , we find N , the set of top n word vectors having the highest cosine similarity with w i (we keep n to be 100 to reduce computational overhead without compromising on quality). For each pair ( w i , w k ), where w k \u2208 N , a directed edge is assigned from w i to w k with the edge weight being \u03b2( w i , w k ). In case the given Figure 1 : (a): A sub-graph of BBN formed by Algorithm 2 for GloVe (Pennington et al., 2014) trained on 2017-January dump of Wikipedia; we discuss the structure of the graph with respect to the word ''nurse''. We illustrate four possible scenarios with respect to their effect on GIPE, with \u03b8 s = 0.05: (b) An edge with \u03b2 < \u03b8 s may not contribute to \u03b3 i or \u03b7 w i ; (c) An outgoing edge may contribute to \u03b7 w i only; (d) An incoming edge may contribute to \u03b3 i only; (e) Incoming and outgoing edges may contribute to \u03b3 i and \u03b7 w i respectively. Every node pair association can be categorized as one of the aforementioned four cases.",
"cite_spans": [
{
"start": 1228,
"end": 1253,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 1161,
"end": 1169,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate (GIPE)",
"sec_num": "3.4"
},
{
"text": "Algorithm 2: Compute BBN for the given set of word vectors Input : \u03be: word embedding set, W : set of non gender-specific words, n: number of neighbors Output: G: bias based network",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate (GIPE)",
"sec_num": "3.4"
},
{
"text": "1 V = [ ], E = [ ] 2 for x i \u2208 W do 3 N = argmax \u03be \u2032 :|\u03be \u2032 |=n (cos( x i , x k ) : x k \u2208 \u03be \u2032 \u2286 \u03be) V.insert(x i ) 4 for x k \u2208 N do 5 E.insert (x i , x k , \u03b2 ( x i , x k )) 6 V.insert (x k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate (GIPE)",
"sec_num": "3.4"
},
{
"text": "embedding is a debiased version, we use the nondebiased version of the embedding for computing \u03b2( w i , w k ). Figure 1 portrays a sub-graph in BBN. By representing the set of non gender-specific words as a weighted directed graph we can use the number of outgoing and incoming edges for a node (word w i ) for determining \u03b7 w i and its weight respectively, thereby leading to the formalization of GIPE as follows.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 119,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate (GIPE)",
"sec_num": "3.4"
},
{
"text": "GIP E(G) = |V | i=1 \u03b3 i \u03b7 w i |V | i=1 \u03b3 i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3 For a BBN G, the Gender-based Illicit Proximity Estimate of G, indicated by GIP E(G) is defined as:",
"sec_num": null
},
{
"text": "where, for a word w i , \u03b7 w i is the gender-based proximity bias as defined earlier, \u01eb is a (small) positive constant, and \u03b3 i is the weight, defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3 For a BBN G, the Gender-based Illicit Proximity Estimate of G, indicated by GIP E(G) is defined as:",
"sec_num": null
},
{
"text": "\u03b3 i = 1 + |{v i :(v i ,w i ) \u2208 E,\u03b2( v i , w i )> \u03b8 s }| \u01eb+|{v i :(v i ,w i ) \u2208 E}| (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3 For a BBN G, the Gender-based Illicit Proximity Estimate of G, indicated by GIP E(G) is defined as:",
"sec_num": null
},
{
"text": "The intuition behind the metric is as follows. For a bias based network G, GIP E(G) is the weighted average of gender-based proximity bias (\u03b7 w i ) for all nodes w i \u2208 W , where the weight of a node is \u03b3 i , which signifies the importance of the node in contributing towards the genderbased proximity bias of other word vectors. \u03b3 i takes into account the number of incoming edges having \u03b2 higher than a threshold \u03b8 s . Therefore, we take into account how the neighborhood of a node contributes towards illicit proximities (having high \u03b2 values for outgoing edges) as well as how a node itself contributes towards illicit proximities of other nodes (having high \u03b2 values for incoming edges). For illustration, we analyze a sub-graph in Figure 1 . By incorporating \u03b3 i , we take into account both dual ( Figure 1e ) and incoming ( Figure 1d ) edges, which would not have been the case otherwise. In GloVe (2017-January dump of Wikipedia), the word ''sweetheart'' has ''nurse'' in the set of its top 100 neighbors and \u03b2 > \u03b8 s ; however, ''nurse'' does not have ''sweetheart'' in the set of its top 100 neighbors. Hence, while ''nurse'' contributes towards gender-based proximity bias of the word ''sweetheart'', vice versa is not true. Similarly, if dual-edge exists, then both \u03b3 i and \u03b7 w i are taken into account. Therefore, GIPE considers all possible cases of edges in BBN, making it a holistic metric.",
"cite_spans": [],
"ref_spans": [
{
"start": 736,
"end": 744,
"text": "Figure 1",
"ref_id": null
},
{
"start": 803,
"end": 812,
"text": "Figure 1e",
"ref_id": null
},
{
"start": 830,
"end": 839,
"text": "Figure 1d",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition 3 For a BBN G, the Gender-based Illicit Proximity Estimate of G, indicated by GIP E(G) is defined as:",
"sec_num": null
},
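{
"text": "To make the estimate concrete, a compact sketch of traversing the BBN edges and computing GIPE (reusing the indirect_bias helper above; the neighbor lists and the epsilon value are illustrative assumptions) could be:\ndef gipe(words, vectors, neighbors, g, theta_s=0.05, eps=1e-8):\n    # words: non gender-specific words; vectors: word -> non-debiased vector;\n    # neighbors: word -> list of its top-n neighbors (the outgoing edges of BBN).\n    in_total = {w: 0 for w in words}\n    in_biased = {w: 0 for w in words}\n    eta = {}\n    for w in words:\n        out = neighbors[w]\n        biased_out = 0\n        for v in out:\n            b = indirect_bias(vectors[w], vectors[v], g)\n            if v in in_total:                 # edge w -> v is an incoming edge of v\n                in_total[v] += 1\n                if b > theta_s:\n                    in_biased[v] += 1\n            if b > theta_s:\n                biased_out += 1\n        eta[w] = biased_out / len(out)        # eta_w = |N^b_w| / |N_w|\n    gamma = {w: 1 + in_biased[w] / (eps + in_total[w]) for w in words}   # Eq. (4)\n    return sum(gamma[w] * eta[w] for w in words) / sum(gamma[w] for w in words)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate (GIPE)",
"sec_num": "3.4"
},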
{
"text": "We conduct the following performance evaluation tests:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4"
},
{
"text": "\u2022 We compare KBC with SVM-based (Bolukbasi et al., 2016) and RIPA-based (Ethayarajh et al., 2019) methods for word classification.",
"cite_spans": [
{
"start": 32,
"end": 56,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 72,
"end": 97,
"text": "(Ethayarajh et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4"
},
{
"text": "\u2022 We evaluate the capacity of RAN-Debias on GloVe (aka RAN-GloVe) for the gender relational analogy dataset-SemBias (Zhao et al., 2018b ).",
"cite_spans": [
{
"start": 116,
"end": 135,
"text": "(Zhao et al., 2018b",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4"
},
{
"text": "\u2022 We demonstrate the ability of RAN-GloVe to mitigate gender proximity bias by computing and contrasting the GIPE value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4"
},
{
"text": "\u2022 We evaluate RAN-GloVe on several benchmark datasets for similarity and analogy tasks, showing that RAN-GloVe introduces minimal semantic offset to ensure quality of the word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4"
},
{
"text": "\u2022 We demonstrate that RAN-GloVe successfully mitigates gender bias in a downstream application -coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4"
},
{
"text": "Although we report and analyze the performance of RAN-GloVe in our experiments, we also applied RAN-Debias to other popular noncontextual and monolingual word embedding, Word2vec (Mikolov et al., 2013a) to create RAN-Word2vec. As expected, we observed similar results (hence not reported for the sake of brevity), emphasizing the generality of RAN-Debias. Note that the percentages mentioned in the rest of the section are relative unless stated otherwise.",
"cite_spans": [
{
"start": 179,
"end": 202,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4"
},
{
"text": "We use GloVe (Pennington et al., 2014 ) trained on the 2017-January dump of Wikipedia, consisting of 322,636 unique word vectors of 300 dimensions. We apply KBC on the vocabulary set V obtaining V p and V d of size 47,912 and 274,724 respectively. Further, judging upon the basis of performance evaluation tests as discussed above, we experimentally select the weights in Equation 3 as \u03bb 1 = 1/8, \u03bb 2 = 6/8, and \u03bb 3 = 1/8.",
"cite_spans": [
{
"start": 13,
"end": 37,
"text": "(Pennington et al., 2014",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data and Weights",
"sec_num": "4.1"
},
{
"text": "We compare RAN-GloVe against the following word embedding models, each of which is trained on the 2017-January dump of Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines for Comparisons",
"sec_num": "4.2"
},
{
"text": "\u2022 GloVe: A pre-trained word embedding model as mentioned earlier. This baseline represents the non-debiased version of word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines for Comparisons",
"sec_num": "4.2"
},
{
"text": "\u2022 Hard-GloVe: Hard-Debias GloVe; we use the debiasing method 4 proposed by Bolukbasi et al., 2016 on GloVe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines for Comparisons",
"sec_num": "4.2"
},
{
"text": "\u2022 GN-GloVe: Gender-neutral GloVe; we use the original 5 debiased version of GloVe released by Zhao et al. (2018b) .",
"cite_spans": [
{
"start": 94,
"end": 113,
"text": "Zhao et al. (2018b)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines for Comparisons",
"sec_num": "4.2"
},
{
"text": "\u2022 GP-GloVe: Gender-preserving GloVe; we use the original 6 debiased version of GloVe released by Kaneko and Bollegala (2019) .",
"cite_spans": [
{
"start": 97,
"end": 124,
"text": "Kaneko and Bollegala (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines for Comparisons",
"sec_num": "4.2"
},
{
"text": "We compare KBC with RIPA-based (unsupervised) (Ethayarajh et al., 2019) and SVM-based (supervised) (Bolukbasi et al., 2016) approaches for word classification. We create a balanced labeled test set consisting of a total of 704 words, with 352 words for each category-genderspecific and non gender-specific. For the non gender-specific category, we select all the 87 neutral and biased words from the SemBias dataset (Zhao et al., 2018b) . Further, we select all 320, 40 and 60 gender-biased occupation words released by Bolukbasi et al. (2016) ; Zhao et al. (2018a) and Rudinger et al. (2018) , respectively. After combining and removing duplicate words, we obtain Zhao et al. (2018b) . We use stratified sampling to under-sample 444 words into 352 words for balancing the classes. The purpose of creating this diversely sourced dataset is to provide a robust ground-truth for evaluating the efficacy of different word classification algorithms. Table 3 shows precision, recall, F1-score, AUC-ROC, and accuracy by considering genderspecific words as the positive class and non gender-specific words as the negative class. Thus, for KBC, we consider the output set V p as the positive and V d as the negative class.",
"cite_spans": [
{
"start": 46,
"end": 71,
"text": "(Ethayarajh et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 99,
"end": 123,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 416,
"end": 436,
"text": "(Zhao et al., 2018b)",
"ref_id": "BIBREF41"
},
{
"start": 520,
"end": 543,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF1"
},
{
"start": 546,
"end": 565,
"text": "Zhao et al. (2018a)",
"ref_id": "BIBREF40"
},
{
"start": 570,
"end": 592,
"text": "Rudinger et al. (2018)",
"ref_id": "BIBREF35"
},
{
"start": 665,
"end": 684,
"text": "Zhao et al. (2018b)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 946,
"end": 953,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Classification",
"sec_num": "4.3"
},
{
"text": "The SVM-based approach achieves high precision but at the cost of a low recall. Although the majority of the words classified as genderspecific are correct, it achieves this due to the limited coverage of the rest of gender-specific words, resulting in them being classified as non gender-specific, thereby reducing the recall drastically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Classification",
"sec_num": "4.3"
},
{
"text": "The RIPA approach performs fairly with respect to precision and recall. Unlike SVM, RIPA is not biased towards a particular class and results in rather fair performance for both the classes. Almost similar to SVM, KBC also correctly classifies most of the gender-specific words but in an exhaustive manner, thereby leading to much fewer misclassification of gender-specific words as non gender-specific. As a result, KBC achieves sufficiently high recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Classification",
"sec_num": "4.3"
},
{
"text": "Overall, KBC outperforms the best baseline by an improvement of 2.7% in AUC-ROC, 15.6% in F1-score, and 9.0% in accuracy. Additionally, because KBC entirely depends on knowledge bases, the absence of a particular word in them may result in misclassification. This could be the reason behind the lower precision of KBC as compared to SVM-based classification and can be improved upon by incorporating more extensive knowledge bases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Classification",
"sec_num": "4.3"
},
{
"text": "To evaluate the extent of gender bias in RAN-GloVe, we perform gender relational analogy test on the SemBias (Zhao et al., 2018b) dataset. Each instance of SemBias contains four types of word pairs: a gender-definition word pair (Definition; ''headmaster-headmistress''), a gender-stereotype word pair (Stereotype; ''manager-secretary'') and two other word pairs which have similar meanings but no gender-based relation (None; ''treble -bass''). There are a total of 440 instances in the semBias dataset, created by the cartesian product of 20 genderstereotype word pairs and 22 gender-definition word pairs. From each instance, we select a word pair (a, b) from the four word pairs such that using the word embeddings under evaluation, cosine similarity of the word vectors ( he \u2212 she) and ( a \u2212 b) would be maximum. Table 4 shows an embedding-wise comparison on the SemBias dataset. The accuracy is measured in terms of the percentage of times each type of word pair is selected as the top for various instances. RAN-GloVe outperforms all other post-processing debiasing methods by achieving at least 9.96% and 82.8% better accuracy in gender-definition and gender-stereotype, respectively. We attribute this performance to be an effect of superior vocabulary selection by KBC and the neutralization objective of RAN-Debias. KBC classifies the words to be debiased or preserved with high accuracy, while the neutralization objective function of RAN-Debias directly minimizes the preference of a biased word between ''he'' and ''she''; reducing the gender cues that give rise to unwanted genderbiased analogies (Table 10) . Therefore, although RAN-GloVe achieves lower accuracy for genderdefinition type as compared to (learning-based) Table 5 : GIPE (range: 0-1) for different values of \u03b8 s (lower value is better).",
"cite_spans": [
{
"start": 109,
"end": 129,
"text": "(Zhao et al., 2018b)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 818,
"end": 825,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1612,
"end": 1622,
"text": "(Table 10)",
"ref_id": "TABREF14"
},
{
"start": 1737,
"end": 1744,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gender Relational Analogy",
"sec_num": "4.4"
},
{
"text": "GN-GloVe, it outperforms the next best baseline in Stereotype by at least 21.4%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender Relational Analogy",
"sec_num": "4.4"
},
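{
"text": "The selection rule above can be made concrete with a small sketch. This is our illustrative code, not the paper's implementation; it assumes a dict mapping words to numpy vectors and, for each SemBias instance, picks the pair (a, b) whose difference vector is most similar to (he \u2212 she).

import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def select_pair(instance, emb):
    # instance: the four (a, b) word pairs of one SemBias instance.
    # emb: dict mapping a word to its embedding (numpy array).
    gender_dir = emb['he'] - emb['she']
    # Pick the pair whose difference vector best aligns with (he - she).
    return max(instance, key=lambda p: cosine(gender_dir, emb[p[0]] - emb[p[1]]))

# Hypothetical usage: accuracy is the fraction of instances for which the
# gender-definition pair is returned as the top pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender Relational Analogy",
"sec_num": "4.4"
},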
{
"text": "GIPE analyzes the extent of undue gender bias based proximity between word vectors. An embedding-wise comparison for various values of \u03b8 s is presented in Table 5 . For a fair comparison, we compute GIPE for a BBN created upon our debias set V d as well as for H d , the set of words debiased by Bolukbasi et al. (2016) .",
"cite_spans": [
{
"start": 296,
"end": 319,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate",
"sec_num": "4.5"
},
{
"text": "Here, \u03b8 s represents the threshold as defined earlier in Equation 4. As it may be inferred from Equations 1 and 4, upon increasing the value of \u03b8 s , for a word w i , the value of both \u03b7 w i and \u03b3 i decreases, as a lesser number of words qualifies the threshold for selection in each case. Therefore, as evident from Table 5 , the value of GIPE decreases with the increase of \u03b8 s . For the input set V d , RAN-GloVe outperforms the next best baseline (Hard-GloVe) by at least 42.02%. We attribute this to the inclusion of the repulsion objective function F r in Equation 2, which reduces the unwanted gender-biased associations between words and their neighbors. For the input set H d , RAN-GloVe performs better than other baselines for all values of \u03b8 s except for \u03b8 s = 0.07 where it closely follows Hard-GloVe.",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate",
"sec_num": "4.5"
},
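{
"text": "The monotone effect of \u03b8 s can be illustrated with a schematic sketch. This is our illustration, not the paper's Equations 1 and 4: it assumes that, for each word, only neighbors whose gender-bias-based proximity score exceeds \u03b8 s are counted, so raising the threshold can only shrink the per-word counts and hence the aggregate estimate.

import numpy as np

def qualifying_neighbors(scores, theta_s):
    # scores: hypothetical gender-bias-based proximity scores of one
    # word's neighbors; only neighbors above the threshold qualify.
    return [s for s in scores if s > theta_s]

def illicit_proximity_estimate(per_word_scores, theta_s):
    # Schematic aggregate (NOT the paper's exact GIPE definition):
    # mean fraction of each word's neighbors that exceed the threshold.
    fractions = [len(qualifying_neighbors(s, theta_s)) / max(len(s), 1)
                 for s in per_word_scores]
    return float(np.mean(fractions)) if fractions else 0.0

# Raising theta_s removes neighbors from every qualifying set, so the
# estimate is non-increasing in theta_s, matching the trend in Table 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate",
"sec_num": "4.5"
},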
{
"text": "Additionally, H d consists of many misclassified gender-specific words, as observed from the low recall performance at the word classification test in Section 4.3. Therefore, the values of GIPE corresponding to every value of \u03b8 s for the input H d is higher as compared to the values for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate",
"sec_num": "4.5"
},
{
"text": "V d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate",
"sec_num": "4.5"
},
{
"text": "Although there is a significant reduction in GIPE value for RAN-GloVe as compared to other word embedding models, word pairs with noticeable \u03b2 values still exist (as indicated by nonzero GIPE values), which is due to the tradeoff between semantic offset and bias reduction. As a result, GIPE for RAN-GloVe is not perfectly zero but close to it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-based Illicit Proximity Estimate",
"sec_num": "4.5"
},
{
"text": "The task of analogy test is to answer the following question: ''p is to q as r is to ?''. Mathematically, it aims at finding a word vector w s which has the maximum cosine similarity with ( w q \u2212 w p + w r ). However, Schluter (2018) highlights some critical issues with word analogy tests. For instance, there is a mismatch between the distributional hypothesis used for generating word vectors and the word analogy hypothesis. Nevertheless, following the practice of using word analogy test to ascertain the semantic prowess of word vectors, we evaluate RAN-GloVe to provide a fair comparison with other baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Test",
"sec_num": "4.6"
},
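{
"text": "As a concrete illustration of the formulation above, the following sketch (ours; a plain additive variant) retrieves the answer to ''p is to q as r is to ?'' by maximizing cosine similarity with ( w q \u2212 w p + w r ). It assumes the vocabulary is given as a list of words and a matrix of L2-normalized row vectors.

import numpy as np

def analogy_additive(p, q, r, words, vecs):
    # words: list of vocabulary words; vecs: matrix of unit-norm rows.
    idx = {w: i for i, w in enumerate(words)}
    target = vecs[idx[q]] - vecs[idx[p]] + vecs[idx[r]]
    target = target / np.linalg.norm(target)
    sims = vecs @ target                 # cosine similarity with every word
    for w in (p, q, r):                  # exclude the query words themselves
        sims[idx[w]] = -np.inf
    return words[int(np.argmax(sims))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Test",
"sec_num": "4.6"
},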
{
"text": "We use Google (Mikolov et al., 2013a ) (semantic [Sem] and syntactic [Syn] analogies, containing a total 19,556 questions) and MSR (Mikolov et al., 2013b ) (containing a total 7,999 syntactic questions) datasets for evaluating the performance of word embeddings. We use 3COSMUL (Levy and Goldberg, 2014) for finding w s .",
"cite_spans": [
{
"start": 14,
"end": 36,
"text": "(Mikolov et al., 2013a",
"ref_id": "BIBREF22"
},
{
"start": 131,
"end": 153,
"text": "(Mikolov et al., 2013b",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Test",
"sec_num": "4.6"
},
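{
"text": "3COSMUL replaces the additive combination with a multiplicative one. The sketch below reflects our reading of Levy and Goldberg (2014) and is not taken from their code; it assumes unit-normalized row vectors and rescales cosines from [-1, 1] to (0, 1) before combining them.

import numpy as np

def analogy_3cosmul(p, q, r, words, vecs, eps=1e-3):
    # Score each candidate s by cos(s, q) * cos(s, r) / (cos(s, p) + eps).
    idx = {w: i for i, w in enumerate(words)}
    shift = lambda sims: (sims + 1.0) / 2.0   # map cosines into (0, 1)
    cos_q = shift(vecs @ vecs[idx[q]])
    cos_r = shift(vecs @ vecs[idx[r]])
    cos_p = shift(vecs @ vecs[idx[p]])
    scores = cos_q * cos_r / (cos_p + eps)
    for w in (p, q, r):                       # exclude the query words
        scores[idx[w]] = -np.inf
    return words[int(np.argmax(scores))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Test",
"sec_num": "4.6"
},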
{
"text": "Table 6(a) shows that RAN-GloVe outperforms other baselines on the Google (Sem and Syn) dataset while closely following on the MSR dataset. The improvement in performance can be attributed to the removal of unwanted neighbors of a word vector (having gender bias based proximity), while enriching the neighborhood with those having empirical utility, leading to a better performance in analogy tests. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analogy Test",
"sec_num": "4.6"
},
{
"text": "A word semantic similarity task is a measure of how closely a word embedding model captures the similarity between two words as compared to human-annotated ratings. For a word pair, we compute the cosine similarity between the word embeddings and its Spearman correlation with the human ratings. The word pairs are selected from the following benchmark datasets: RG (Rubenstein and Goodenough, 1965), MTurk (Radinsky et al., 2011) , RW (Luong et al., 2013) , MEN (Bruni et al., 2014) , SimLex999 (Hill et al., 2015) , and AP (Almuhareb and Poesio, 2005) . The results for these tests are obtained from the word embedding benchmark package (Jastrzebski et al., 2017) . 7 Note that it is not our primary aim to achieve a state-of-the-art result in this test. It is only considered to evaluate semantic loss. Table 6(b) shows that RAN-GloVe performs better or follows closely to the best baseline. This shows that RAN-Debias introduces minimal semantic disturbance.",
"cite_spans": [
{
"start": 407,
"end": 430,
"text": "(Radinsky et al., 2011)",
"ref_id": "BIBREF32"
},
{
"start": 436,
"end": 456,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 463,
"end": 483,
"text": "(Bruni et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 496,
"end": 515,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 525,
"end": 553,
"text": "(Almuhareb and Poesio, 2005)",
"ref_id": "BIBREF0"
},
{
"start": 639,
"end": 665,
"text": "(Jastrzebski et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 668,
"end": 669,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 806,
"end": 816,
"text": "Table 6(b)",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Word Semantic Similarity Test",
"sec_num": "4.7"
},
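{
"text": "The evaluation protocol above reduces to a few lines. The sketch below is ours, not the benchmark package's code; it assumes each dataset is a list of (word1, word2, human_rating) triples and skips out-of-vocabulary pairs.

import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_correlation(pairs, emb):
    # pairs: list of (w1, w2, human_rating); emb: dict word -> vector.
    model_scores, human_scores = [], []
    for w1, w2, rating in pairs:
        if w1 in emb and w2 in emb:
            model_scores.append(cosine(emb[w1], emb[w2]))
            human_scores.append(rating)
    # Spearman rank correlation between model similarities and human ratings.
    return spearmanr(model_scores, human_scores).correlation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Semantic Similarity Test",
"sec_num": "4.7"
},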
{
"text": "Finally, we evaluate the performance of RAN-GloVe on a downstream application task-coreference resolution. The aim of coreference resolution is to identify all expressions which refer to the same entity in a given text. We evaluate the embedding models on the OntoNotes 5.0 (Weischedel et al., 2012) and the WinoBias (Zhao et al., 2018a) benchmark datasets. WinoBias comprises sentences constrained by two prototypical templates (Type 1 and Type 2), where each template is further divided into two subsets (PRO and ANTI). Such a construction facilitates in revealing the extent of gender bias present in coreference resolution models. Although both templates are designed to assess the efficacy of coreference resolution models, Type 1 is exceedingly challenging as compared to Type 2 as it has no syntactic cues for disambiguation. Each template consists of two subsets for evaluation-prostereotype (PRO) and anti-stereotype (ANTI). PRO consists of sentences in which the gendered pronouns refer to occupations biased towards the same gender. For instance, consider the sentence ''The doctor called the nurse because he wanted a vaccine.'' Stereotypically, ''doctor'' is considered to be a male-dominated profession, and the gender of pronoun referencing it (''he'') is also male. Therefore, sentences in PRO are consistent with societal stereotypes. ANTI consists of the same sentences as PRO, but the gender of the pronoun is changed. Considering the same example but by replacing ''he'' with ''she'', we get: ''The doctor called the nurse because she wanted a vaccine.'' In this case, the gender of pronoun (''she'') which refers to ''doctor'' is female. Therefore, sentences in ANTI are not consistent with societal stereotypes. Due to such construction, gender bias in the word embeddings used for training the coreference model would naturally perform better in PRO than ANTI and lead to a higher absolute difference (Diff ) between them. While a lesser gender bias in the model would attain a smaller Diff, the ideal case produces an absolute difference of zero. Following the coreference resolution testing methodology used by Zhao et al. (2018b) , we train the coreference resolution model proposed by Lee et al. (2017) on the OntoNotes train dataset for different embeddings. Zhao et al. (2018b) . Table 7 shows that RAN-GloVe achieves the smallest absolute difference between scores on PRO and ANTI subsets of WinoBias, significantly outperforming other embedding models and achieving 97.4% better Diff (see Table 7 for the definition of Diff ) than the next best baseline (Hard-GloVe) and 98.7% better than the original GloVe. This lower Diff is achieved by an improved accuracy in ANTI and a reduced accuracy in PRO. We hypothesise that the high performance of non-debiased GloVe in PRO is due to the unwanted gender cues rather than the desired coreference resolving ability of the model. Further, the performance reduction in PRO for the other debiased versions of GloVe also corroborates this hypothesis. Despite debiasing GloVe, a considerable amount of gender cues remain in the baseline models as quantified by a lower, yet significant Diff. In contrast, RAN-GloVe is able to remove gender cues dramatically, thereby achieving an almost ideal Diff. Additionally, the performance of RAN-GloVe on the OntoNotes 5.0 test set is comparable with that of other embeddings.",
"cite_spans": [
{
"start": 274,
"end": 299,
"text": "(Weischedel et al., 2012)",
"ref_id": null
},
{
"start": 317,
"end": 337,
"text": "(Zhao et al., 2018a)",
"ref_id": "BIBREF40"
},
{
"start": 2136,
"end": 2155,
"text": "Zhao et al. (2018b)",
"ref_id": "BIBREF41"
},
{
"start": 2212,
"end": 2229,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF15"
},
{
"start": 2287,
"end": 2306,
"text": "Zhao et al. (2018b)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 2309,
"end": 2316,
"text": "Table 7",
"ref_id": "TABREF9"
},
{
"start": 2520,
"end": 2527,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "4.8"
},
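{
"text": "The Diff statistic discussed above can be sketched as follows. This is our illustration (Table 7 gives the exact definition used in the paper); it assumes F1 scores on the PRO and ANTI subsets of a WinoBias template.

def winobias_diff(f1_pro, f1_anti):
    # Absolute gap between pro-stereotype and anti-stereotype scores;
    # zero is the ideal, bias-free case.
    return abs(f1_pro - f1_anti)

# Hypothetical usage with illustrative numbers (not the paper's results):
# winobias_diff(70.0, 50.0) -> 20.0 suggests reliance on gender cues,
# while winobias_diff(60.5, 60.0) -> 0.5 suggests nearly bias-free behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "4.8"
},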
{
"text": "To quantitatively and qualitatively analyze the effect of neutralization and repulsion in RAN-Debias, we perform an ablation study. We examine the following changes in RAN-Debias independently:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.9"
},
{
"text": "1. Nullify the effect of repulsion by setting \u03bb 1 = 0, thus creating AN-GloVe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.9"
},
{
"text": "2. Nullify the effect of neutralization by setting \u03bb 3 = 0, thus creating RA-GloVe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.9"
},
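{
"text": "The following schematic sketch (ours; the actual objective terms are defined in the paper, and the repulsion term F r appears in Equation 2) treats the RAN-Debias objective as a weighted sum of repulsion, an assumed attraction/semantic-retention term, and neutralization, so each ablation simply zeroes one weight.

def ran_debias_loss(w, f_r, f_a, f_n, lambdas=(1.0, 1.0, 1.0)):
    # Schematic multi-objective loss for a word vector w. The callables
    # f_r (repulsion), f_a (attraction, assumed), and f_n (neutralization)
    # and the weights are placeholders, not the paper's exact terms.
    l1, l2, l3 = lambdas
    return l1 * f_r(w) + l2 * f_a(w) + l3 * f_n(w)

# Ablations from the list above:
#   AN-GloVe: lambdas = (0.0, l2, l3)   # repulsion removed (lambda_1 = 0)
#   RA-GloVe: lambdas = (l1, l2, 0.0)   # neutralization removed (lambda_3 = 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.9"
},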
{
"text": "We demonstrate the effect of the absence of neutralization or repulsion through a comparative analysis on GIPE and the SemBias analogy test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.9"
},
{
"text": "The GIPE values for AN-GloVe, RA-GloVe, and RAN-GloVe are presented in Table 8 . We observe that in the absence of repulsion (AN-GloVe), the performance is degraded by at least 72% compared to RAN-GloVe. It indicates the efficacy of repulsion in our objective function as a way to reduce the unwanted gender-biased associations between words and their neighbors, thereby reducing GIPE. Further, even in the absence of neutralization (RA-GloVe), GIPE is worse by at least 50% as compared to RAN-GloVe. In fact, the minimum GIPE is observed for RAN-GloVe, where both repulsion and neutralization are used in synergy as compared to the absence of any one of them.",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 78,
"text": "Table 8",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.9"
},
{
"text": "To illustrate further, Table 9 shows the rank of neighbors having illicit proximities for three professions, using different version of debiased embeddings. It can be observed that the ranks in RA-GloVe are either close to or further away from the ranks in AN-GloVe, highlighting the importance of repulsion in the objective function. Further, the ranks in RAN-GloVe are the farthest, corroborating the minimum value of GIPE as observed in Table 8 . Table 10 shows that in the absence of neutralization (RA-GloVe), the tendency of favouring stereotypical analogies increases by an absolute difference of 6.2% as compared to RAN-GloVe. On the other hand, through the presence of neutralization, AN-GloVe does not favor stereotypical analogies. This suggests that reducing the projection of biased words on gender direction through neutralization is an effective measure to reduce stereotypical analogies within the embedding space. For example, consider the following instance of word pairs from the SemBias dataset: {(widower, widow), (book, magazine), (dog, cat), (doctor, nurse)}, where (widower, widow) is a gender-definition word pair while (doctor, nurse) is a gender-stereotype word pair and the remaining are of none type as explained in Section 4.4. During the evaluation, RA-GloVe incorrectly selects the gender-stereotype word pair as the closest analogy with (he, she), while AN-GloVe and RAN-GloVe correctly select the gender-definition word pair. Further, we observe that RAN-GloVe is able to maintain the high performance of AN-GloVe, and the difference is less (0.2% compared to 1.1%) which is compensated by the superior performance of RAN-GloVe over other metrics like GIPE.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 9",
"ref_id": "TABREF13"
},
{
"start": 440,
"end": 447,
"text": "Table 8",
"ref_id": "TABREF11"
},
{
"start": 450,
"end": 458,
"text": "Table 10",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.9"
},
{
"text": "Through this ablation study, we understand the importance of repulsion and neutralization in the multi-objective optimization function of RAN-Debias. The superior performance of RAN-GloVe can be attributed to the synergistic interplay of repulsion and neutralization. Hence, in RAN-GloVe we attain the best of both worlds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.9"
},
{
"text": "Here we highlight the changes in the neighborhood (collection of words sorted in the descending order of cosine similarity with the given word) of words before and after the debiasing process. To maintain readability while also demonstrating the changes in proximity, we only analyze a few selected words. However, our proposed metric GIPE quantifies this for an exhaustive vocabulary set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study: Neighborhood of Words",
"sec_num": "4.10"
},
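{
"text": "A small sketch (ours) of how such neighborhood ranks can be read off, assuming a dict of numpy word vectors; the rank of a neighbor is its 1-based position when the rest of the vocabulary is sorted by descending cosine similarity with the query word.

import numpy as np

def neighbor_rank(query, neighbor, emb):
    # emb: dict word -> vector. Returns the 1-based rank of neighbor in
    # the neighborhood of query (the query itself is excluded).
    q = emb[query]
    def cos(v):
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    ranked = sorted((w for w in emb if w != query),
                    key=lambda w: cos(emb[w]), reverse=True)
    return ranked.index(neighbor) + 1

# Comparing neighbor_rank('nurse', 'woman', original_emb) with the same call
# on a debiased embedding shows whether an illicit neighbor was pushed away.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study: Neighborhood of Words",
"sec_num": "4.10"
},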
{
"text": "We select a set of gender-neutral professions having high values of gender-based proximity bias \u03b7 w i as defined earlier. For each of these professions, in Table 11 , we select a set of four words from their neighborhood for two classes:",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study: Neighborhood of Words",
"sec_num": "4.10"
},
{
"text": "\u2022 Class A: Neighbors arising due to genderbased illicit proximities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study: Neighborhood of Words",
"sec_num": "4.10"
},
{
"text": "\u2022 Class B: Neighbors whose proximities are not due to any kind of bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study: Neighborhood of Words",
"sec_num": "4.10"
},
{
"text": "For the words in class A, the debiasing procedure is expected to increase their rank, thereby decreasing the semantic similarity, while for words belonging to class B, debiasing procedure is expected to retain or improve the rank for maintaining the semantic information. We observe that RAN-GloVe not only maintains the semantic information by keeping the Table 11 : For four professions, we compare the ranks of their class A and class B neighbors with respect to each embedding. Here, rank represents the position in the neighborhood of a profession, and is shown by the values under each embedding. rank of words in class B close to their initial value but unlike other debiased embeddings, it drastically increases the rank of words belonging to class A. However, in some cases like the word ''Socialite'', we observe that the ranks of words such as ''businesswoman'' and ''heiress'', despite belonging to class A, are close to their initial values. This can be attributed to the high semantic dependence of ''Socialite'' on these words, resulting in a bias removal and semantic information tradeoff.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 365,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study: Neighborhood of Words",
"sec_num": "4.10"
},
{
"text": "In this paper, we proposed a post-processing gender debiasing method called RAN-Debias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our method not only mitigates direct bias of a word but also reduces its associations with other words that arise from gender-based predilections. We also proposed a word classification method, called KBC, for identifying the set of words to be debiased. Instead of using ''biased'' word embeddings, KBC uses multiple knowledge bases for word classification. Moreover, we proposed Gender-based Illicit Proximity Estimate (GIPE), a metric to quantify the extent of illicit proximities in an embedding. RAN-Debias significantly outperformed other debiasing methods on a suite of evaluation metrics, along with the downstream application task of coreference resolution while introducing minimal semantic disturbance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In the future, we would like to enhance KBC by utilizing machine learning methods to account for the words which are absent in the knowledge base. Currently, RAN-Debias is directly applicable to non-contextual word embeddings for nongendered grammatical languages. In the wake of recent work such as , we would like to extend our work towards contextualized embedding models and other languages with grammatical gender like French and Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/ganoninc/fbgender-json.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Though not done explicitly, reducing direct bias also reduces indirect bias as stated byBolukbasi et al. (2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "G = (V, E) 8 return G",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/tolga-b/debiaswe. 5 https://github.com/uclanlp/gn_GloVe. 6 https://github.com/kanekomasahiro/gp_ debias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/kudkudak/wordembeddings-benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work was partially supported by the Ramanujan Fellowship, DST (ECR/2017/00l691). T. Chakraborty would like to acknowledge the support of the Infosys Center for AI, IIIT-Delhi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
},
{
"text": "Neighbor Embedding GloVe Hard-GloVe GN-GloVe GP-GloVe RAN- GloVe Captain A sir 19 32 34 20 52 james 20 22 26 18 75 brother 34 83 98 39 323 father 39 52 117 40 326 B lieutenant 1 1 1 1 1 colonel 2 2 2 2 2 commander 3 3 4 3 3 B rancher 1 2 1 2 3 farmers 2 1 4 1 1 farm 3 3 5 4 2 landowner 4 4 2 5 5 ",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 273,
"text": "GloVe Captain A sir 19 32 34 20 52 james 20 22 26 18 75 brother 34 83 98 39 323 father 39 52 117 40 326 B lieutenant 1 1 1 1 1 colonel 2 2 2 2 2 commander 3 3 4 3 3",
"ref_id": null
},
{
"start": 274,
"end": 371,
"text": "B rancher 1 2 1 2 3 farmers 2 1 4 1 1 farm 3 3 5 4 2 landowner 4 4 2 5 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Class",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Concept learning and categorization from the Web",
"authors": [
{
"first": "Abdulrahman",
"middle": [],
"last": "Almuhareb",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Annual Meeting of the Cognitive Science Society",
"volume": "27",
"issue": "",
"pages": "103--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdulrahman Almuhareb and Massimo Poesio. 2005. Concept learning and categorization from the Web. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 27, pages 103-108.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Man is to computer programmer as woman is to homemaker? Debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"T"
],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4349--4357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, pages 4349-4357.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Identifying and reducing gender bias in wordlevel language models",
"authors": [
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word- level language models. CoRR, abs/1904.03035.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49:1-47.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automati- cally from language corpora contain human-like biases. Science, 356(6334):183-186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Understanding undesirable word embedding associations",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Duvenaud",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1696--1705",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding undesirable word embedding associations. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1696-1705.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving vector space word representations using multilingual correlation",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "462--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462-471.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "609--614",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609-614.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "It's all in the name: Mitigating gender bias with name-based counterfactual data substitution",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Rowan Hall Maudslay",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5267--5275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It's all in the name: Mitigating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5267-5275. Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "4",
"pages": "665--695",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computa- tional Linguistics, 41(4):665-695.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised discovery of gendered language through latentvariable modeling",
"authors": [
{
"first": "Alexander Miserlis",
"middle": [],
"last": "Hoyle",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Wolf-Sonkin",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1706--1716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Miserlis Hoyle, Lawrence Wolf- Sonkin, Hanna Wallach, Isabelle Augenstein, and Ryan Cotterell. 2019. Unsupervised dis- covery of gendered language through latent- variable modeling. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 1706-1716.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "How to evaluate word embeddings? On importance of data efficiency and simple supervised tasks",
"authors": [
{
"first": "Stanis\u0142aw",
"middle": [],
"last": "Jastrzebski",
"suffix": ""
},
{
"first": "Damian",
"middle": [],
"last": "Le\u015bniak",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [
"Marian"
],
"last": "Czarnecki",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.02170"
]
},
"num": null,
"urls": [],
"raw_text": "Stanis\u0142aw Jastrzebski, Damian Le\u015bniak, and Wojciech Marian Czarnecki. 2017. How to evaluate word embeddings? On importance of data efficiency and simple supervised tasks. arXiv preprint arXiv:1702.02170.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Gender-preserving debiasing for pretrained word embeddings",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1641--1650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre- trained word embeddings. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1641-1650.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations, ICLR",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations, ICLR, pages 1-15.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Linguistic regularities in sparse and explicit word representations",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Linguis- tic regularities in sparse and explicit word representations. In Proceedings of the Eigh- teenth Conference on Computational Natural Language Learning, pages 171-180.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Gender bias in neural natural language processing",
"authors": [
{
"first": "Kaiji",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Mardziel",
"suffix": ""
},
{
"first": "Fangjing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Preetam",
"middle": [],
"last": "Amancharla",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Datta",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.11714"
]
},
"num": null,
"urls": [],
"raw_text": "Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gender bias in neural natural language processing. arXiv preprint arXiv:1807.11714.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104-113.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Manzini",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Chong",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "615--621",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Manzini, Lim Yao Chong, Alan W. Black, and Yulia Tsvetkov. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 615-621.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19: 313-330.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Regularizing and optimizing lstm language models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing lstm language models. International Conference on Learning Representations, pages 1-13.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "1st International Conference on Learning Representations, ICLR 2013,Workshop Track Proceedings",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. 1st International Conference on Learning Representations, ICLR 2013,Workshop Track Proceedings, pages 1-12.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Wordnet: A lexical database for English",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. Wordnet: A lexical database for English. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Counter-fitting word vectors to linguistic constraints",
"authors": [
{
"first": "",
"middle": [],
"last": "Nikola Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Diarmuid\u00f3",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"M"
],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid\u00d3. S\u00e9aghdha, Blaise Thomson, Milica Ga\u0161i\u0107, Lina M. Rojas- Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic con- straints. Proceedings of the 2016 Conference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Language Technologies, pages 142-148.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Diarmuid\u00f3",
"suffix": ""
},
{
"first": "Ira",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Leviant",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "309--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrk\u0161i\u0107, Ivan Vuli\u0107, Diarmuid\u00d3. S\u00e9aghdha, Ira Leviant, Roi Reichart, Milica Ga\u0161i\u0107, Anna Korhonen, and Steve Young. 2017. Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the Association for Computational Linguistics, 5:309-324.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Integrating distributional lexical contrast into word embeddings for antonym-synonym distinction",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Kim Anh Nguyen",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "454--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonym-synonym distinction. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 454-459.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Word embedding-based antonym detection using thesauri and distributional information",
"authors": [
{
"first": "Masataka",
"middle": [],
"last": "Ono",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Sasaki",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "984--989",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word embedding-based antonym detection using thesauri and distributional information. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 984-989.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Competent men and warm women: Gender stereotypes and backlash in image search results",
"authors": [
{
"first": "Jahna",
"middle": [],
"last": "Otterbacher",
"suffix": ""
},
{
"first": "Jo",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Clough",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "6620--6631",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jahna Otterbacher, Jo Bates, and Paul Clough. 2017. Competent men and warm women: Gender stereotypes and backlash in image search results. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 6620-6631.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Names and ''doing gender'': How forenames and surnames contribute to gender identities, difference, and inequalities",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Pilcher",
"suffix": ""
}
],
"year": 2017,
"venue": "Sex Roles",
"volume": "77",
"issue": "",
"pages": "812--822",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Pilcher. 2017. Names and ''doing gender'': How forenames and surnames contribute to gender identities, difference, and inequalities. Sex Roles, 77(11-12):812-822.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A word at a time: Computing word relatedness using temporal semantic analysis",
"authors": [
{
"first": "Kira",
"middle": [],
"last": "Radinsky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Shaul",
"middle": [],
"last": "Markovitch",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 20th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "337--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: Computing word relatedness using temporal semantic analysis. In Proceed- ings of the 20th International Conference on World Wide Web, pages 337-346. ACM.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Autoextend: Extending word embeddings to embeddings for synsets and lexemes",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1793--1803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe and Hinrich Sch\u00fctze. 2015. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1793-1803.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Contextual correlates of synonymy",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Rubenstein",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Goodenough",
"suffix": ""
}
],
"year": 1965,
"venue": "Communications of the ACM",
"volume": "8",
"issue": "10",
"pages": "627--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627-633.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Gender bias in coreference resolution",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Leonard",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers).",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The word analogy testing caveat",
"authors": [
{
"first": "Natalie",
"middle": [],
"last": "Schluter",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalie Schluter. 2018. The word analogy testing caveat. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Quantifying the semantic core of gender systems",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Damian",
"middle": [],
"last": "Blasi",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Wolf-Sonkin",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5734--5739",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Damian Blasi, Lawrence Wolf- Sonkin, Hanna Wallach, and Ryan Cotterell. 2019. Quantifying the semantic core of gender systems. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5734-5739.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Gender bias in contextualized word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "629--634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629-634, Minneapolis, Minnesota.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "8--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Learning gender-neutral word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4847--4853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018b. Learning gender-neutral word embeddings. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847-4853.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Examining gender bias in languages with grammatical gender",
"authors": [
{
"first": "Pei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Weijia",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kuan-Hao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Muhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5276--5284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining gender bias in languages with grammatical gender. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5276-5284.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"text": "",
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "Stage 1: This stage classifies all stop words and non-alphabetic words as V p . Debiasing such words serve no practical utility; hence we preserve them.",
"content": "<table><tr><td>Algorithm 1: Knowledge Based Classifier</td></tr><tr><td>(KBC)</td></tr><tr><td>Input : V : vocabulary set, isnonaphabetic(w):</td></tr><tr><td>checks for non-alphabetic words</td></tr><tr><td>seed: set of gender-specific words</td></tr><tr><td>stw: set of stop words</td></tr><tr><td>names: set of gender-specific names</td></tr></table>"
},
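The TABREF2 entry above summarizes Stage 1 of the paper's Knowledge Based Classifier (KBC): stop words and non-alphabetic tokens are assigned to the preserve set V_p and excluded from debiasing. The following is a minimal Python sketch of that stage only, under stated assumptions; the names kbc_stage1, is_non_alphabetic, vocab, and stop_words are illustrative and not the authors' implementation.

```python
# Minimal sketch of KBC Stage 1 (assumed names, not the authors' code):
# stop words and non-alphabetic words go to the preserve set V_p.

def is_non_alphabetic(word: str) -> bool:
    """Return True if the token contains any non-letter character (or is empty)."""
    return not word.isalpha()

def kbc_stage1(vocab, stop_words):
    """Split vocab into (preserve, remaining); preserve corresponds to V_p."""
    preserve, remaining = set(), set()
    for w in vocab:
        if w in stop_words or is_non_alphabetic(w):
            preserve.add(w)      # debiasing such words serves no practical utility
        else:
            remaining.add(w)     # candidates for the later KBC stages
    return preserve, remaining

if __name__ == "__main__":
    vocab = {"the", "nurse", "surgeon", "42nd", "and"}
    stop_words = {"the", "and"}
    v_p, rest = kbc_stage1(vocab, stop_words)
    print(sorted(v_p))   # ['42nd', 'and', 'the']
    print(sorted(rest))  # ['nurse', 'surgeon']
```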
"TABREF5": {
"type_str": "table",
"num": null,
"html": null,
"text": "Comparison for the gender relational analogy test on the SemBias dataset. \u2191 (\u2193) indicates that higher (lower) value is better.",
"content": "<table><tr><td>352 non gender-specific words. For the gender-</td></tr><tr><td>specific category, we use a list of 222 male and</td></tr><tr><td>222 female words provided by</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"text": "= 0.03 \u03b8 s = 0.05 \u03b8 s = 0.07",
"content": "<table><tr><td colspan=\"2\">Input Embedding GloVe Hard-GloVe \u03b8 s V d GN-GloVe</td><td>0.115 0.069 0.142</td><td>GIPE 0.038 0.015 0.052</td><td>0.015 0.004 0.022</td></tr><tr><td/><td>GP-GloVe</td><td>0.145</td><td>0.048</td><td>0.018</td></tr><tr><td/><td>RAN-GloVe</td><td>0.040</td><td>0.006</td><td>0.002</td></tr><tr><td/><td>GloVe</td><td>0.129</td><td>0.051</td><td>0.024</td></tr><tr><td/><td>Hard-GloVe</td><td>0.075</td><td>0.020</td><td>0.007</td></tr><tr><td>H d</td><td>GN-GloVe</td><td>0.155</td><td>0.065</td><td>0.031</td></tr><tr><td/><td>GP-GloVe</td><td>0.157</td><td>0.061</td><td>0.027</td></tr><tr><td/><td>RAN-GloVe</td><td>0.056</td><td>0.018</td><td>0.011</td></tr></table>"
},
"TABREF8": {
"type_str": "table",
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td>: Comparison of various embedding methods for (a) analogy tests (performance is measured</td></tr><tr><td>in accuracy) and (b) word semantic similarity tests (performance is measured in terms of Spearman</td></tr><tr><td>rank correlation).</td></tr></table>"
},
"TABREF9": {
"type_str": "table",
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td>shows</td></tr></table>"
},
"TABREF10": {
"type_str": "table",
"num": null,
"html": null,
"text": "F1-Score (in %) in the task of coreference resolution. Diff denotes the absolute difference between F1-score on PRO and ANTI datasets.",
"content": "<table><tr><td>Input</td><td>Embedding</td><td>\u03b8 s = 0.03</td><td>GIPE \u03b8 s = 0.05</td><td>\u03b8 s = 0.07</td></tr><tr><td/><td>AN-GloVe</td><td>0.069</td><td>0.015</td><td>0.004</td></tr><tr><td>V d</td><td>RA-GloVe</td><td>0.060</td><td>0.014</td><td>0.007</td></tr><tr><td/><td>RAN-GloVe</td><td>0.040</td><td>0.006</td><td>0.002</td></tr></table>"
},
"TABREF11": {
"type_str": "table",
"num": null,
"html": null,
"text": "Ablation study-GIPE for AN-GloVe and RA-GloVe.",
"content": "<table><tr><td>with the absolute difference (Diff ) of F1-scores</td></tr><tr><td>on PRO and ANTI datasets for different word</td></tr><tr><td>embeddings. The results for GloVe, Hard-GloVe,</td></tr><tr><td>and GN-GloVe are obtained from</td></tr></table>"
},
"TABREF13": {
"type_str": "table",
"num": null,
"html": null,
"text": "For three professions, we compare the ranks of their neighbors due to illicit proximities (the values denote the ranks).",
"content": "<table><tr><td>Dataset</td><td>Embedding</td><td>Definition \u2191</td><td>Stereotype \u2193</td><td>None \u2193</td></tr><tr><td/><td>AN-GloVe</td><td>93.0</td><td>0.2</td><td>6.8</td></tr><tr><td>SemBias</td><td>RA-GloVe</td><td>83.2</td><td>7.3</td><td>9.5</td></tr><tr><td/><td>RAN-GloVe</td><td>92.8</td><td>1.1</td><td>6.1</td></tr></table>"
},
"TABREF14": {
"type_str": "table",
"num": null,
"html": null,
"text": "Comparison for the gender relational analogy test on the SemBias dataset. \u2191 (\u2193) indicates that higher (lower) value is better.",
"content": "<table/>"
}
}
}
}