|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:02:33.872779Z" |
|
}, |
|
"title": "HeteroCorpus: A Corpus for Heteronormative Language Detection", |
|
"authors": [ |
|
{ |
|
"first": "Juan", |
|
"middle": [], |
|
"last": "V\u00e1squez", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universidad Nacional Aut\u00f3noma de M\u00e9xico", |
|
"location": {} |
|
}, |
|
"email": "juanmv@comunidad.unam.mx" |
|
}, |
|
{ |
|
"first": "Gemma", |
|
"middle": [], |
|
"last": "Bel-Enguix", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [ |
|
"Thomas" |
|
], |
|
"last": "Andersen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universidad Nacional Aut\u00f3noma de M\u00e9xico", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sergio-Luis", |
|
"middle": [], |
|
"last": "Ojeda-Trueba", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In recent years, plenty of work has been done by the NLP community regarding gender bias detection and mitigation in language systems. Yet, to our knowledge, no one has focused on the difficult task of heteronormative language detection and mitigation. We consider this an urgent issue, since language technologies are growing increasingly present in the world and, as it has been proven by various studies, NLP systems with biases can create real-life adverse consequences for women, gender minorities and racial minorities and queer people. For these reasons, we propose and evaluate Het-eroCorpus; a corpus created specifically for studying heterononormative language in English. Additionally, we propose a baseline set of classification experiments on our corpus, in order to show the performance of our corpus in classification tasks.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In recent years, plenty of work has been done by the NLP community regarding gender bias detection and mitigation in language systems. Yet, to our knowledge, no one has focused on the difficult task of heteronormative language detection and mitigation. We consider this an urgent issue, since language technologies are growing increasingly present in the world and, as it has been proven by various studies, NLP systems with biases can create real-life adverse consequences for women, gender minorities and racial minorities and queer people. For these reasons, we propose and evaluate Het-eroCorpus; a corpus created specifically for studying heterononormative language in English. Additionally, we propose a baseline set of classification experiments on our corpus, in order to show the performance of our corpus in classification tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In 1978, the french philosopher Monique Wittig gave a conference titled The Straight Mind (Wittig, 1979) , in which she introduced the idea of the straight regimen. Wittig declared that heterosexuality is a political system that encompasses all aspects of western societies, and that its basis is the separation of people in binary and opposite categories based on their sex (Wittig, 1980) . The author proposes that the idea of \"women\" -and that of all sexual minorities-is a generated byproduct of a \"superior\" category from which every institution should be modelled after. This category is, of course, \"men\" (Wittig, 1980) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 104, |
|
"text": "(Wittig, 1979)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 389, |
|
"text": "(Wittig, 1980)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 612, |
|
"end": 626, |
|
"text": "(Wittig, 1980)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Wittig also proposes that language is a system that has established that men, and heterosexuality, are the universals from which every particular derive from. This normalisation of heterosexuality as a political regimen through language -Wittig argues-contributes to the continuation of the oppressive systems against everyone who is not a member of the privileged \"men\" category (Wittig, 1980) . Adding to Wittig's ideas, Judith Butler proposed that the subject is itself produced in and as a gendered matrix of relations (Butler, 2011) , meaning with this that the social and inner processes that construct the \"subject\" are deeply guided by the ideas of gender. Butler even remarks that the matrix of gender is generated prior to the creation of the subject, since this structure defines the limits and possibilities of what the subject can become (Butler, 2011) . Therefore, the boundaries of what can be considered \"human\", are enforced by the matrix of gender, according to Butler. Following these ideas, we hypothesize that the majority of the language used in current social media applications must exhibit numerous rules and expressions of heterosexuality as the norm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 380, |
|
"end": 394, |
|
"text": "(Wittig, 1980)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 537, |
|
"text": "(Butler, 2011)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 851, |
|
"end": 865, |
|
"text": "(Butler, 2011)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 980, |
|
"end": 987, |
|
"text": "Butler.", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In recent years, plenty of work has been done by the NLP community regarding gender bias detection and mitigation in language systems. Yet, to our knowledge, no one has focused on the difficult task of heteronormative language detection and mitigation. We consider this an urgent issue, since language technologies are growing increasingly present in the world and, as it has been proven by various studies, NLP systems with biases can create real-life adverse consequences for women, gender minorities and racial minorities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For these reasons, we propose and evaluate Hete-roCorpus; a corpus created specifically for studying heterononormative language in English. Our corpus consists of 7,265 tweets extracted from 2020 to 2022. In order to identify heterononormative lan-guage in our corpus, we manually annotated every tweet, performed agreement experiments among the six annotators, and then evaluated the performance of our corpus in classification tasks using various classification systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The main contributions of our work are the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. We present the first annotated corpus specialized in the study of heteronormative language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2. We propose a baseline set of classification experiments on our corpus, in order to show the performance of our corpus in classification tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is structured as follows: Section 2 introduces the meaning of heteronormative and the negative impact it has had in society in general and the LGBTQIA+ community in particular. It also provides and overview of the work that has been done so far in gender bias detection and mitigation in NLP. Section 3 explains the configuration, annotation and challenges on compiling the HeteroCorpus, a data set especially designed for the detection of heteronormativity. In Section 4 we present the pre-processing and classification experiments. The results are discussed in Section 5. We close the paper with conclusions and future work (Section 6).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section we will consider literature that explores what heteronormativity is and how the sense of the word has evolved over time, motivations to challenging heteronormativity, heteronormativity and gender bias as explored in natural language processing (NLP), and how this paper will contribute to this domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The word heteronormativity was coined by Warner (1991) and has been applied to a variety of contexts since then. The definition was recently analyzed and redefined to differentiate between these contexts (Marchia and Sommer, 2019) . The authors propose formalizing the term heteronormativity to distinguish its usage among the following four distinct contexts; heterosexist-heteronormativity, genderedheteronormativity, hegemonic-heteronormativity, and cisnormative-heteronormativity. We adapt the definition of heteronormativty from the dictionary CAER, (Diccionario de Asilo CAER-Euskadi), This definition translated to English is as follow:", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 54, |
|
"text": "Warner (1991)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 230, |
|
"text": "(Marchia and Sommer, 2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What is heteronormativity?", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Heteronormativity refers to the social, political and economic regimen imparted by the patriarchy, extending itself through both the public and private domain. According to this regimen, the only acceptable and normal form to express sexual and affective desires, and even one's own identity is heterosexuality, which assumes that masculinity and femininity are substantially complementary with respect to desire. That is, sexual preferences as well as social roles and relationships that are established between individuals in society should be based in the 'masculine-feminine' binary, and always corresponds 'biological sex' with gender identity and the social responsibility assigned to it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What is heteronormativity?", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "For simplicity, we seek to binarize the categorical definition of (Marchia and Sommer, 2019) this allows us to take advantage of binary decision classification of heteronormativity on our corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 92, |
|
"text": "(Marchia and Sommer, 2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What is heteronormativity?", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Heteronormative speech has been found to create boundaries of normative sexual behavior, and relate to behaviors and feelings against violations of these norms. Results from recent investigation suggests that heteronormative attitudes and beliefs are relevant to political alignment and aspects of personality (Janice Habarth, 2015). Furthermore, we would like to bring to light The Gender Similarities Hypothesis, the idea that the biological sexes are more similar than they are different (Hyde, 2005) . This is a stark contradiction to traditional arguments about biological differences between the sexes. Hyde finds that there is significant evidence to support her claim that many stereotypical biological differences between the sexes lack proper evidence to back them up, in fact, evidence seems to suggest the opposite in many cases. For example, some may believe that men are typically better than women at math, but Hyde's evidence concludes that the difference in mathematical ability is close to zero, and in some cases women outperform men.", |
|
"cite_spans": [ |
|
{ |
|
"start": 491, |
|
"end": 503, |
|
"text": "(Hyde, 2005)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What is heteronormativity?", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Taking this into account with the claims of Habarth, we conclude that heteronormative speech has a substantial impact on perceptions of gender and sexuality, more so than actual biological differences between the sexes impact language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What is heteronormativity?", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Given this definition we seek to justify the importance of detecting and challenging heteronormative ideology, not only to prevent harm but to promote gender equality and the inclusion of LGBTQIA+ people in society 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negative impact of heteronormativity", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Recent investigation has shown that language can reflect sexist ideology. Coady (2017) has found that the process of iconisation, the partitioning of humans into two binary groups based on gender, can be projected onto language through sexist grammar and semantics in a process called fractal recursivity making the masculine gender the generic form. This linguistic gender norm leads to erasure of other genders and sexual identities from public discourse. Furthermore, Gay et al. (2018) demonstrate that presence of gender in language shows culturally acquired gender roles, and how these roles define house hold labor allocations. They go on to conclude that analysis of language use is promising because it is an observable and quantifiable indicator of values at the individual level These studies suggest that gender and sexual norms can be reflected in language use, Coady even concludes that the use of this language perpetuates such norms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 86, |
|
"text": "Coady (2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 488, |
|
"text": "Gay et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negative impact of heteronormativity", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In fact, several recent studies have demonstrated that language use can be a subtle but effective barrier for gender minorities. Stout and Dasgupta (2011) demonstrate this by conducting experiments with mock job interviews with woman, finding that gender exclusive language during the interview negatively impacts the performance of women, however gender inclusive language, i.e. \"he or she\", or gender neutral language, i.e. \"one\", led to an improved performance among women. Meanwhile Davis and Reynolds (2018) . demonstrate that using language that normalizes the binary sex classification is strongly associated with a gender gap in educational attainment. That is, heteronormative language is not only indicative of sexual and gender disparity, it also is a proponent of it.", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 154, |
|
"text": "Stout and Dasgupta (2011)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 512, |
|
"text": "Davis and Reynolds (2018)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negative impact of heteronormativity", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Research shows that not only does heteronormative speech disadvantage women, patterns in language use on social media can be indicative of psycho-social variables demonstrating personal-ity traits and emotional stability among men and women. For example, men more commonly use possessive pronouns before nouns referring to a female partner, i.e. \"my girlfriend\" (Schwartz et al., 2013) . Eaton and Matamala (2014) even find that heteronormative beliefs about men and women may encourage sexually coercive behavior in intimate relationships.", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 385, |
|
"text": "(Schwartz et al., 2013)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 413, |
|
"text": "Eaton and Matamala (2014)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negative impact of heteronormativity", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Many of these previous studies have dealt with language use and it's relationship with discrimination based on the \"men and women\" gender binary. Let us know to explore research on heteronormative language and it's effect on LGBTQIA+ individuals. Lamont (2017) finds in a survey of LGBTQIA+ individuals, that the majority report finding that the heternormative script of relationships are constraining, unimaginative, and heavily gendered, suggesting that many members of the queer community feel restricted by the expectation set by heteronormative values. While Smits et al. 2020analyzed heteronormative speech and casual use of homonegative slurs in young men in sports and found that this language was used almost devoid of meaning except to express lack of masculinity, disapproval, and negativity, concluding that this use of speech attributes to the preservation of heteronormative discourse in spite of growing acceptance of non-heterosexual male athletes. Another study finds that many LGBTQIA+ social work students experience an overwhelming amount of discrimination, mostly perpetuated through harmful discourse (Atteberry-Ash et al., 2019). Lastly, King (2016) finds that heteronormative speech and policing of gender roles in children lead to hypermasculine and violent men, concluding that violence to the queer community can all be connected to heteronormativity in everyday life.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1161, |
|
"end": 1172, |
|
"text": "King (2016)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negative impact of heteronormativity", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "While heteronormativity refers to a more comprehensive system, gender bias is an element to this system since both are based on the idea of creating separate realities for people according to one of the two genders they were assigned at birth. Since, to the best of our knowledge, there is no literature on heteronormative language detection in NLP systems, we choose gender bias efforts as both motivation and justifaction for our work. Gender bias is the preferential treatment towards men over women, often unintentionally and exhibited by all genders (Corinne A. Moss-Racusin et al., 2012) . To continue, we will take a look recent literature that seeks to address gender bias in the NLP space. Sun et al. (2019) address this with a literature review, bringing to light the lack of research pertaining to gender bias in NLP, and a lack of concrete methods for detecting and quantifying gender bias. They go on to address that debiasing methods in NLP are frequently insufficient for end-to-end models in many applications. We envision our corpus contributing to the development and verification of methods for the detection of that arises from heteronormative language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 567, |
|
"end": 593, |
|
"text": "Moss-Racusin et al., 2012)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 716, |
|
"text": "Sun et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gender bias detection and mitigation in NLP", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Recent work has come forth to formalize how gender should be considered ethically in the development (Larson, 2017) , bringing to light how many recent studies have brought gender as a variable in their experiments whilst assuming binary categories. Most often however, it was found that many recent or widely cited papers gave little to no explanation for how they defined these categories, simply describing the variable as \"gender\" or \"sex\" without further clarification. This is indicative of a heteronormative mindset used in much of NLP research.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 115, |
|
"text": "(Larson, 2017)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gender bias detection and mitigation in NLP", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The bias of researchers can be reflected in the work they are doing, and we hope that the work that comes from our anti-heternormative dataset can bring these biases to light. Lu et al. (2018) propose a metric to quantify gender bias in NLP in response to existing models that exhibit bias, such as text auto-completion that makes suggestions based on the gender binary. They also propose a method to mitigate gender bias. Bordia and Bowman (2019) address existing language models and point out the gender bias that they contain. They note that many text corpora exhibit problematic biases that an NLP model may learn. Gender bias, as we have seen, can reflect and be perpetuated by heteronormativity. However, the scope of our work is to further generalize the bias in question to go beyond the gender binary and include LGBTQIA+ people. Dev et al. (2021) survey non-binary people in AI to illustrate negative experiences they have experienced with natural language systems. They challenge how gender is represented in NLP systems and question whether we should be representing Gender as a discrete category at all.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 192, |
|
"text": "Lu et al. (2018)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 839, |
|
"end": 856, |
|
"text": "Dev et al. (2021)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gender bias detection and mitigation in NLP", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Once the NLP community established that gender biases indeed exist in many NLP systems, many efforts have been made towards detecting and mitigating these biases. Next, we mention some of these techniques in various NLP tasks and systems: from machine translation, coreference resolution, word embeddings, large language models to sentiment analysis. First, we focus on the works regarding large language models, specifically, BERT. Bhardwaj et al. (2020) state that contextual language models are prone to learn intrinsic genderbias from data. They find that BERT shows a significant dependence when predicting on genderparticular words and phrases, they claim such biases could be reduced by removing gender specific words from the word embedding. Zhao et al. (2018) go on to produce gender-neutral word embeddings that aim to preserve gender information in certain dimensions of word vectors while freeing others of gender influence, they release a gender neutral variant of GloVe, GN-GloVe. Kurita et al. (2019) proposes a method to measure bias in BERT, which successfully identifies gender bias in BERT and exposes stereotypes embedded in the model. Recent models have been developed to mitigate gender bias in trained models, such as Saunders and Byrne (2020), who use transfer learning on a small set of gender-balanced data points from a data set to learn un-biasedly, rather than creating a balanced dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 433, |
|
"end": 455, |
|
"text": "Bhardwaj et al. (2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 750, |
|
"end": 768, |
|
"text": "Zhao et al. (2018)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 995, |
|
"end": 1015, |
|
"text": "Kurita et al. (2019)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gender bias detection and mitigation in NLP", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Many recent efforts focus on the creation of corpora for gender bias detection and mitigation. Such as Doughman and Khreich (2022) , who create a text corpus avoiding gender bias in English, much like our research, however we focus on heteronormativity. Likewise, Bhaskaran and Bhallamudi (2019) create a dataset that is used for detecting occupational gender stereotypes in sentiment analysis systems. Parasurama and Sedoc (2021) state that there are few resources for conversational systems that contain gender inclusive language. Cao and Daum\u00e9 III (2020) present two data sets. GAP which substitues gender indicative language for more gender inclusive words, such as changing he or she for the word they or neopronouns. They also present GIcoref, an annotated dataset about trans people created by trans people.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 130, |
|
"text": "Doughman and Khreich (2022)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 430, |
|
"text": "Parasurama and Sedoc (2021)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gender bias detection and mitigation in NLP", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Finally, we mention two works focused on gender-neutral pronouns in NLP systems. We find these efforts relevant to our work, since a way to challenge heteronormative language is to eliminate the gender markers in language altogether. Lauscher et al. (2022) provide an overview for gen-der neutral pronoun issues for NLP, they propose when and how to model pronouns, and present demonstrate that the omission of these pronouns in NLP systems contributes to the marginalization of underrepresented groups. Finally, Bartl et al. (2020) studies gender bias in contextualized word embeddings for NLP systems, they propose a method for measuring bias in these embeddings for English.", |
|
"cite_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 256, |
|
"text": "Lauscher et al. (2022)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 513, |
|
"end": 532, |
|
"text": "Bartl et al. (2020)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gender bias detection and mitigation in NLP", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "These systems deal typically with detection and identification of gender bias. Research that attempts to include gender minorities deals with the issue of a lack of resources that can identify bias from heteronormativity. This paper aims to solve that problem by providing a dataset that can use existing debiasing techniques to address bias that stems from heteronormativity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gender bias detection and mitigation in NLP", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "In this section we will describe our process for collecting data from Twitter and the annotation process, as well as the challenges we faced and the resulting dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HeteroCorpus", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We follow the guidelines specified by (Bender and Friedman, 2018) to produce a Long Form data statement. A data statement is important when producing NLP datasets to mitigate bias in data collection.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 65, |
|
"text": "(Bender and Friedman, 2018)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Statement", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We collect tweets from popular social media platform Twitter, we use Twitter because it provides a convenient medium to collect short statements from general users in on various topics in a digital medium. We use specific search terms that are indicative of gender because we aim to build a dataset that consists of heteronormative speech.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A. Curation Rationale", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We scrapped a set of tweets that contained desired keywords and were in English. However, there were tweets present in other languages, and we instructed annotators to indicate them using a separate tag so they could be discarded. There are no restrictions on the region from which the tweet could come. Since all the data is collected from social media, this means the presence of hashtags, mentions, gifs, videos, images, and emojis within the tweets. Also, we found spelling mistakes, abbreviations and slang native to social media.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. Language variety", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The demographics of the authors is not available to us since we compiled the data by the tag EN that Twitter provides; however, due to our sampling methods, we expect the tweets to come from a diverse set of authors of various ages, genders, nationalities, races and ethnicities, native languages, socioeconomic classes and education backgrounds. E. Speech Situation Each tweet may have a different speech situation. Most of them are related to tendencies, events or memes from the year of extraction (2022).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C. Tweet author demographic", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The tweets collected come from a diverse set of contexts, as they could be published alone by the author, or in response to another user. The tweets are subject to the restrictions of text limit and policies of Twitter. All tweets were posted publicly, and we remove identifying characteristics of the user for anonymity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "F. Text characteristics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We extracted the tweets from the Twitter API.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "G. Recording Quality", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The first step was to acquire a set of tweets that could potentially contain heteronormative language used by the authors. To do this we crafted a list of terms that we noticed had several heavily gendered trends while reading tweets. These terms are the following: man, men, husband, son, boy, woman, women, wife, daughter, girl. In this selection, we have tried to avoid heavily-gendered and queer terms, to focus in the most general framework. However, we are aware that this can introduce bias. After defining the terms for our search, we performed the extraction of the tweets via the Twitter API. For each term, specifically in the English language, we performed a search for the period of time ranging from 1 Jan. 2020 to 10 Mar. 2022. The total number of extracted tweets was 26,183.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The next step was to perform a filtering of the obtained tweets. The first filter was based on the presence or absence of adjectives in the tweets. First, we obtained a list of the adjectives in the entire dataset. Then we used that list to create another list with terms that followed the syntactic structure: adjective + relevant search term or relevant search term + adjective. For example, we found the adjective nice among the tweets crawled. Therefore, all the tweets with the pairs nice man, girl nice, etc were kept for the next stage of filtering, since they contained a relevant search term and an adjective. The motivation behind this filter was that, by manually observing the crawled tweets, we noticed that those tweets with the syntactic structure described above contained some of the most heteronormative discourses in them. This made sense for us since it is well known that the use of adjectives in English has reflected gender bias (Rubini and Menegatti, 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 952, |
|
"end": 980, |
|
"text": "(Rubini and Menegatti, 2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.2" |
|
}, |
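The adjacency filter above reduces to a bigram check: keep a tweet if any adjective from the extracted list sits directly before or after a search term. A minimal sketch, with the caveat that the paper built its adjective list from the full dataset, whereas the small set here is purely illustrative:

```python
import re

SEARCH_TERMS = {"man", "men", "husband", "son", "boy",
                "woman", "women", "wife", "daughter", "girl"}
# The paper extracted its adjective list from the entire dataset;
# this tiny set is only for illustration.
ADJECTIVES = {"nice", "beautiful", "strong"}

def keeps_tweet(text):
    """Keep a tweet if an adjective is directly adjacent to a search term,
    in either order (adjective + term, or term + adjective)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    for a, b in zip(tokens, tokens[1:]):
        if (a in ADJECTIVES and b in SEARCH_TERMS) or \
           (a in SEARCH_TERMS and b in ADJECTIVES):
            return True
    return False
```

Applied over the 26,183 crawled tweets, a filter of this shape produced the 9,350-tweet intermediate dataset described below.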
|
{ |
|
"text": "After the first filter, we obtained a dataset of 9,350 tweets. From those tweets, we removed the ones that contained only our search terms. For example, tweets with only the text \"man!\" were removed. We decided to do this because we considered that those tweets did not carry a significant amount of semantic information relevant to heteronormative language, and were only indicative of a conversation taking place.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The final size of our dataset was 7,265 tweets. The frequency distribution of the terms in our final corpus is shown in ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The first step in the creation of the annotation protocol was to establish the two labels that could be assigned to the tweets. These labels were 0 (Non-Heteronormative) and 1 (Heteronormative). We also gave the annotators the option to assign a label 2 to the tweets that did not have any content relevant to the topic of the corpus. Some tweets labeled with 2 were those that only contained hashtags (#) or mentions (@). Tweets in other languages, and those containing only emojis, were also assigned the label 2. The tweets under this class were removed once the annotation was finished. Afterwards, we wrote the Annotation Guide 2 , in which we defined what the annotators should understand as heteronormativity (Section 2.1). Furthermore, we randomly selected a sample of 100 tweets, and assigned a copy of this subset to each annotator before beginning the final annotation process. Each annotator was provided with their own Google Drive Spreadsheet document that contained the following four columns: the number of the tweet, the tweet, the ID, and the label. We asked the six annotators to classify the tweets in this test sample.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Protocol and Results of the Annotation Process", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Then, we organized a meeting with the annotators in order to evaluate how this annotation process turned out. In that meeting, the authors of this paper evaluated the performance of each annotator. We asked them to justify various labeling decisions they made and the thought processes behind their annotations. Then, we gave them feedback on their annotations. Finally, we discussed together how to settle ambiguous cases. The next step was the annotation of the entire dataset. We randomly shuffled the 7,265 tweets that comprised our dataset and split them into two partitions. The first partition had a size of 3,632 tweets, while the second had a size of 3,633. Three annotators were assigned to the first partition, while the other three worked on the second one. In total, each tweet was annotated three times.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Protocol and Results of the Annotation Process", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Once the annotators were done, we obtained Cohen's Kappa on the annotation pairs. Using these calculations, we settled on the final label for each tweet. Tweets with a unanimous (3-0) agreement made up 65% of the dataset, while tweets with a 2-1 split constituted the remaining 35%. We also obtained the Fleiss' Kappa on the entire dataset, which was 0.4036. The final distribution of the labels was 5,284 tweets with the label 0, and 1,981 tweets with the label 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Protocol and Results of the Annotation Process", |
|
"sec_num": "3.3" |
|
}, |
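For reference, Fleiss' kappa over three raters and two categories can be computed directly from per-tweet label counts. A self-contained sketch (the paper presumably used a library implementation; this version just makes the formula explicit):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa from per-item category counts.

    ratings: list of rows, one per item; e.g. [2, 1] means two annotators
    chose label 0 and one chose label 1 for that tweet.
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])
    # Mean observed agreement across items.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # Chance agreement from the marginal category proportions.
    totals = [sum(row[j] for row in ratings) for j in range(n_cats)]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)
```

With three annotators per tweet, unanimous (3-0) items contribute an observed agreement of 1 and 2-1 splits contribute 1/3, which is how a 65%/35% mix yields a moderate kappa like the 0.4036 reported above.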
|
{ |
|
"text": "A few examples of tweets can be found in Table 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Protocol and Results of the Annotation Process", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In order to establish a baseline for classification systems trained on our corpus, we performed a set of classification experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology for Heteronormativity Detection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "First, we removed the URLs in the dataset. Then, we tokenized and lemmatized the entire corpus. Afterwards, we removed the mentions, punctuation marks, and stop-words 3 . The next step was to create the training and evaluation sets. For this, we split the corpus into two partitions: the first with 90% of the tweets in the original corpus, and the second with the remaining 10%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Pre-Processing", |
|
"sec_num": "4.1" |
|
}, |
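The cleaning steps above can be sketched as below. This is a simplified stand-in: the paper used NLTK for tokenization, lemmatization, and its English stop-word list, whereas here a regex-based cleaner and a tiny inline stop-word set keep the example self-contained (lemmatization is omitted).

```python
import re

# Tiny illustrative stop-word set; the paper used NLTK's English list.
STOP_WORDS = {"the", "a", "an", "is", "to", "and", "of"}

def clean(tweet):
    """Strip URLs, mentions, and punctuation; lowercase; drop stop-words."""
    tweet = re.sub(r"https?://\S+", " ", tweet)     # remove URLs
    tweet = re.sub(r"@\w+", " ", tweet)             # remove mentions
    tokens = re.findall(r"[a-z']+", tweet.lower())  # drops punctuation
    return [t for t in tokens if t not in STOP_WORDS]

def split_90_10(tweets):
    """90/10 train/evaluation split, as in the paper (shuffle beforehand)."""
    cut = int(len(tweets) * 0.9)
    return tweets[:cut], tweets[cut:]
```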
|
{ |
|
"text": "After the text pre-processing steps, we implemented two supervised classification algorithms. The first was an SVM classifier using as features a combination of bag-of-words with TF-IDF 4 ; the second was a logistic regression model using the same features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification Experiments", |
|
"sec_num": "4.2" |
|
}, |
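A minimal sketch of these two baselines with scikit-learn, on toy data only; the real models were of course trained on the 90% partition of HeteroCorpus, and the example texts and labels here are invented for illustration.

```python
# TF-IDF features feeding an SVM and a logistic regression, as in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented toy examples; 1 = heteronormative, 0 = non-heteronormative.
train_texts = ["women belong in the kitchen", "men should never cry",
               "my son loves his new bike", "a nice girl helped me today"]
train_labels = [1, 1, 0, 0]

svm = make_pipeline(TfidfVectorizer(), LinearSVC())
logreg = make_pipeline(TfidfVectorizer(), LogisticRegression())

svm.fit(train_texts, train_labels)
logreg.fit(train_texts, train_labels)
preds = svm.predict(["men never cry", "my son got a bike"])
```

TfidfVectorizer already produces bag-of-words counts reweighted by TF-IDF, so a single vectorizer covers the "bag-of-words with TF-IDF" feature combination described above.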
|
{ |
|
"text": "Various works have focused on sexism classification in English (Jha and Mamidi (2017) , Bhaskaran and Bhallamudi (2019) ). In order to have a starting point for our experiments, we followed their approach in using SVM and logistic regression algorithms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 85, |
|
"text": "(Jha and Mamidi (2017)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 88, |
|
"end": 119, |
|
"text": "Bhaskaran and Bhallamudi (2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification Experiments", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Afterwards, we proceeded to test our corpus on a binary classification task using deep-learning architectures; specifically, four different versions of BERT, following the work of de Paula et al. (2021). These authors obtained the highest accuracy and F1-score on a sexism prediction shared task organized at IberLEF 2021, using a corpus comprised of tweets in English and Spanish.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification Experiments", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We fine-tuned the BERT-base-cased, BERT-base-uncased, BERT-large-cased, and BERT-large-uncased models 5 . The hyperparameters used while fine-tuning the BERT models were the following, as suggested by the original authors of BERT (Devlin et al., 2018) . We used 4 epochs and a batch size of 8; the learning rate was 2e-5 with an epsilon of 1e-8, and the maximum sequence length was 100 tokens. Finally, we used the AdamW optimizer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 228, |
|
"end": 249, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification Experiments", |
|
"sec_num": "4.2" |
|
}, |
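The fine-tuning configuration stated above, collected in one place for reproducibility. The dict is just a convenient container mirroring the values reported in the paper (and suggested by Devlin et al., 2018), not the authors' actual training script; the key names are our own.

```python
# Fine-tuning hyperparameters reported in the paper, as one config dict.
FINE_TUNE_CONFIG = {
    "models": ["bert-base-cased", "bert-base-uncased",
               "bert-large-cased", "bert-large-uncased"],
    "epochs": 4,
    "batch_size": 8,
    "learning_rate": 2e-5,   # passed to AdamW as lr
    "adam_epsilon": 1e-8,    # numerical-stability term in AdamW
    "max_seq_length": 100,   # tweets are short, so 100 tokens suffices
    "optimizer": "AdamW",
}
```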
|
{ |
|
"text": "Since the task of identifying heteronormativity in NLP systems has not been studied yet, we compare our classification experiments with systems that detected gender bias. We decided not to compare with hate speech tasks, since we consider that heteronormative language does not necessarily imply hate speech.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We recognize that our baseline can only be loosely compared with the results obtained by other authors in other classification tasks, since we aim to detect a different linguistic phenomenon. With those remarks in mind, in Table 4 we show the results obtained on our heteronormativity detection experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 224, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "It can be observed that BERT-large outperforms the supervised classification algorithms. Also, the low results shown in Table 4 indicate that classifying heteronormativity is not a simple task, and that more work will be required to improve on this benchmark.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, we present HeteroCorpus; a novel human-annotated corpus for heteronormative language detection. This work sets a new precedent in NLP since, to the best of our knowledge, no similar corpus aiming to study heteronormative language in English has been developed yet. We consider that this corpus could be of use in gender bias and sexism detection and mitigation tasks, which have proven to be quite challenging. While gender bias and sexism are not the same as the presence of heteronormativity in language, they are all harmful issues present in current NLP systems. Until the NLP community finds an efficient way to minimize these issues, language technologies will continue to amplify discrimination based on gender and sexual identity. The Fleiss' Kappa obtained on our corpus signals a moderate agreement between our annotators. This indicates that annotating heteronormativity can be complicated. Therefore, researchers must take this extra challenge into consideration while creating similar resources, since the quality of the data depends on the expertise of the annotators. [Table 3 \u2014 Tweet Text / Label: \"Your life, little girl, is an empty page that men will want to write on\" (1); \"This is utter bullshit, plenty of women find heavier set men attractive.\" (1); \"ur boy could most definitely use a friend this week.\" (0); \"Sweet man! Yeah, it took a minute but I'm glad I didn't have to buy from resellers\" (0); \"Beautiful you filmpje Geil beautiful you lull I your broekje are very beautiful man [...]\" (2)]", |
|
"cite_spans": [ |
|
{ |
|
"start": 533, |
|
"end": 538, |
|
"text": "[...]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also present a baseline for the task of heteronormative language detection using our corpus, with two supervised algorithms and with four variations of BERT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As future work, we plan on expanding this corpus by extracting a larger set of tweets containing more nuanced forms of heteronormative discourse, since heteronormativity is not only associated with lexical properties of speech, but also with more complex linguistic phenomena. In future projects, we hope to further investigate heteronormative language use in digital spaces, crafting a dataset that better respects the multi-class definition of heteronormativity discussed in Section 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We propose the creation of similar corpora for other languages, since heteronormativity is a global issue that requires joint action. We also encourage researchers to develop further tools for heteronormative language detection and mitigation: language technologies are rapidly increasing their presence in human lives, and the implicit biases these models carry can be very costly and damaging.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We ensured that our dataset was obtained following Twitter's terms and conditions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "The full text of the corpus will not be released due to Twitter's Privacy Policy. Only the IDs of the tweets and their labels are available on the following repository 6 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "This corpus has been created for the detection of heteronormative language in English. Other possible uses include gender bias and sexism detection and mitigation. Every population could benefit from the integration of our corpus into their language systems, since its main goal is to help create more equal language technologies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benefits and Limitations in the use of our Data", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "Here we wish to clarify that we promote preventative action against all gender and sexual discrimination. LGBTQIA+ refers to the lesbian, gay, bisexual, transgender, queer, intersex, asexual communities as well as all additional gender and sexual identities that deviate from the traditional heteronormative relationship.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This annotation guide is available in the GitHub repository with the HeteroCorpus dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For this we used the pre-loaded set of English stop-words provided by NLTK. 4 The implementation of TF-IDF we used was the one provided by the scikit-learn library.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We implemented scikit-learn's wrapper for BERT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This paper has been supported by PAPIIT project TA400121, and CONACYT CB A1-S-27780. The authors thank CONACYT for the computing resources provided through the Plataforma de Aprendizaje Profundo para Tecnolog\u00edas del Lenguaje of the Laboratorio de Superc\u00f3mputo del INAOE. 6 https://github.com/juanmvsa/HeteroCorpus", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "8" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Does it get better? LGBTQ social work students and experiences with harmful discourse", |
|
"authors": [ |
|
{ |
|
"first": "Brittanie", |
|
"middle": [], |
|
"last": "Atteberry-Ash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [ |
|
"Rachel" |
|
], |
|
"last": "Speer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shanna", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Kattari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M. Killian", |
|
"middle": [], |
|
"last": "Kinney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "J. Gay Lesbian Soc. Serv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brittanie Atteberry-Ash, Stephanie Rachel Speer, Shanna K. Kattari, and M. Killian Kinney. 2019. Does it get better? LGBTQ social work students and experiences with harmful discourse. J. Gay Lesbian Soc. Serv.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Unmasking contextual stereotypes: Measuring and mitigating bert's gender bias", |
|
"authors": [ |
|
{ |
|
"first": "Marion", |
|
"middle": [], |
|
"last": "Bartl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Malvina", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Albert", |
|
"middle": [], |
|
"last": "Gatt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marion Bartl, Malvina Nissim, and Albert Gatt. 2020. Unmasking contextual stereotypes: Mea- suring and mitigating bert's gender bias. CoRR, abs/2010.14534.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bender", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Batya", |
|
"middle": [], |
|
"last": "Friedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "587--604", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00041" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily M. Bender and Batya Friedman. 2018. Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics, 6:587-604.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Navonil Majumder, and Soujanya Poria. 2020. Investigating gender bias in BERT", |
|
"authors": [ |
|
{ |
|
"first": "Rishabh", |
|
"middle": [], |
|
"last": "Bhardwaj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rishabh Bhardwaj, Navonil Majumder, and Soujanya Poria. 2020. Investigating gender bias in BERT.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Good secretaries, bad truck drivers? occupational gender stereotypes in sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Jayadev", |
|
"middle": [], |
|
"last": "Bhaskaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isha", |
|
"middle": [], |
|
"last": "Bhallamudi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.10256" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jayadev Bhaskaran and Isha Bhallamudi. 2019. Good secretaries, bad truck drivers? occupational gender stereotypes in sentiment analysis. arXiv preprint arXiv:1906.10256.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Identifying and reducing gender bias in word-level language models", |
|
"authors": [ |
|
{ |
|
"first": "Shikha", |
|
"middle": [], |
|
"last": "Bordia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shikha Bordia and Samuel R. Bowman. 2019. Identify- ing and reducing gender bias in word-level language models. NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Bodies that matter: On the discursive limits of sex", |
|
"authors": [ |
|
{ |
|
"first": "Judith", |
|
"middle": [], |
|
"last": "Butler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Judith Butler. 2011. Bodies that matter: On the discur- sive limits of sex. routledge.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Toward gender-inclusive coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trista", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4568--4595", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.418" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Trista Cao and Hal Daum\u00e9 III. 2020. Toward gender-inclusive coreference resolution. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4568-4595, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The origin of sexism in language", |
|
"authors": [ |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Coady", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ann Coady. 2017. The origin of sexism in language. Gender and Language.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Science faculty's subtle gender biases favor male students", |
|
"authors": [ |
|
{ |
|
"first": "Corinne", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Moss-Racusin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Dovidio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victoria", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Brescoll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jo", |
|
"middle": [], |
|
"last": "Handelsman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the National Academy of Sciences", |
|
"volume": "109", |
|
"issue": "41", |
|
"pages": "16474--16479", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1073/pnas.1211286109" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Corinne A. Moss-Racusin, John F. Dovidio, Victoria L. Brescoll, Mark J. Graham, and Jo Handelsman. 2012. Science faculty's subtle gender biases favor male students. Proceedings of the National Academy of Sciences, 109(41):16474-16479.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Gendered language and the educational gender gap", |
|
"authors": [ |
|
{ |
|
"first": "Lewis", |
|
"middle": [], |
|
"last": "Davis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Megan", |
|
"middle": [], |
|
"last": "Reynolds", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Econ. Lett", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lewis Davis and Megan Reynolds. 2018. Gendered language and the educational gender gap. Econ. Lett.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Roberto Fray da Silva, and Ipek Baris Schlicht. 2021. Sexism prediction in spanish and english tweets using monolingual and multilingual bert and ensemble models", |
|
"authors": [ |
|
{ |
|
"first": "Angel", |
|
"middle": [], |
|
"last": "Felipe Magnoss\u00e3o De", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paula", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2111.04551" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angel Felipe Magnoss\u00e3o de Paula, Roberto Fray da Silva, and Ipek Baris Schlicht. 2021. Sexism prediction in spanish and english tweets using mono- lingual and multilingual bert and ensemble models. arXiv preprint arXiv:2111.04551.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Harms of gender exclusivity and challenges in non-binary representation in language technologies", |
|
"authors": [ |
|
{ |
|
"first": "Sunipa", |
|
"middle": [], |
|
"last": "Dev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masoud", |
|
"middle": [], |
|
"last": "Monajatipoor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anaelia", |
|
"middle": [], |
|
"last": "Ovalle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arjun", |
|
"middle": [], |
|
"last": "Subramonian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1968--1994", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.emnlp-main.150" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Ar- jun Subramonian, Jeff Phillips, and Kai-Wei Chang. 2021. Harms of gender exclusivity and challenges in non-binary representation in language technologies. In Proceedings of the 2021 Conference on Empiri- cal Methods in Natural Language Processing, pages 1968-1994, Online and Punta Cana, Dominican Re- public. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Heteronormatividad g\u00e9nero y asilo", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Diccionario De Asilo Caer-Euskadi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2022--2026", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diccionario de Asilo CAER-Euskadi. Heteronormativi- dad g\u00e9nero y asilo. Online. Accessed: 2022-04-06.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Gender bias in text: Labeled datasets and lexicons", |
|
"authors": [ |
|
{ |
|
"first": "Jad", |
|
"middle": [], |
|
"last": "Doughman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wael", |
|
"middle": [], |
|
"last": "Khreich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jad Doughman and Wael Khreich. 2022. Gender bias in text: Labeled datasets and lexicons.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The relationship between heteronormative beliefs and verbal sexual coercion in college students", |
|
"authors": [ |
|
{ |
|
"first": "Asia", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Eaton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alejandra", |
|
"middle": [], |
|
"last": "Matamala", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Arch. Sex. Behav", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asia A. Eaton and Alejandra Matamala. 2014. The rela- tionship between heteronormative beliefs and verbal sexual coercion in college students. Arch. Sex. Behav.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Decomposing culture: an analysis of gender, language, and labor supply in the household", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Gay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Hicks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Estefania", |
|
"middle": [], |
|
"last": "Santacreu-Vasut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Shoham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Review of Economics of the Household", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Gay, Daniel L. Hicks, Estefania Santacreu-Vasut, and Amir Shoham. 2018. Decomposing culture: an analysis of gender, language, and labor supply in the household. Review of Economics of the Household.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The gender similarities hypothesis", |
|
"authors": [ |
|
{ |
|
"first": "Janet Shibley", |
|
"middle": [], |
|
"last": "Hyde", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Am. Psychol", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janet Shibley Hyde. 2005. The gender similarities hy- pothesis. Am. Psychol.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Development of the heteronormative attitudes and beliefs scale", |
|
"authors": [ |
|
{ |
|
"first": "Janice", |
|
"middle": [], |
|
"last": "Habarth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Psychology and Sexuality", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janice Habarth. 2015. Development of the heteronor- mative attitudes and beliefs scale. Psychology and Sexuality.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "When does a compliment become sexist? analysis and classification of ambivalent sexism using twitter data", |
|
"authors": [ |
|
{ |
|
"first": "Akshita", |
|
"middle": [], |
|
"last": "Jha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radhika", |
|
"middle": [], |
|
"last": "Mamidi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the second workshop on NLP and computational social science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akshita Jha and Radhika Mamidi. 2017. When does a compliment become sexist? analysis and classifi- cation of ambivalent sexism using twitter data. In Proceedings of the second workshop on NLP and computational social science, pages 7-16.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The violence of heteronormative language towards the queer community", |
|
"authors": [ |
|
{ |
|
"first": "Jessica", |
|
"middle": [], |
|
"last": "King", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jessica King. 2016. The violence of heteronormative language towards the queer community.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Measuring bias in contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Keita", |
|
"middle": [], |
|
"last": "Kurita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nidhi", |
|
"middle": [], |
|
"last": "Vyas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ayush", |
|
"middle": [], |
|
"last": "Pareek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contex- tualized word representations.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "we can write the scripts ourselves", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Lamont", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Queer challenges to heteronormative courtship practices", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen Lamont. 2017. \"we can write the scripts ourselves\": Queer challenges to heteronormative courtship practices:. Gend. Soc.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Gender as a variable in Natural-Language processing: Ethical considerations", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Larson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "EthNLP@EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Larson. 2017. Gender as a variable in Natural-Language processing: Ethical considerations. EthNLP@EACL.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Welcome to the modern world of pronouns: Identity-Inclusive natural language processing beyond gender", |
|
"authors": [ |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Lauscher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Archie", |
|
"middle": [], |
|
"last": "Crowley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anne Lauscher, Archie Crowley, and Dirk Hovy. 2022. Welcome to the modern world of pronouns: Identity- Inclusive natural language processing beyond gender.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Preetam Amancharla, and Anupam Datta", |
|
"authors": [ |
|
{ |
|
"first": "Kaiji", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Mardziel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fangjing", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Gender bias in neural natural language processing. arXiv: Computation and Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Aman- charla, and Anupam Datta. 2018. Gender bias in neural natural language processing. arXiv: Computa- tion and Language.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "(re)defining heteronormativity", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Marchia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Sommer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Sexualities", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph Marchia and Jamie M. Sommer. 2019. (re)defining heteronormativity. Sexualities.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Gendered language in resumes and its implications for algorithmic bias in hiring", |
|
"authors": [ |
|
{ |
|
"first": "Prasanna", |
|
"middle": [], |
|
"last": "Parasurama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jo\u00e3o", |
|
"middle": [], |
|
"last": "Sedoc", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prasanna Parasurama and Jo\u00e3o Sedoc. 2021. Gendered language in resumes and its implications for algorith- mic bias in hiring.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Hindering women's careers in academia: Gender linguistic bias in personnel selection", |
|
"authors": [ |
|
{ |
|
"first": "Monica", |
|
"middle": [], |
|
"last": "Rubini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michela", |
|
"middle": [], |
|
"last": "Menegatti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Language and Social Psychology", |
|
"volume": "33", |
|
"issue": "6", |
|
"pages": "632--650", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1177/0261927X14542436" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Monica Rubini and Michela Menegatti. 2014. Hinder- ing women's careers in academia: Gender linguistic bias in personnel selection. Journal of Language and Social Psychology, 33(6):632-650.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Reducing gender bias in neural machine translation as a domain adaptation problem", |
|
"authors": [ |
|
{ |
|
"first": "Danielle", |
|
"middle": [], |
|
"last": "Saunders", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danielle Saunders and Bill Byrne. 2020. Reducing gender bias in neural machine translation as a domain adaptation problem.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Personality, gender, and age in the language of social media: The Open-Vocabulary approach", |
|
"authors": [ |
|
{ |

"first": "H.", |

"middle": [ |

"Andrew" |

], |

"last": "Schwartz", |

"suffix": "" |

}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Eichstaedt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margaret", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Kern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Dziurzynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Ramones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Megha", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Achal", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michal", |
|
"middle": [], |
|
"last": "Kosinski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Stillwell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [ |
|
"E P" |
|
], |
|
"last": "Seligman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lyle", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Ungar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "PLoS One", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Andrew Schwartz, Johannes C. Eichstaedt, Mar- garet L. Kern, Lukasz Dziurzynski, Stephanie M. Ra- mones, Megha Agrawal, Achal Shah, Michal Kosin- ski, David Stillwell, Martin E. P. Seligman, and Lyle H. Ungar. 2013. Personality, gender, and age in the language of social media: The Open-Vocabulary ap- proach. PLoS One.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Annelies Knoppers, and Agnes Elling-Machartzki. 2020. 'everything is said with a smile': Homonegative speech acts in sport", |
|
"authors": [ |
|
{ |
|
"first": "Froukje", |
|
"middle": [], |
|
"last": "Smits", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Int. Rev. Sociol. Sport", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Froukje Smits, Annelies Knoppers, and Agnes Elling- Machartzki. 2020. 'everything is said with a smile': Homonegative speech acts in sport:. Int. Rev. Sociol. Sport.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "When he doesn't mean you: Gender-Exclusive language as ostracism", |
|
"authors": [ |
|
{ |
|
"first": "Jane", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Stout", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nilanjana", |
|
"middle": [], |
|
"last": "Dasgupta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Pers. Soc. Psychol. Bull", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jane G. Stout and Nilanjana Dasgupta. 2011. When he doesn't mean you: Gender-Exclusive language as ostracism. Pers. Soc. Psychol. Bull.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Mitigating gender bias in natural language processing: Literature review. ACL", |
|
"authors": [ |
|
{ |
|
"first": "Tony", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Gaut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shirlyn", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxin", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mai", |
|
"middle": [], |
|
"last": "Elsherief", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jieyu", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diba", |
|
"middle": [], |
|
"last": "Mirza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Eliza- beth Belding, Kai-Wei Chang, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Introduction: Fear of a queer planet", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Warner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Social Text", |
|
"volume": "", |
|
"issue": "29", |
|
"pages": "3--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Warner. 1991. Introduction: Fear of a queer planet. Social Text, (29):3-17.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "The straight mind. The Future of Difference (discours liminaire en conf\u00e9rence universitaire)", |
|
"authors": [ |
|
{ |
|
"first": "Monique", |
|
"middle": [], |
|
"last": "Wittig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Monique Wittig. 1979. The straight mind. The Future of Difference (discours liminaire en conf\u00e9rence uni- versitaire), vol. 3, t. 3. New York, Barnard Center for Research on Women, coll.\u00ab Scholar and Feminist / VI.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "The straight mind", |
|
"authors": [ |
|
{ |
|
"first": "Monique", |
|
"middle": [], |
|
"last": "Wittig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Feminist Issues", |
|
"volume": "1", |
|
"issue": "1", |
|
"pages": "103--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Monique Wittig. 1980. The straight mind. Feminist Issues, 1(1):103-111.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Learning Gender-Neutral word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Jieyu", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yichao", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeyu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018. Learning Gender-Neutral word embeddings.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Demographics as anonymously self reported by each annotator.", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"content": "<table><tr><td>Term</td><td>Frequency Term</td><td>Frequency</td></tr><tr><td>man</td><td>3070 woman</td><td>1713</td></tr><tr><td>men</td><td>1285 women</td><td>33</td></tr><tr><td>husband</td><td>708 girl</td><td>1056</td></tr><tr><td>boy</td><td>844 wife</td><td>740</td></tr><tr><td>son</td><td>655 daughter</td><td>1072</td></tr></table>", |
|
"type_str": "table", |
|
"text": "", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Number of times each of the key terms appears in the HeteroCorpus.", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table><tr><td>Classifier SVM LR BERT-base-uncased BERT-base-cased BERT-large-uncased BERT-large-cased</td><td>Accuracy F1-score 0.64 0.55 0.67 0.50 0.63 0.59 0.68 0.62 0.71 0.72 0.72 0.72</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Example tweets from the HeteroCorp. Here we present some examples of tweets, their categorization, and the reviewer agreement. 1 indicates the tweet is heteronormative, and 0 indicates the tweet is non-heteronormative. 2 indicates a tweet that was in another language or was not intelligible.", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Results for the heteronormativity detection experiments using our corpus.", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |