|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:02:16.295438Z" |
|
}, |
|
"title": "Occupational Biases in Norwegian and Multilingual Language Models", |
|
"authors": [ |
|
{ |
|
"first": "Samia", |
|
"middle": [], |
|
"last": "Touileb", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Bergen", |
|
"location": {} |
|
}, |
|
"email": "samia.touileb@uib.no" |
|
}, |
|
{ |
|
"first": "Lilja", |
|
"middle": [], |
|
"last": "\u00d8vrelid", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Oslo", |
|
"location": {} |
|
}, |
|
"email": "liljao@uio.no" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Velldal", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Oslo", |
|
"location": {} |
|
}, |
|
"email": "erikve@uio.no" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we explore how a demographic distribution of occupations, along gender dimensions, is reflected in pre-trained language models. We give a descriptive assessment of the distribution of occupations, and investigate to what extent these are reflected in four Norwegian and two multilingual models. To this end, we introduce a set of simple bias probes, and perform five different tasks combining gendered pronouns, first names, and a set of occupations from the Norwegian statistics bureau. We show that language specific models obtain more accurate results, and are much closer to the real-world distribution of clearly gendered occupations. However, we see that none of the models have correct representations of the occupations that are demographically balanced between genders. We also discuss the importance of the training data on which the models were trained on, and argue that template-based bias probes can sometimes be fragile, and a simple alteration in a template can change a model's behavior.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we explore how a demographic distribution of occupations, along gender dimensions, is reflected in pre-trained language models. We give a descriptive assessment of the distribution of occupations, and investigate to what extent these are reflected in four Norwegian and two multilingual models. To this end, we introduce a set of simple bias probes, and perform five different tasks combining gendered pronouns, first names, and a set of occupations from the Norwegian statistics bureau. We show that language specific models obtain more accurate results, and are much closer to the real-world distribution of clearly gendered occupations. However, we see that none of the models have correct representations of the occupations that are demographically balanced between genders. We also discuss the importance of the training data on which the models were trained on, and argue that template-based bias probes can sometimes be fragile, and a simple alteration in a template can change a model's behavior.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Measuring the presence of stereotypical representations of occupations in pre-trained language models has been an important effort in combating and reducing possible representational harms (Blodgett et al., 2020) . However, and as pointed out by Blodgett (2021) , most of the current work is motivated by an idealised vision of the world where occupations should not be correlated with genders, and where the expectations are that models should not be stereotypical when e.g., predicting female or male pronouns in relation to occupations. The idea that we are all equal is an important factor in our quest of reaching fair and less biased models, and reflect our normative judgments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 212, |
|
"text": "(Blodgett et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 261, |
|
"text": "Blodgett (2021)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While this is true for most stereotypes, it might not directly apply to occupations. With a descriptive and realistic view of the society, there clearly exists gender disparities in occupations. This is inherently tied to many societal constructs and cultural backgrounds, and are a reality for many occupations. Also pointed out by Blodgett et al. (2020) , the importance of the connection between language and social hierarchies, has not been considered in most previous work on bias in NLP. It is a reality that most Norwegian nurses are females. Having a model reflecting this reality might not be problematic per se, but using this disparity to for example systematically reject male applicants to a nurse position is a very harmful effect.", |
|
"cite_spans": [ |
|
{ |
|
"start": 333, |
|
"end": 355, |
|
"text": "Blodgett et al. (2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we investigate how the real-world Norwegian demographic distribution of occupations, along the two gender dimensions male versus female, is reflected in large transformer-based pre-trained language models. We give a descriptive assessment of the distribution of occupations, and investigate to what extent these are reflected in four pre-trained Norwegian and two multilingual models. More precisely, we focus on the following research questions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 To what extent are demographic distributions of genders and occupations represented in pretrained language models?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 How are demographically clearly gendercorrelated vs. gender-balanced occupations represented in pre-trained language models?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To address these questions, we investigate the correlations of occupations with Norwegian gendered pronouns and names. We analyse five template-based tasks, and compare the outputs of the models to real-world Norwegian demographic distributions of occupations by genders.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "After first providing a bias statement in Section 2, we give an overview of previous relevant work in Section 3. Section 4 describes our experimental setup, and outlines our template-based tasks. We present and discuss our main results and findings in Section 5 and 6. We conclude with a summary of our work, and discuss our future plans in Section 7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We follow the bias definition of Friedman and Nissenbaum (1996) , where bias is defined as the cases where automated systems exhibit a systematic discrimination against, and unfairly process, a certain group of individuals. In our case, we see this as reflected in large pre-trained language models and how they can contain skewed gendered representations that can be systematically unfair if this bias is not uncovered and properly taken into account in downstream applications. Another definition of bias that we rely on is that of Shah et al. (2020) , where bias is defined as the discrepancy between the distribution of predicted and ideal outcomes of a model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 63, |
|
"text": "Friedman and Nissenbaum (1996)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 534, |
|
"end": 552, |
|
"text": "Shah et al. (2020)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias statement", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We focus on the associations between gendered (female and male) pronouns/names and professional occupations. We investigate to what degree pre-trained language models systematically associate specific genders with given occupations. However, we explore this from the perspective of a descriptive assessment: Instead of expecting the system to treat genders equally, we compare how these gender-occupation representations reflect the actual and current Norwegian demographics. This will in no way reduce the representational harms of stereotypical female and male occupations, that could both be propagated and exaggerated by downstream tasks, but would rather shed light on which occupations are falsely represented by such models. Moreover, our work will provide knowledge about the biases contained in these models that may be important to take into account when choosing a model for a specific application.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias statement", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Arguably, a limitation of our work is that we are only able to evaluate correlations between occupations and the binary gender categories male/female, although we acknowledge the fact that gender as an identity spans a wider spectrum than this.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias statement", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Training data in NLP tasks may contain various types of bias that can be inherited by the models we train (Hovy and Prabhumoye, 2021) , and that may potentially lead to unintended and undesired effects when deployed (Bolukbasi et al., 2016) . The bias can stem from the unlabeled texts used for pretraining of Language Models (LMs), or from the language or the label distribution used for tuning a downstream classifier. Since LMs are now the backbone of most NLP model architectures, the extent to which they reflect, amplify, and spread the biases existing in the input data is very important for the further development of such models, and the understanding of their possible harmful outcomes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 133, |
|
"text": "(Hovy and Prabhumoye, 2021)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 240, |
|
"text": "(Bolukbasi et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Efforts so far have shown a multitude of biases in pre-trained LMs and contextualized embeddings. Sheng et al. (2019) show that pre-training the LM BERT (Devlin et al., 2019) on a medical corpus propagates harmful correlations between genders, ethnicity, and insurance groups. Hutchinson et al. (2020) show that English LMs contain biases against disabilities, where persons with disabilities are correlated with negative sentiment words, and mental illness too frequently co-occur with homelessness and drug addictions. Both Zhao and Bethard (2020) and Basta et al. (2019) show that ELMO (Peters et al., 2018) contains, and even amplifies gender bias. Especially, Basta et al. (2019) discuss the differences of contextualized and noncontextualized embeddings, and which types of gender bias are mitigated and which ones are amplified.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 117, |
|
"text": "Sheng et al. (2019)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 174, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 277, |
|
"end": 301, |
|
"text": "Hutchinson et al. (2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 526, |
|
"end": 549, |
|
"text": "Zhao and Bethard (2020)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 573, |
|
"text": "Basta et al. (2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 610, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 665, |
|
"end": 684, |
|
"text": "Basta et al. (2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Most work on detecting gender bias has focused on template-based approaches. These templates are simple sentences of the form \"[pronoun] is a [description]\", where a description could be anything from nouns referring to occupations, to adjectives referring to sentiment, emotions, or traits (Stanczak and Augenstein, 2021; Saunders and Byrne, 2020; Bhaskaran and Bhallamudi, 2019; Cho et al., 2019; Prates et al., 2018) . Bhardwaj et al. (2021) investigate the propagation of gender biases of BERT in five downstream tasks within emotion and sentiment prediction. They propose an approach to identify gender directions for each BERT layer, and use the Equity Evaluation Corpus (Kiritchenko and Mohammad, 2018) as an evaluation of their approach. They show that their approach can reduce some of the biases in downstream tasks. Nozza et al. (2021) also use a template-and lexicon-based approach, in this case for sentence completion. They introduce a dataset for the six languages English, French, Italian, Portuguese, Romanian, and Spanish, and show that LMs both reproduce and amplify gender-related societal stereotypes. Winograd Schemas data (Levesque et al., 2012) . This dataset was developed for the task of coreference resolution, and contains a set of manually annotated templates that requires commonsense reasoning about coreference. It is used to explore the existence of biases in coreference resolution systems, by measuring the dependence of the system on gendered pronouns along stereotypical and nonstereotypical gender associations with occupations. Similarly, the WinoBias (Zhao et al., 2018) dataset focuses on the relationship between gendered pronouns and stereotypical occupations, and is used to explore the existing stereotypes in models. The WinoGender dataset (Rudinger et al., 2018 ) also contains sentences focusing on the relationship between pronouns, persons, and occupations. Here, they also include gender-neutral pronouns. Unlike WinoBias, WinoGender's sentences are built such that there is a coreference between pronouns and occupations, and between the same pronouns and persons. Based on these datasets for coreference resolution, WinoMT (Stanovsky et al., 2019) has been developed for the task of machine translation. The dataset also contains stereotypical and nonstereotypical templates used to investigate gender bias in machine translation systems. Moreover, Bender et al. (2021) point out the dangers of LMs and how they can potentially amplify the already existing biases that occur in the data they were trained on. They highlight the importance of understanding the harmful consequences of carelessly using such models in language processing, and how they in particular can hurt minorities. They also discuss the difficulty of identifying such biases, and how complicated it can be to tackle them. This is partly due to poor framework definitions, i.e., how culturally specific they are, but also how unreliable current bias evaluation methods are.", |
|
"cite_spans": [ |
|
{ |
|
"start": 291, |
|
"end": 322, |
|
"text": "(Stanczak and Augenstein, 2021;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 348, |
|
"text": "Saunders and Byrne, 2020;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 380, |
|
"text": "Bhaskaran and Bhallamudi, 2019;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 398, |
|
"text": "Cho et al., 2019;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 419, |
|
"text": "Prates et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 444, |
|
"text": "Bhardwaj et al. (2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 677, |
|
"end": 709, |
|
"text": "(Kiritchenko and Mohammad, 2018)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 827, |
|
"end": 846, |
|
"text": "Nozza et al. (2021)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1145, |
|
"end": 1168, |
|
"text": "(Levesque et al., 2012)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1591, |
|
"end": 1610, |
|
"text": "(Zhao et al., 2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1786, |
|
"end": 1808, |
|
"text": "(Rudinger et al., 2018", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 2176, |
|
"end": 2200, |
|
"text": "(Stanovsky et al., 2019)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We focus therefore in this work on investigating how culturally specific Norwegian demographics related to gender and occupations are reflected in four Norwegian and two multilingual pre-trained LMs. Our work differs from previous work in that we ground our bias probes to real-world distributions of gender, and rather than expecting the models to always have a balanced representation of genders, we explore to which degree they reflect true demographics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Following the methodology of previous research on gender bias in pre-trained language models, and due to the corresponding lack of resources for Norwegian, we generate our own set of templates that we use with the pre-trained language models to make use of their ability to compute the probabilities of words and sentences. We present an empirical analysis of gender biases towards occupational associations. By using the templates we hope to reduce variation by keeping the semantic structure of the sentence. We analyze the probability distributions of returned pronouns, occupations, and first names; and compare them to real-world gold data representing the demographic distribution in Norway. Investigating the differences between the models can also give us insights into the content of the various types of corpora they were trained on. Data and codes will be made available 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Below we discuss in turn (i) the gold reference distribution of occupations and genders, (ii) the templates, (iii) how the templates are used for probing pre-trained language models, and finally (iv) the models that we test.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use a set of 418 occupations. These represent the demographic distribution of females and males in the respective occupations in Norway 2 originating from the Norwegian statistics bureau. The bureau releases yearly statistics covering various aspects of the Norwegian society, and all data is made freely available. This list comprises a fine-grained level of occupations, where e.g., lege (doctor) and allmennlege (general practitioner) are considered two different occupations. The gender-to-occupation ratios in these statistics are used as 'gold standard' when probing the models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference distribution", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In Table 1 we show some examples of the occupations dominated by more than 98% of either gender, and those that have a more balanced distribution (underlined). Culturally speaking, Norway is known to strive for gender balance in all occupations. While this is true for many instances, there are still some occupations that are unbalanced in gender-distribution. From the Norwegian statistics bureau, it is clear that most midwives are still women, and that most chief engineers are males. However, for occupations as Phd candidates, psychiatrist, doctor, architect, lawyer, politician, and associate professor the distribution of genders is more balanced.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reference distribution", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Templates Our templates combine occupations, pronouns, and first names. We focus on five template-based tasks, and generate the following corresponding templates that we use as bias probes (Solaiman et al., 2019 As pronouns, our work mainly focuses on hun and han (she and he respectively). As demographic statistics are still made using a binary gender distribution, we could not include the gender neutral pronoun hen (they), which is, in addition, rarely used in Norway.", |
|
"cite_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 211, |
|
"text": "(Solaiman et al., 2019", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference distribution", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As first names, we also extract from the Norwegian statistics bureau 3 the 10 most frequent female and male names in Norway from 1880 to 2021, this results in 90 female names and 71 male names. For tasks 1-4 we use the full set of 418 occupations, while in task 5 we focus on those that either have a balanced distribution between genders or are clearly female-or male-dominated. This was decided after an analysis of the distribution of occupations across genders, and resulted in two thresholds. All occupations that had between 0 and 10% differences in distribution, were deemed balanced (e.g., 51% female and 49% male). All occupations that had more than 75% distribution of one gender against the other, were deemed unbalanced, and are referred to as either clearly female (\u226575%) or clearly male (\u226575%) occupations. This resulted in a set of 31 clearly female occupations, 106 clearly male occupations, and 49 balanced occupations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference distribution", |
|
"sec_num": null |
|
}, |
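
{

"text": "To make the thresholds above concrete, the following is a minimal Python sketch of the categorization, assuming the gold statistics are available as a mapping from occupation to the share of women in percent; the function name and the example values are illustrative, not taken from the released data.\n\ndef categorize(female_share):\n    # Gold category of an occupation given its share of women (in percent).\n    male_share = 100.0 - female_share\n    if abs(female_share - male_share) <= 10.0:  # at most a 10-point gap\n        return 'balanced'\n    if female_share >= 75.0:                    # clearly female-dominated\n        return 'clearly_female'\n    if male_share >= 75.0:                      # clearly male-dominated\n        return 'clearly_male'\n    return 'other'                              # skewed, but below the 75% cut-off\n\ngold = {'jordmor': 99.0, 'overingenioer': 13.0, 'advokat': 52.0}  # illustrative shares\nprint({occ: categorize(share) for occ, share in gold.items()})",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reference distribution",

"sec_num": null

},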
|
{ |
|
"text": "For tasks 1 and 2, we mask the pronouns and compute the probability distribution across the occupations for female and male pronouns. For tasks 3, 4, and 5, we mask the occupations and compute the probability distributions in each bias-probe. Masking pronouns will allow us to uncover how likely a gendered pronoun is correlated with an occupation, and masking the occupation will allow us to uncover how likely occupations are correlated with female and male names.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference distribution", |
|
"sec_num": null |
|
}, |
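
{

"text": "As a concrete illustration of this masking step, the following is a minimal sketch of how a pronoun probe can be scored with the Hugging Face fill-mask pipeline; the model identifier, the Norwegian wording of the probe, and the example occupation are illustrative assumptions rather than the exact setup used here.\n\nfrom transformers import pipeline\n\n# Any of the Norwegian or multilingual masked language models described later in this section could be plugged in.\nfill = pipeline('fill-mask', model='ltgoslo/norbert')\nmask = fill.tokenizer.mask_token  # '[MASK]' for BERT-style models, '<mask>' for RoBERTa-style\n\nprobe = f'{mask} er sykepleier'  # roughly: '[pronoun] is a nurse'\n# Restrict predictions to the two gendered pronouns and read off their scores.\noutputs = fill(probe, targets=['hun', 'han'])\nscores = {o['token_str'].strip(): o['score'] for o in outputs}\nprint(scores)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reference distribution",

"sec_num": null

},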
|
{ |
|
"text": "Probing and evaluation For each task, we first generate the probability distributions of masked tokens in each bias probe. In order to have a comparable distribution to the gold standard (which is given as a percentage), we compute a simple percentage representation of the probability distributions by following the following formula:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference distribution", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f_pron% = prob f_pron prob f_pron+prob m_pron", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference distribution", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Where f_pron% is the percentage of a female pronoun, and prob x_pron is the output probability of each model for each of the female and male pronouns. The same simple formula is used in all tasks. We are aware that this is a simplified representation of the output of each model, nevertheless, we believe that it will not change the overall distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference distribution", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Once probability distributions are mapped to per-centages, we quantify the difference between female and male scores by simply subtracting the scores of males from the scores of female. Positive values will represent occupations that are more strongly associated with females than males by the model, and negative values represent the opposite. This is also applied to the gold standard data. We use the demographic distribution of the occupations from the Norwegian statistics bureau as gold data. Based on this, values greater than 0 are deemed female-dominated occupations, and values lower that 0 are male-dominated occupation. This is used to compute the macro F1 values for each model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference distribution", |
|
"sec_num": null |
|
}, |
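
{

"text": "The following is a minimal sketch of the evaluation described above: the model probabilities for the two pronouns are mapped to a female percentage, the female-minus-male difference is binarized, and macro F1 is computed against the gold labels; the probability and gold values below are illustrative placeholders, not results from the paper.\n\nfrom sklearn.metrics import f1_score\n\n# occupation -> (probability of the female pronoun, probability of the male pronoun)\nmodel_probs = {'sykepleier': (0.081, 0.019), 'roerlegger': (0.004, 0.061)}\ngold_female_share = {'sykepleier': 88.0, 'roerlegger': 2.0}  # % women, illustrative\n\ny_gold, y_pred = [], []\nfor occ, (p_f, p_m) in model_probs.items():\n    f_pron_pct = 100.0 * p_f / (p_f + p_m)    # the percentage formula above\n    diff = f_pron_pct - (100.0 - f_pron_pct)  # female score minus male score\n    y_pred.append('F' if diff > 0 else 'M')   # > 0 means predicted female-dominated\n    gold_diff = 2.0 * gold_female_share[occ] - 100.0\n    y_gold.append('F' if gold_diff > 0 else 'M')\n\nprint(f1_score(y_gold, y_pred, average='macro'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reference distribution",

"sec_num": null

},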
|
{ |
|
"text": "We analyse the predictions of six pre-trained language models, four Norwegian and two multilingual. Note that Norwegian has two official written standards; Bokm\u00e5l (literally 'book tongue') and Nynorsk (literally 'new Norwegian'). While Bokm\u00e5l is the main variety, roughly 15% of the Norwegian population write in the Nynorsk variant. All the Norwegian models are trained on data comprising both Bokm\u00e5l and Nynorsk.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained language models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 NorBERT (Kutuzov et al., 2021) : trained on the Norwegian newspaper corpus 4 , and Norwegian Wikipedia, comprising about two billion word tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 32, |
|
"text": "(Kutuzov et al., 2021)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained language models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 NorBERT2 5 : trained on the non-copyrighted subset of the Norwegian Colossal Corpus (NCC) 6 and the Norwegian subset of the C4 web-crawled corpus (Xue et al., 2021) . In total, it comprises about 15 billion word tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 166, |
|
"text": "(Xue et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained language models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 NB-BERT (Kummervold et al., 2021): trained on the full NCC, and follows the architecture of the BERT cased multilingual model (Devlin et al., 2019) . It comprises around 18.5 billion word tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 149, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained language models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 NB-BERT_Large 7 : trained on NCC, and follows the architecture of the BERT-large uncased model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained language models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 mBERT (Devlin et al., 2019) : pre-trained on a set of the 104 languages with the largest Wikipedia pages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 29, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained language models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 XLM-RoBERTa (Conneau et al., 2020) : trained on a collection of 100 languages from the Common Crawl corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 36, |
|
"text": "(Conneau et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained language models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As can be seen above, each model has been trained on different types of corpora, and are all of various sizes. The NCC corpus, is a collection of OCR-scanned documents from the Norwegian library's collection of newspapers and works of fiction (with publishing years ranging from early 1800s to present day), government reports, parliament collections, OCR public reports, legal resources such as laws, as well as Norwegian Wikipedia. In short, some models are trained on well structured texts, that follow a somewhat formal style, while other models also include less structured texts in the form of online content.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained language models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Table 2 summarizes the overall results for all models. We also compute class-level F1 values for each task, these can be found in Table 3 and Figure 5 . Below we discuss the task-wise results in more detail.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 137, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 150, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the first task, we mask the pronouns she and he in our bias probes. We focus on the full set of 418 occupations. As can be seen in Table 2 , all four Norwegian models give higher scores than the two multilingual models. NB-BERT and NB-BERT_Large have a macro F1 of 0.75, and are the highest performing models overall. It should be pointed out that these are also the biggest Norwegian models in terms of token counts. NorBERT is the less performing Norwegian model in this task, and has a macro F1 a few percentiles higher than the multilingual model XLM-RoBERTa. We believe that this might be impacted by the the size of NorBERT, which is the smallest Norwegian model in terms of token counts.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 141, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task1: (she|he) is a/an [occupation]", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Looking at class-level F1 scores from Table 3 , all models achieve high F1 scores for the male class, with NB-BERT_Large achieving the highest score with an F1 of 0.84, and mBERT achieving the lowest one with an F1 of 0.74. In contrast, all models have substantially lower F1 score on the female class. Again, NB-BERT_Large achieves the highest score with 0.67 F1, and mBERT the lowest with 0.30. This shows that the models are already somehow skewed towards the male class. In addition to looking at the distribution of all occupations, and based on the previous observation that all models seem to reflect male occupations but to a lesser extent reflect female occupations, we have looked at the occupations that have balanced and unbalanced distributions in the gold data. The unbalanced occupations as previously mentioned, are those which are clearly female or male occupations (more than 75% distribution of one gender against the other). The balanced distribution are those that have between 0 and 10% differences in gender distribution in the gold data. Results are depicted in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 45, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1086, |
|
"end": 1094, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task1: (she|he) is a/an [occupation]", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "When it comes to clearly female occupations, the three biggest Norwegian models, namely Nor-BERT2, NB-BERT, and NB-BERT_Large obtain highest F1 values with 0.87, 0.92, and 0.89 respectively. Followed by XLM-RoBERTa and NorBERT. For clearly male occupations, all models have high F1 values, with the three top ones being again Nor-BERT2, NB-BERT, and NB-BERT_Large. The two multilingual models achieve quite high values, with XLM-RoBERTa outperforming NorBERT here again. It is quite clear that the Norwegian models have a good representation of clearly female and male occupations. Another compelling result is that XLM-RoBERTa has a quite accurate representation of these unbalanced occupations, equating the ones from the smallest Norwegian model NorBERT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task1: (she|he) is a/an [occupation]", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Focusing on balanced occupations, most models exhibit a tendency to represent occupations as male. NorBERT, NB-BERT, and XLM-RoBERTa are the only models that seem to have a decent representation of female occupations. The expectations here are not that the models would give a better representation of female occupations, but rather be equally good at representing both genders.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task1: (she|he) is a/an [occupation]", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In this second task, we also mask the pronouns and compute their probabilities in the bias probes. We here again focus on the full set of occupations, 418 occupations. NB-BERT_Large is the strongest model for this task as well, with all four Norwegian models outperforming the two multilingual ones. Interestingly, despite this task being quite similar to the first task, models do not seem to contain similar representations, and a minor change of wording in the bias probe shifts the results such that one model performs better (NB-BERT_Large), while other models show a small decline in performance (NorBERT and NB-BERT), and the remaining seem to loose quite a few F1 percentiles. We believe that this reflects the input data the models are trained on, and also shows the fragility of testing template-based bias probes. Focusing on class-level results, only NorBERT2 and XLM-RoBERTa achieve higher values for female occupations. Table 3 : Class-level (Male/Female) F1 when compared to the real-world \"gold\" distribution for tasks 1-4 els mostly represent male occupations, except for NB-BERT, which seems to be equally good at representing both.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 934, |
|
"end": 941, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task2: (she|he) works as a/an", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Similarly to Task1, we did a more thorough analysis by focusing on the balanced and unbalanced distributions of occupations, this can be seen in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 153, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task2: (she|he) works as a/an", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For clearly female occupations, the three Norwegian models NorBERT2, NB-BERT, and NB-BERT_Large have the highest F1 scores, with respectively 0.71, 0.91, and 0.91. The Norwegian model with the lowest score is NorBERT, which here too is outperformed by XLM-RoBERTa. The multilingual mBERT model seems to suffer from representations of clearly female occupations. Turning instead to clearly male occupations, mBERT is the third best performing model, with an F1 of 0.81, preceded by NorBERT2 with 0.87 F1, and NB-BERT and NB-BERT_Large with both an F1 of 0.97. XLM-RoBERTa still has a higher result than NorBERT with respectively F1 scores of 0.45 and 0.22. The overall observation here is that the three largest Norwegian models have a quite accurate representation of clearly female and male occupations compared to the multilingual ones. It also seems that the size of the training data matters, as NorBERT does not equate with other models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task2: (she|he) works as a/an", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For balanced occupations, and compared to the first task, models in Task2 seem to either have a representation of occupations as being female or males ones. NorBERT2, NB-BERT, and XLM-RoBERTa seems to be accurate when it comes to representing the occupations as female, but performs poorly when it comes to mapping them to male occupations, in particular for XLM-RoBERTa. In contrast, NorBERT, NB-BERT_Large and mBERT seem to have a good representation of occupations as being males ones, with mBERT not portraying any occupations as being female occupations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task2: (she|he) works as a/an", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this task, we use the set of most frequent Norwegian first names from 1880 to 2021. Contrary to the previous two tasks, here we mask the occupations (total of 418), and compute the probability of each occupation co-occurring with female and male first names. While tasks 3 and 4 are quite similar to tasks 1 and 2, we are here switching what is being masked, and focus on more than just two pronouns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task3: [name] is a/an [occupation]", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "From Table 2 , we can see that similarly to the two previous tasks, NB-BERT_Large is the highest performing model, followed by the two other big Norwegian models NB-BERT and NorBERT2. XLM-RoBERTa outperforms the smallest Norwegian model NorBERT, while mBERT is the least performing one. The results for this task are comparable to the most similar task, Task1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 12, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task3: [name] is a/an [occupation]", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Zooming in on class-level F1 scores, all four Norwegian models are good at representing female occupations, outperforming both multilingual models. The best performing model is here again NB-BERT_Large with mBERT being the least performing one. For male occupations, all models achieve high scores, with NorBERT2 achieving the high- As for the two previous tasks, we also look at the balanced and unbalanced occupations from the gold data, and explore how each of these are reflected in the models using Task3's bias probe. These can be seen in Figure 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 545, |
|
"end": 553, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task3: [name] is a/an [occupation]", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "For clearly female occupations (unbalanced_F), all Norwegian models in addition to XLM-RoBERTa have high F1 scores. Similarly to previous tasks, mBERT is the least performing one with an F1 score of 0.23. For clearly male occupations (unbalanced_M) all models have high F1 scores, with NB-BERT_Large scoring highest with an F1 of 0.98, followed by NorBERT2 (0.96), NB-BERT (0.93), XLM-RoBERTa (0.89), mBERT (0.79), and NorBERT (0.71). The three Norwegian models NorBERT2, NB-BERT, and NB-BERT_Large, in addition to XLM-RoBERTa seem to have a rather good representation of clearly female and male occupations. NorBERT seems to lack some of the female occupations, while mBERT suffers even more.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task3: [name] is a/an [occupation]", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "For balanced occupations, where models should have an equally good representation of both genders, only NorBERT and NB-BERT_Large seem to reflect this. NorBERT2 and XLM-RoBERTa are a bit better at representing male occupations, while NB-BERT and mBERT seem to be much better at representing males than at representing females.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task3: [name] is a/an [occupation]", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Similarly to Task3, we mask occupations and investigate their correlations with female and male first names. As for Task2, we here use the probe fixed by the sequence \"works as a/an\". From Table 2 , it is apparent that the three big Norwegian models NorBERT2, NB-BERT, and NB-BERT_Large with respective F1 scores of 0.72, 0.80, 0.74, are the models with the highest scores for the task. The two mulitlingual models mBERT and XLM-RoBERTa seem to achieve similar scores, while NorBERT gets the lowest F1 score which is maybe less surprising. The probe would expect a description of a person with first name followed by the description of the occupation. As NorBERT is trained on newspaper articles and Wikipedia, the presence of such patterns might be less probable than e.g. in books and literary works, which all of the other Norwegian models have been exposed to in their training data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 196, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task4: [name] works as a/an [occupation]", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "For class-level F1 scores, the best model is NB-BERT on representing both female and male occupations. NorBERT2 and NB-BERT_Large are also very good at representing both genders. However, NorBERT and XLM-RoBERTa seem to be more accurate in representing female occupations, while mBERT behaves in the opposite direction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task4: [name] works as a/an [occupation]", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "As for other tasks, we also explored the behavior of the models with regards to balanced and unbalanced distributions of occupations in the gold standard, and how these are reflected in the models. This can be seen in Figure 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 226, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task4: [name] works as a/an [occupation]", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Similar to previous tasks NorBERT2, NB-BERT, and NB-BERT_Large have good representations of clearly female occupations, while NorBERT and XLM-RoBERTa have similar performances, and mBERT has the lowest performance. For clearly male occupations, NorBERT seems to suffer most, while XLM-RoBERTa performs equally for male representation. The four remaining models have high F1 values, with NB-BERT and NB-BERT_Large achieving highest scores with an F1 of 0.97. For balanced occupations, NorBERT, Nor-BERT2, NB-BERT_Large, and XLM-RoBERTa have decent F1 scores and seem to represent occu-pations as female ones. NB-BERT have a good representation of occupations for both genders, while mBERT again seem to have a better representation of male occupations than those of females.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task4: [name] works as a/an [occupation]", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "We here focus on the clearly balanced and non balanced occupations from our gold data. All occupations that have between 0 and 10% differences between the distribution of genders are referred to as balanced occupations. Clearly female occupations are those whose distribution exceeds 75%, and similarly to the male counterparts, all occupations where male represent 75% of the total distribution, are referred to as clearly male occupations. We create a different set of probes, where we again mask the occupation and investigate their correlations with female and male first names. The difference between this task and say Task 3, is that for the occupation lawyer, advokat in Norwegian, the template in Task3 would be: \"Oda er advokat\" (\"Oda is a lawyer\"), while in Task5 it would be: \"advokaten Oda\" (\"the lawyer Oda\"), where the occupation is a pre-nominal modifier. While the main idea remains the same, exploring occupational biases in pre-trained language models, we here experiment with syntactic variations of the templates of bias probes to see how the models behave and whether different probes will give different signs of biases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task5: the [occupation] [name]", |
|
"sec_num": "5.5" |
|
}, |
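
{

"text": "To illustrate the syntactic variation described above, the sketch below scores a small set of single-token candidate occupations under both the Task3-style and the Task5-style probe for the same first name; the model identifier, the Norwegian probe wording, and the candidate lists are illustrative assumptions, and multi-token occupations would require a different scoring strategy.\n\nfrom transformers import pipeline\n\nfill = pipeline('fill-mask', model='ltgoslo/norbert')\nmask = fill.tokenizer.mask_token\nname = 'Oda'\n\ntask3_probe = f'{name} er {mask}'  # '[name] is a/an [occupation]'\ntask5_probe = f'{mask} {name}'     # 'the [occupation] [name]', definite form in Norwegian\n\n# Indefinite candidate forms for Task3, definite forms for Task5.\nfor probe, targets in [(task3_probe, ['advokat', 'sykepleier']), (task5_probe, ['advokaten', 'sykepleieren'])]:\n    outputs = fill(probe, targets=targets)\n    print(probe, {o['token_str'].strip(): round(o['score'], 4) for o in outputs})",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task5: the [occupation] [name]",

"sec_num": "5.5"

},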
|
{ |
|
"text": "Focusing on the balanced occupations, from Table 2, all models achieve an F1 score of at least 0.46, with NB-BERT reaching the highest F1 value of 0.69. There is no clear difference in performance between the Norwegian and multilingual models. For the unbalanced occupations, NorBERT achieves best F1 score with a value of 0.83. Followed by NB-BERT, NorBERT2, and NB-BERT_Large with respectively 0.77, 0.76, and 0.76 F1 values. While the two multilingual models have at least 0.20 F1 values less than the least performing Norwegian model. That NorBERT is the highest performing here comes perhaps as no surprise. As it has been trained on newspaper articles and Wikipedia pages, the form of the template seems natural in e.g. reporting cases where people are introduced by their occupations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task5: the [occupation] [name]", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "Class-based F1 scores can be seen in Figure 5 . The four Norwegian models have good representations of both clearly female (unbalanced_F) and clearly male (unbalanced_M) occupations. With NorBERT achieving higher scores on both genders, and being the best model. NorBERT2, NB-BERT, and NB-BERT_large have a bit lower F1 values for clearly female occupations, but are still outperforming the multilingual models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 45, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task5: the [occupation] [name]", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "For the balanced occupations, NB-BERT and NB-BERT_Large are the only models with an F1 higher than 0.50 for female occupations, while NorBERT, NorBERT2, and XLM-RoBERTa performing for the first time worse than mBERT. For the representation of males in balanced occupations, most models achieve good F1 scores, with the exception of NB-BERT_Large with an F1 of 0.44. We believe that this is again a sign of the input data the models have been exposed to during their training. Templates as the [occupation][name] might not be a frequent language use in literary works, or parliament and government reports, nor in Wikipedia pages. We believe that this might have impacted the performance of the models exposed to these types of data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task5: the [occupation] [name]", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "One of our main observations is that models behave differently based on the template used as bias probe. The templates we have used, in e.g., Task1 and Task2, and Task3 and Task4, differ only by one token, and do not change the semantics of the template even if it changes its syntactic realization. This might both be due to the input data on which the models have been trained on, but can also be a manifestation of the fragility of the template-based approach. While these types of approaches do shed light on the inner representations of models, it is difficult to point out why exactly a subtle change in the expression of a template can seemingly alter a model's representation. Another interesting observation, is that languagespecific models seem to be better at identifying the clearly unbalanced occupations, that demographically are clearly female or male occupations. While both language-specific and multilingual models are not able to correctly represent gender-balanced occupations. This in turn of course, indicates that these models do contain bias, and mostly map gender-balanced occupations as male-dominated ones. To give a simple example of this phenomenon, we show in Figure 6 a couple of handpicked examples of demographically balanced and unbalanced occupations from our gold data for the first task, Task1: [pronoun] is a/an [occupation] . We compare these realworld representations to those of each of the four Norwegian and two multilingual models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1350, |
|
"end": 1362, |
|
"text": "[occupation]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1190, |
|
"end": 1198, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The occupations with positive values in gold (green bar, first to the left in each group) are female-dominated occupations, and occupations with negative values are male-dominated occupations. As previously mentioned, occupations with values [\u221210, +10] in gold are deemed to be genderbalanced occupations. In Figure 6 , the occupations diplomat, doctor, associate professor, and judge are demographically gender-balanced occupations in Norway. The occupations midwife, secretary, and nurse are female-dominated, and the occupations pilot, plumber, and bricklayer are maledominated. As can be seen from the figure, all four Norwegian models are very good at representing the clearly female-and male-dominated occupations (with the exception of NorBERT2 for secretary). The same holds for the multilingual models, except for mBERT for nurse, and XLM-RoBERTa for bricklayer.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 317, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "When it comes to gender-balanced occupations, it is quite clear from Figure 6 that all models fail to predict probabilities near the real demographic distribution. However, NorBERT gives the closest distribution for the two occupations diplomat and associate professor, while for doctor, it is the two multilingual models and mBERT and XLM-RoBERTa that give the closest distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 77, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have presented in this paper an investigation into how a demographic distribution of occupations, along two gender dimensions, is reflected in pre-trained language models. The demographic distribution is a real-world representation from the Norwegian statistics bureau. Instead of giving a normative analysis of biases, we give a descriptive assessment of the distribution of occupations, and investigate how these are reflected in four Norwegian and two multilingual language models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We have generated simple bias probes for five different tasks combining pronouns and occupations, and first names and occupations. Our main observations are that Norwegian language-specific models give closer results to the real-world distribution of clearly gendered occupations. Moreover, all models, language-specific and multilingual, have a biased representation of gender-balanced occupations. Our investigations also show the fragility of template-based approaches, and the importance of the models' training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In future work, we plan to extend our investigations and include several demographic distributions from other countries, and compare them to their respective language-specific pre-trained language models to corroborate our findings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://github.com/SamiaTouileb/ Biases-Norwegian-Multilingual-LMs", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://utdanning.no/likestilling", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.ssb.no/befolkning/navn/ statistikk/navn", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.nb.no/sprakbanken/ ressurskatalog/oai-nb-no-sbr-4/ 5 https://huggingface.co/ltgoslo/ norbert26 https://github.com/NbAiLab/notram/ blob/master/guides/corpus_description.md 7 https://huggingface.co/NbAiLab/ nb-bert-large", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by industry partners and the Research Council of Norway with funding to MediaFutures: Research Centre for Responsible Media Technology and Innovation, through the centers for Research-based Innovation scheme, project number 309339.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgment", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Evaluating the underlying gender bias in contextualized word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Basta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Costa-Juss\u00e0", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noe", |
|
"middle": [], |
|
"last": "Casas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--39", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christine Basta, Marta R. Costa-juss\u00e0, and Noe Casas. 2019. Evaluating the underlying gender bias in con- textualized word embeddings. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 33-39, Florence, Italy. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "On the dangers of stochastic parrots: Can language models be too big", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Emily", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timnit", |
|
"middle": [], |
|
"last": "Bender", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angelina", |
|
"middle": [], |
|
"last": "Gebru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shmargaret", |
|
"middle": [], |
|
"last": "Mcmillan-Major", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shmitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proc. of the 2021 ACM Conference on Fairness, Accountability, and Transparency", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily M Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big?. In Proc. of the 2021 ACM Conference on Fairness, Accountability, and Transparency.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Navonil Majumder, and Soujanya Poria. 2021. Investigating gender bias in bert. Cognitive Computation", |
|
"authors": [ |
|
{ |
|
"first": "Rishabh", |
|
"middle": [], |
|
"last": "Bhardwaj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rishabh Bhardwaj, Navonil Majumder, and Soujanya Poria. 2021. Investigating gender bias in bert. Cog- nitive Computation, 13(4).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Good secretaries, bad truck drivers? occupational gender stereotypes in sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Jayadev", |
|
"middle": [], |
|
"last": "Bhaskaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isha", |
|
"middle": [], |
|
"last": "Bhallamudi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "62--68", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3809" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jayadev Bhaskaran and Isha Bhallamudi. 2019. Good secretaries, bad truck drivers? occupational gender stereotypes in sentiment analysis. In Proceedings of the First Workshop on Gender Bias in Natural Lan- guage Processing, pages 62-68, Florence, Italy. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Sociolinguistically driven approaches for just natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Su Lin", |
|
"middle": [], |
|
"last": "Blodgett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Su Lin Blodgett. 2021. Sociolinguistically driven ap- proaches for just natural language processing.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Language (technology) is power: A critical survey of \"bias\" in NLP", |
|
"authors": [ |
|
{ |
|
"first": "Su Lin", |
|
"middle": [], |
|
"last": "Blodgett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Solon", |
|
"middle": [], |
|
"last": "Barocas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5454--5476", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.485" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of \"bias\" in NLP. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5454- 5476, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in neural information processing systems, 29. Won Ik Cho", |
|
"authors": [ |
|
{ |
|
"first": "Tolga", |
|
"middle": [], |
|
"last": "Bolukbasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Venkatesh", |
|
"middle": [], |
|
"last": "Saligrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Kalai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "173--181", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3824" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Ad- vances in neural information processing systems, 29. Won Ik Cho, Ji Won Kim, Seok Min Kim, and Nam Soo Kim. 2019. On measuring gender bias in translation of gender-neutral pronouns. In Proceed- ings of the First Workshop on Gender Bias in Natu- ral Language Processing, pages 173-181, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8440--8451", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.747" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bias in computer systems", |
|
"authors": [ |
|
{ |
|
"first": "Batya", |
|
"middle": [], |
|
"last": "Friedman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Nissenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "ACM Transactions on Information Systems (TOIS)", |
|
"volume": "", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Batya Friedman and Helen Nissenbaum. 1996. Bias in computer systems. ACM Transactions on Informa- tion Systems (TOIS), 14(3).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Five sources of bias in natural language processing. Language and Linguistics Compass", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shrimai", |
|
"middle": [], |
|
"last": "Prabhumoye", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Hovy and Shrimai Prabhumoye. 2021. Five sources of bias in natural language processing. Lan- guage and Linguistics Compass, 15(8).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Social biases in NLP models as barriers for persons with disabilities", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Hutchinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vinodkumar", |
|
"middle": [], |
|
"last": "Prabhakaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Denton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kellie", |
|
"middle": [], |
|
"last": "Webster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Denuyl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5491--5501", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.487" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen De- nuyl. 2020. Social biases in NLP models as barriers for persons with disabilities. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 5491-5501, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Examining gender and race bias in two hundred sentiment analysis systems", |
|
"authors": [ |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saif", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "43--53", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S18-2005" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Svetlana Kiritchenko and Saif Mohammad. 2018. Ex- amining gender and race bias in two hundred sen- timent analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Compu- tational Semantics, pages 43-53, New Orleans, Louisiana. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Operationalizing a national digital library: The case for a norwegian transformer model", |
|
"authors": [ |
|
{ |
|
"first": "Per Egil", |
|
"middle": [], |
|
"last": "Kummervold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Javier", |
|
"middle": [], |
|
"last": "de la Rosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Freddy", |
|
"middle": [], |
|
"last": "Wetjen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svein Arne", |
|
"middle": [], |
|
"last": "Brygfjeld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proc. of the 23rd Nordic Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Per Egil Kummervold, Javier de la Rosa, Freddy Wet- jen, and Svein Arne Brygfjeld. 2021. Operationaliz- ing a national digital library: The case for a norwe- gian transformer model. In Proc. of the 23rd Nordic Conference on Computational Linguistics (NoDaL- iDa 2021).", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Large-scale contextualised language modelling for norwegian", |
|
"authors": [ |
|
{ |
|
"first": "Andrey", |
|
"middle": [], |
|
"last": "Kutuzov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Barnes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Velldal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lilja", |
|
"middle": [], |
|
"last": "\u00d8vrelid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Oepen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proc. of the 23rd Nordic Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja \u00d8vrelid, and Stephan Oepen. 2021. Large-scale con- textualised language modelling for norwegian. In Proc. of the 23rd Nordic Conference on Computa- tional Linguistics (NoDaLiDa 2021).", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The winograd schema challenge", |
|
"authors": [ |
|
{ |
|
"first": "Hector", |
|
"middle": [], |
|
"last": "Levesque", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ernest", |
|
"middle": [], |
|
"last": "Davis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leora", |
|
"middle": [], |
|
"last": "Morgenstern", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Thirteenth international conference on the principles of knowledge representation and reasoning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hector Levesque, Ernest Davis, and Leora Morgen- stern. 2012. The winograd schema challenge. In Thirteenth international conference on the princi- ples of knowledge representation and reasoning.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "HONEST: Measuring hurtful sentence completion in language models", |
|
"authors": [ |
|
{ |
|
"first": "Debora", |
|
"middle": [], |
|
"last": "Nozza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Federico", |
|
"middle": [], |
|
"last": "Bianchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.naacl-main.191" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Debora Nozza, Federico Bianchi, and Dirk Hovy. 2021. HONEST: Measuring hurtful sentence com- pletion in language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Assessing gender bias in machine translation -a case study with google translate", |
|
"authors": [ |
|
{ |
|
"first": "Marcelo", |
|
"middle": [ |
|
"O", |
|
"R" |
|
], |
|
"last": "Prates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [ |
|
"H", |
|
"C" |
|
], |
|
"last": "Avelar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Lamb", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.48550/ARXIV.1809.02208" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcelo O. R. Prates, Pedro H. C. Avelar, and Luis Lamb. 2018. Assessing gender bias in machine translation -a case study with google translate.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Gender bias in coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Rudinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Naradowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Leonard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "8--14", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Addressing exposure bias with document minimum risk training: Cambridge at the WMT20 biomedical translation task", |
|
"authors": [ |
|
{ |
|
"first": "Danielle", |
|
"middle": [], |
|
"last": "Saunders", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fifth Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "862--869", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danielle Saunders and Bill Byrne. 2020. Addressing exposure bias with document minimum risk train- ing: Cambridge at the WMT20 biomedical transla- tion task. In Proceedings of the Fifth Conference on Machine Translation, pages 862-869, Online. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "What do we expect from multiple-choice QA systems?", |
|
"authors": [ |
|
{ |
|
"first": "Krunal", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3547--3553", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.findings-emnlp.317" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Krunal Shah, Nitish Gupta, and Dan Roth. 2020. What do we expect from multiple-choice QA systems? In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 3547-3553, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The woman worked as a babysitter: On biases in language generation", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Sheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Premkumar", |
|
"middle": [], |
|
"last": "Natarajan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.48550/ARXIV.1909.01326" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Release strategies and the social impacts of language models", |
|
"authors": [ |
|
{ |
|
"first": "Irene", |
|
"middle": [], |
|
"last": "Solaiman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Brundage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jack", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Askell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ariel", |
|
"middle": [], |
|
"last": "Herbert-Voss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gretchen", |
|
"middle": [], |
|
"last": "Krueger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jong", |
|
"middle": [ |
|
"Wook" |
|
], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Kreps", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.09203" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Rad- ford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, et al. 2019. Release strategies and the so- cial impacts of language models. arXiv preprint arXiv:1908.09203.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A survey on gender bias in natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Karolina", |
|
"middle": [], |
|
"last": "Stanczak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isabelle", |
|
"middle": [], |
|
"last": "Augenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.48550/ARXIV.2112.14168" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karolina Stanczak and Isabelle Augenstein. 2021. A survey on gender bias in natural language process- ing.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Evaluating gender bias in machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Stanovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1679--1684", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1164" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Stanovsky, Noah A. Smith, and Luke Zettle- moyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer", |
|
"authors": [ |
|
{ |
|
"first": "Linting", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Constant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihir", |
|
"middle": [], |
|
"last": "Kale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rami", |
|
"middle": [], |
|
"last": "Al-Rfou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Siddhant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Barua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Raffel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.naacl-main.41" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mi- hir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods", |
|
"authors": [ |
|
{ |
|
"first": "Jieyu", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianlu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Yatskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vicente", |
|
"middle": [], |
|
"last": "Ordonez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "15--20", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2003" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "How does BERT's attention change when you fine-tune? an analysis methodology and a case study in negation scope", |
|
"authors": [ |
|
{ |
|
"first": "Yiyun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4729--4747", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.429" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yiyun Zhao and Steven Bethard. 2020. How does BERT's attention change when you fine-tune? an analysis methodology and a case study in negation scope. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4729-4747, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Task1, class-level F1 values focusing on balanced and unbalanced occupations.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Task2, class-level F1 values focusing on balanced and unbalanced occupations.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Task3, class-level F1 values focusing on balanced and unbalanced occupations. est F1 of 0.84, and NorBERT achieving the lowest score of 0.60 F1.", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Task4, class-level F1 values focusing on balanced and unbalanced occupations.", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Task5, class-level F1 values focusing on balanced and unbalanced occupations.", |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Example of balanced and unbalanced occupations in gold data, and each model's prediction in Task1.", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>: A selection of occupations from the Norwegian statistics bureau, the gold reference distribution of occu-pations and genders. The occupations presented here are either dominated by more than 98% of either gender, or have a more balanced distribution (underlined percentages) between both female and male genders.</td></tr></table>", |
|
"text": "", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Task1: [pronoun] is a/an</td></tr></table>", |
|
"text": "Macro F1 of models compared to the real-world \"gold\" distribution.", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |