{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:02:12.250581Z"
},
"title": "Using Gender-and Polarity-Informed Models to Investigate Bias",
"authors": [
{
"first": "Samia",
"middle": [],
"last": "Touileb",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oslo",
"location": {}
},
"email": "samiat@uio.no"
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oslo",
"location": {}
},
"email": "liljao@uio.no"
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oslo",
"location": {}
},
"email": "erikve@uio.no"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this work we explore the effect of incorporating demographic metadata in a text classifier trained on top of a pre-trained transformer language model. More specifically, we add information about the gender of critics and book authors when classifying the polarity of book reviews, and the polarity of the reviews when classifying the genders of authors and critics. We use an existing data set of Norwegian book reviews with ratings by professional critics, which has also been augmented with gender information, and train a document-level sentiment classifier on top of a recently released Norwegian BERT-model. We show that gender-informed models obtain substantially higher accuracy, and that polarity-informed models obtain higher accuracy when classifying the genders of book authors. For this particular data set, we take this result as a confirmation of the gender bias in the underlying label distribution, but in other settings we believe a similar approach can be used for mitigating bias in the model.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this work we explore the effect of incorporating demographic metadata in a text classifier trained on top of a pre-trained transformer language model. More specifically, we add information about the gender of critics and book authors when classifying the polarity of book reviews, and the polarity of the reviews when classifying the genders of authors and critics. We use an existing data set of Norwegian book reviews with ratings by professional critics, which has also been augmented with gender information, and train a document-level sentiment classifier on top of a recently released Norwegian BERT-model. We show that gender-informed models obtain substantially higher accuracy, and that polarity-informed models obtain higher accuracy when classifying the genders of book authors. For this particular data set, we take this result as a confirmation of the gender bias in the underlying label distribution, but in other settings we believe a similar approach can be used for mitigating bias in the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As is well established, training data for NLP tasks may contain various types of bias that can be inherited by the models we train, and that may potentially lead to unintended and undesired effects when deployed (Bolukbasi et al., 2016) . The bias can stem from the unlabeled texts used for pretraining of language models (LMs), or from the language or the label distribution used for tuning a downstream classifier. Typically, when a classifier is fitted on top of a pre-trained LM for a given task, only textual data is considered by the learned representations.",
"cite_spans": [
{
"start": 212,
"end": 236,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we investigate the effect of adding metadata information about demographic variables that are known to be associated with bias in the training data. Specifically, we focus on the task of binary sentiment classification based on data where gender has previously been shown to be correlated with the label distribution. The data we use are Norwegian book reviews, where the gender of both critics and book authors have previously been annotated (Touileb et al., 2020) . When considering all pairs of male/female critics/authors, Touileb et al. (2020) showed that female critics tended to assign lower ratings to female authors, relative to other gender pairs. In this work we explore the effect of adding information about gender to a document-level polarity classifier trained on top of a pre-trained BERT model for Norwegian, showing that the model is able to take this metadata into account when making predictions. Through experiments with gender classification on the same data set, we also demonstrate that the language of the reviews is itself indeed gendered.",
"cite_spans": [
{
"start": 456,
"end": 478,
"text": "(Touileb et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 540,
"end": 561,
"text": "Touileb et al. (2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We believe that adding this type of metadata about e.g., demographic information when available can in many cases be used to mitigate bias in models. Consider the case of a model for toxic language classification; it seems intuitively plausible that incorporating information about users could help reducing the risk of false positives for selfreferential mentions by marginalized groups. However, we have a different focus for the particular experiments reported here: we show how adding information about gender in a polarity classifier confirms gender bias, by showing how a genderinformed model obtains substantially higher accuracy when evaluated on a biased label distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In what follows, we start in Section 3 with an overview of related work, after providing a brief bias statement in Section 2. In Section 4 we present our dataset, and give a detailed description of our experiments in Section 5. We present and analyse our results in Section 6, followed by an error analysis in Section 7. Finally, we summarize our findings and discuss future works in Section 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work focuses on gender bias, which we identify as the differences in language use between persons, on the unique basis of their genders. The concrete task that we deal with in the current paper is that of polarity classification of book reviews, using labels derived from the numerical ratings assigned by professional critics. We use an existing dataset of book reviews dubbed NoReC gender (Touileb et al., 2020) , which is a subset of the Norwegian Review Corpus (Velldal et al., 2018) , a dataset primarily used for document-level sentiment analysis. The subset NoReC gender has previously been augmented with information about the gender of both critics and book authors. Through experiments with gender predictions of both critics and book authors, we demonstrate the presence of gendered language in these reviews. Previous work has also shown that the distribution of ratings in the dataset to some degree is correlated with the gender of the critics and the authors. Consequently, work on sentiment classification on the basis of the dataset could risk inheriting aspects of gender bias unknowingly, either in the model predictions themselves or in how these are evaluated, or both. One of our motivations in this work is exactly to assess whether the predictions of sentiment classifiers trained on review data may to some degree depend on gender, by explicitly incorporating this as a variable in the model.",
"cite_spans": [
{
"start": 396,
"end": 418,
"text": "(Touileb et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 470,
"end": 492,
"text": "(Velldal et al., 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias statement",
"sec_num": "2"
},
{
"text": "Note that there are also issues of what could be argued to be representational harm (Blodgett et al., 2020) associated with the underlying encoding of gender itself, since only the binary gender categories of male/female are present in the data. While the dataset we use only reflects binary gender categories, we acknowledge the fact that gender as an identity spans a wider spectrum than this.",
"cite_spans": [
{
"start": 84,
"end": 107,
"text": "(Blodgett et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias statement",
"sec_num": "2"
},
{
"text": "State-of-the-art results for various NLP tasks nowadays typically build on some pre-trained transformer language models like BERT (Devlin et al., 2019) . Despite their great achievements, these models have been shown to include various types of bias (Zhao et al., 2020; Bartl et al., 2020; Basta et al., 2019; Kaneko and Bollegala, 2019; Friedman et al., 2019; Kurita et al., 2019) .",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 250,
"end": 269,
"text": "(Zhao et al., 2020;",
"ref_id": "BIBREF33"
},
{
"start": 270,
"end": 289,
"text": "Bartl et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 290,
"end": 309,
"text": "Basta et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 310,
"end": 337,
"text": "Kaneko and Bollegala, 2019;",
"ref_id": "BIBREF17"
},
{
"start": 338,
"end": 360,
"text": "Friedman et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 361,
"end": 381,
"text": "Kurita et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "Recent works have shown the advantage of adding extra information to pre-trained language models for numerous tasks, e.g., dialog systems (Madotto et al., 2018) , natural language inference (Chen et al., 2018) , and machine translation (Zaremoodi et al., 2018) . Knowledge graphs have also been used to enrich embedding information. Zhang et al. (2019) use entries from Wikidata, as well as their relation to each others, to represent and inject structural knowledge aggregates to a collection of large-scale corpora. They show that their approach reduces noisy data and improves BERT fine-tuning on limited datasets. Bourgonje and Stede (2020) enrich a German BERT model with linguistic knowledge represented as a lexicon as well as manually generated syntactic features. Peinelt et al. (2020) enrich a BERT with LDA topics, and show that this combination improves performance of semantic similarity. Ostendorff et al. (2019) use a combination of metadata about books to enrich a BERTbased multi-class classification model. They train a BERT model on the title and the texts of each book, and concatenate the output with metadata information and author embeddings from Wikipedia, and feed them into a Multilayer Perceptron (MLP).",
"cite_spans": [
{
"start": 138,
"end": 160,
"text": "(Madotto et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 190,
"end": 209,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 236,
"end": 260,
"text": "(Zaremoodi et al., 2018)",
"ref_id": "BIBREF31"
},
{
"start": 333,
"end": 352,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF32"
},
{
"start": 773,
"end": 794,
"text": "Peinelt et al. (2020)",
"ref_id": null
},
{
"start": 902,
"end": 926,
"text": "Ostendorff et al. (2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "When it comes to gender and gender bias, previous research has been devoted to the identification of bias in textual content and models (Garimella and Mihalcea, 2016; Schofield and Mehr, 2016; Kiritchenko and Mohammad, 2018) , and in input representations as static and contextualised embeddings (Takeshita et al., 2020; Bartl et al., 2020; Zhao et al., 2020; Basta et al., 2019; Kaneko and Bollegala, 2019; Friedman et al., 2019; Bolukbasi et al., 2016) . A considerable amount of previous work has also gone into either mitigating existing bias in embeddings (Takeshita et al., 2020; Maudslay et al., 2019; Zmigrod et al., 2019; Garg et al., 2018) , making them gender neutral (Zhao et al., 2018) , or using debiased embeddings (Escud\u00e9 Font and Costa-juss\u00e0, 2019). Instead of debiasing and mitigating bias in embeddings, some work has focused on creating gender balanced corpora (Costajuss\u00e0 et al., 2020; Costa-juss\u00e0 and de Jorge, 2020).",
"cite_spans": [
{
"start": 136,
"end": 166,
"text": "(Garimella and Mihalcea, 2016;",
"ref_id": "BIBREF14"
},
{
"start": 167,
"end": 192,
"text": "Schofield and Mehr, 2016;",
"ref_id": "BIBREF26"
},
{
"start": 193,
"end": 224,
"text": "Kiritchenko and Mohammad, 2018)",
"ref_id": "BIBREF18"
},
{
"start": 296,
"end": 320,
"text": "(Takeshita et al., 2020;",
"ref_id": "BIBREF28"
},
{
"start": 321,
"end": 340,
"text": "Bartl et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 341,
"end": 359,
"text": "Zhao et al., 2020;",
"ref_id": "BIBREF33"
},
{
"start": 360,
"end": 379,
"text": "Basta et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 380,
"end": 407,
"text": "Kaneko and Bollegala, 2019;",
"ref_id": "BIBREF17"
},
{
"start": 408,
"end": 430,
"text": "Friedman et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 431,
"end": 454,
"text": "Bolukbasi et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 561,
"end": 585,
"text": "(Takeshita et al., 2020;",
"ref_id": "BIBREF28"
},
{
"start": 586,
"end": 608,
"text": "Maudslay et al., 2019;",
"ref_id": "BIBREF23"
},
{
"start": 609,
"end": 630,
"text": "Zmigrod et al., 2019;",
"ref_id": "BIBREF35"
},
{
"start": 631,
"end": 649,
"text": "Garg et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 679,
"end": 698,
"text": "(Zhao et al., 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "Several previous studies have focused on gender and gender bias in sentiment analysis, both from data and model perspectives. To name a few: Kiritchenko and Mohammad (2018) propose an evaluation corpus (Equity Evaluation Corpus) that can be used to mitigate biases towards a selection of genders and races. Occupational gender stereotypes exist in sentiment analysis models (Bhaskaran and Bhallamudi, 2019) , both in training data and in pre-trained contextualized models.",
"cite_spans": [
{
"start": 141,
"end": 172,
"text": "Kiritchenko and Mohammad (2018)",
"ref_id": "BIBREF18"
},
{
"start": 374,
"end": 406,
"text": "(Bhaskaran and Bhallamudi, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "Models have also been proposed to uncover gender biases (Hoyle et al., 2019) . Incorporating extra demographic information into sentiment classification models have also been successful. Hovy (2015) has shown that incorporation gender information (as embeddings) in models can improve sentiment classification. They show that such an approach can reduce the bias towards minorities, as for example females, who tend to communicate differently from the norm.",
"cite_spans": [
{
"start": 56,
"end": 76,
"text": "(Hoyle et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "In this paper, we do not focus on biases present in existing systems , nor do we try to mitigate them in a traditional way. We use a dataset of Norwegian book reviews for which a previous study has indicated some degree of gender bias in the label distribution of review ratings (Touileb et al., 2020) . Here, we investigate whether this bias is reflected in the text, as measured by classification scores on two tasks, namely binary sentiment and gender classification, and whether adding metadata information explicitly providing the gender of the authors and critics of the reviews, or the sentiment score of the review increases classification performance. Similarly to (Ostendorff et al., 2019) , we explore the effects of adding this metadata information to document classification tasks using a BERTbased model, in this case the Norwegian NorBERT (Kutuzov et al., 2021) .",
"cite_spans": [
{
"start": 279,
"end": 301,
"text": "(Touileb et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 674,
"end": 699,
"text": "(Ostendorff et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 854,
"end": 876,
"text": "(Kutuzov et al., 2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "In this work, we focus on gender effects in reviews written by male or female critics, which in turn rates the works of male and female authors. The dataset we use is the NoReC gender 1 (Touileb et al., 2020) subset of the Norwegian Review Corpus (NoReC (Velldal et al., 2018) ). NoReC gender is a corpus of 4,313 professional book reviews from several of the major Norwegian news sources. Each review is rated with a numerical score on a scale from 1 to 6 (represented by the number of dots on a die), assigned by a professional critic. The reviews also contain additional metadata information like the name of the critics, name of the book authors, and their respective genders.",
"cite_spans": [
{
"start": 254,
"end": 276,
"text": "(Velldal et al., 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "The numerical ratings and name of the critics were already provided in the metadata data of NoReC (Velldal et al., 2018) , while the name of the authors and the information about the genders were manually annotated with the release of NoReC gender (Touileb et al., 2020) . As pointed out by Touileb et al. (2020) , some of the reviews were written by children, unknown authors/critics, or by editors, these were not assigned genders and were therefore not included in our work. This results in a set of 4,083 documents. Table 1 shows an overview of the NoReC gender dataset in terms of total number of critics and authors, and their distribution across genders.",
"cite_spans": [
{
"start": 98,
"end": 120,
"text": "(Velldal et al., 2018)",
"ref_id": "BIBREF30"
},
{
"start": 248,
"end": 270,
"text": "(Touileb et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 291,
"end": 312,
"text": "Touileb et al. (2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "Each review in NoReC gender comes with a numerical dice score from 1 to 6. Similarly to Touileb et al. (2020), we choose to focus on clear positive and negative reviews and therefore only use reviews with negative ratings representing dice scores 1, 2, and 3, and reviews with positive ratings representing scores 5 and 6. However, in order to control for the distribution of positive and negative labels, we have selected a subset of reviews with rating 5 to have a balanced distribution of positive and negative reviews in the train set. This results in a subset of 683 negative and 708 positive reviews for NoReC gender . A distribution of these across the train, dev, and test splits can be seen in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 703,
"end": 710,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
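To make the label construction above concrete, here is a minimal sketch of the rating-to-label mapping; the function name and the convention of returning None for excluded ratings are illustrative assumptions, as the paper does not publish code:

```python
def rating_to_label(dice_score: int):
    """Map a 1-6 dice rating to a binary polarity label.

    Scores 1-3 count as negative and 5-6 as positive, as described
    above; a rating of 4 is ambiguous and excluded from the data.
    """
    if dice_score in (1, 2, 3):
        return "negative"
    if dice_score in (5, 6):
        return "positive"
    return None  # dice_score == 4: review is not used
```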
{
"text": "The dataset NoReC gender also contains a bias in the distribution of labels, based on the gender of the critics and the authors (Touileb et al., 2020) . Figure 1 shows the total number of ratings in our dataset, where the first letter (M/F) indicates the gender of the critic and the second letter indicates that of the author. For example, MF represents reviews written by male critics reviewing the works of female authors. Here we observe a clear difference in the ratings given by female critics to female authors (FF). While most reviews seem to have a certain amount of balance between positive and negative polarities with slightly more positive than negative reviews, for FF it is the opposite. This, in addition to the unbalance between the total number of reviews based on gender, represent the bias present in NoReC gender 's label distribution.",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "(Touileb et al., 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 153,
"end": 162,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "We use the Norwegian BERT model NorBERT 2 (Kutuzov et al., 2021) . The model uses the same architecture as BERT base cased (Devlin et al., 2019) , and uses a 28,600 entry Norwegian-specific sentence piece vocabulary. It was jointly trained on both official Norwegian written forms Bokm\u00e5l and Nynorsk, on 200M sentences (around 2 billion tokens) from Wikipedia articles and news articles from the Norwegian News Corpus. 3 We use a similar architecture to Ostendorff et al. (2019) as shown in Figure 2 . We feed our review texts to a NorBERT architecture of 12 hidden layers consisting of 768 units each. These representations and the metadata are subsequently concatenated and passed to a two-layer Multilayer Perceptron (MLP), using ReLu as activation function. The output layer (SoftMax) gives for each task its binary output, i.e., either binary sentiment classification labels, or binary gender classification labels. We set the learning rate for AdamW (Loshchilov and Hutter, 2019) to 5e \u2212 5, and batch size to 32. We train the model for 5 epochs, and keep the best model on the dev set with regards to F 1 .",
"cite_spans": [
{
"start": 42,
"end": 64,
"text": "(Kutuzov et al., 2021)",
"ref_id": "BIBREF20"
},
{
"start": 123,
"end": 144,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 956,
"end": 985,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 491,
"end": 499,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
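The architecture just described can be summarized in a short PyTorch sketch. This is a minimal reconstruction, not the authors' released code: it assumes the HuggingFace transformers API for loading NorBERT, the MLP hidden size of 256 is an arbitrary placeholder (the paper does not state it), and the softmax is folded into the cross-entropy loss, as is idiomatic in PyTorch:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MetadataEnrichedClassifier(nn.Module):
    """NorBERT encoder whose [CLS] representation is concatenated with
    one-hot metadata and passed through a two-layer MLP (cf. Figure 2)."""

    def __init__(self, metadata_dim: int, hidden_dim: int = 256, num_labels: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("ltgoslo/norbert")
        enc_dim = self.encoder.config.hidden_size  # 768 for NorBERT
        self.mlp = nn.Sequential(
            nn.Linear(enc_dim + metadata_dim, hidden_dim),  # hidden size assumed
            nn.ReLU(),
            nn.Linear(hidden_dim, num_labels),
        )

    def forward(self, input_ids, attention_mask, metadata):
        # metadata: float tensor of shape (batch, metadata_dim)
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.mlp(torch.cat([cls, metadata], dim=-1))

# Training setup as described above: AdamW, lr 5e-5, batch size 32, 5 epochs.
# optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
# loss_fn = nn.CrossEntropyLoss()  # applies the softmax internally
```

The baseline NorBERT-none corresponds to the same model without the metadata input and the concatenation step, as noted in the caption of Figure 2.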
{
"text": "We have experimented with various input sizes (first 300 tokens, first 512 tokens, and first 128 + 2 https://huggingface.co/ltgoslo/ norbert 3 https://www.nb.no/sprakbanken/ ressurskatalog/oai-nb-no-sbr-4/ Figure 2 : Architecture of our metadata-enriched classification model. Our baseline model has the same architecture except for the metadata input and the concatenation step. last 383 tokens) both with tokenized and untokenized texts. The best results were achieved using untokenized texts, and using the first 128 and last 383 tokens, as pointed out by Sun et al. (2020) . These are the input sizes used in the models we report in this work.",
"cite_spans": [
{
"start": 559,
"end": 576,
"text": "Sun et al. (2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 206,
"end": 214,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
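A minimal sketch of this head-plus-tail truncation, assuming the review is already tokenized into a list of ids (the function name is ours):

```python
def head_tail_truncate(token_ids: list, head: int = 128, tail: int = 383) -> list:
    """Keep the first `head` and last `tail` tokens of an over-long review,
    following the truncation strategy of Sun et al. (2020)."""
    if len(token_ids) <= head + tail:
        return token_ids
    return token_ids[:head] + token_ids[-tail:]
```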
{
"text": "Our metadata is one-hot encoded, and has a dimension of two for gender (female and male), and two for polarity (positive and negative). In the case where we combine information about the genders of both authors and critics, the dimension is four (i.e., two gender dimensions each).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
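As an illustration, this encoding can be realized as below; the orderings inside the one-hot blocks (female before male, positive before negative) are assumptions made for the sketch:

```python
GENDER = {"female": [1, 0], "male": [0, 1]}          # assumed ordering
POLARITY = {"positive": [1, 0], "negative": [0, 1]}  # assumed ordering

def encode_metadata(author_gender=None, critic_gender=None, polarity=None):
    """Concatenate one-hot blocks for whichever metadata a model variant uses."""
    vector = []
    if author_gender is not None:
        vector += GENDER[author_gender]
    if critic_gender is not None:
        vector += GENDER[critic_gender]
    if polarity is not None:
        vector += POLARITY[polarity]
    return vector

# NorBERT-ga and NorBERT-gc each use one gender block (dimension 2);
# NorBERT-gac uses both, giving the dimension of four described above.
assert len(encode_metadata(author_gender="female", critic_gender="male")) == 4
```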
{
"text": "For the task of binary gender classification, we perform a set of four experiments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 NorBERT-none: without any metadata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 NorBERT-ga: adding information about the gender of authors. \u2022 NorBERT-gc: adding information about the gender of critics. \u2022 NorBERT-gac: adding information about the gender of both the authors and the critics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "For each of the binary classification of genders of authors or critics, we perform the following two experiments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 NorBERT-none: classifying the gender of authors or critics without any metadata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "NorBERT \u2022 NorBERT-polarity: classifying the gender of authors or critics by adding information about the polarity (positive and negative) of the review.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model dev test",
"sec_num": null
},
{
"text": "In all of our experiments, we use the task specific NorBERT-none as baselines. Table 3 shows F 1 scores of our binary sentiment classification models on both dev and test splits of NoReC gender . The baseline model NorBERTnone that only uses NorBERT without metadata performs quite well on both dev and test splits with F 1 scores of 82.45 and 80.66 respectively. But as can be seen, the model is the least accurate in our set of experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Model dev test",
"sec_num": null
},
{
"text": "We observe that the NorBERT-ga model, which incorporate information about the gender of the authors is the most accurate model on the test set, with an F 1 score of 84.21, while it is the third most accurate on the dev split with an F 1 score of 84.51. NorBERT-gc, which adds information about the gender of the critics, also yields better results than the baseline with an F 1 score of 84.92 on dev, and 82.33 on test. The best performing model on the dev set is NorBERT-gac, with added information about the genders of both authors and critics. This model is also the second best model on test with a F 1 score of 82.92.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The results presented in Table 3 show that gender-informed models with metadata informa-tion improve the task of binary sentiment classification with respectively 2.06, 2.47, and 2.8 F 1 points on the dev set, and 3.55, 1.67, and 2.26 F 1 points on test for the three models NorBERT-ga, NorBERTgc, and NorBERT-gac. This suggests that for a binary classification task on NoReC gender , knowing the gender of the authors and critics clearly influences the performance of the model. The scores of our gender classification tasks are presented in Table 4 . As previously mentioned, for the gender classification, we have two tasks: classification of the gender of the authors, and classification of the gender of the critics.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 543,
"end": 550,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "For the classification of the authors' genders, the baseline classifier NorBERT-none performs quite good with a F 1 score of 89.57 and 90.12 on dev and test respectively. However, adding the metadata about the polarity of the review (if it's positive or negative) influences the classification task by 5.36 and 4.48 points on dev and test respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Interestingly, we observe the opposite situation for the classification of the gender of critics. Here, the baseline model NorBERT-none outperforms the NorBERT-polarity model by 5.41 and 6.08 F 1 score points on respectively dev and test splits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "For the task of author gender classification, knowing the polarity of the review clearly influences the classification. Again, this indicates that gender and polarity are correlated in our data. The results also point to a difference between the gender of authors and critics. However, additional information about the polarity of the review, seems to hurt the classification of the genders of critics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In order to gain further insight into the differences between the models we are comparing and in particular, the classification differences caused by the addition of information on gender/polarity, we perform an error analysis by comparing, for each task, how our models perform compared to the taskspecific baselines. Figure 3 shows how the three models NorBERTga, NorBERT-gc, and NorBERT-gac have different predictions than their baseline NorBERT-none for binary sentiment classification. We show the relative differences of true positives as a heatmap. These are made on the test predictions of each model over all five runs. Positive numbers (dark purple) specify that the model made more correct predictions than the baseline NorBERT-none, while negative numbers (white) indicate it made fewer correct predictions. The abbreviations FF, FM, MF, and MM represent the gender of the critic reviewing the work of an author of a given gender. FF refers to female critic and female author, FM female critic and male author, MF male critic and female author, and MM for male author and male critic.",
"cite_spans": [],
"ref_spans": [
{
"start": 319,
"end": 327,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "7"
},
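The quantity shown in the heatmap cells can be computed as in the following sketch; the function names and the per-pair bookkeeping are illustrative assumptions:

```python
from collections import Counter

def true_positives_by_pair(preds, golds, pairs):
    """Count correct predictions per critic/author gender pair (FF, FM, MF, MM)."""
    tp = Counter()
    for pred, gold, pair in zip(preds, golds, pairs):
        if pred == gold:
            tp[pair] += 1
    return tp

def relative_difference(tp_model, tp_baseline, keys=("FF", "FM", "MF", "MM")):
    """One heatmap cell per gender pair: the model's true positives minus
    the baseline's, aggregated over the test predictions of all five runs."""
    return {k: tp_model[k] - tp_baseline[k] for k in keys}
```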
{
"text": "It is clear that all three gender-informed models become more accurate in the classification of reviews written by female critics and reviewing the works of female authors (FF). As previously mentioned, and as pointed out by Touileb et al. (2020) , female critics tend to be more negative towards female authors, and therefore there are few reviews that fall within this category with positive polarity. Adding information about the gender of the authors and the critics, seems to help the model identify some of the FF reviews that NorBERTnone was not able to classify correctly. This information seems to be particularly important for NorBERT-ga, which was the best model on the test set achieving 12 F 1 points more than the baseline on FF. This model also seems slightly better at identifying reviews for MM. A closer analysis differentiating the positive and negative polarities also shows that the three models are more accurate precisely in identifying the positive reviews in the FF subset.",
"cite_spans": [
{
"start": 225,
"end": 246,
"text": "Touileb et al. (2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "7"
},
{
"text": "The same applies to a lesser degree for FM. Knowing the gender of the authors and the critics, separately, enables the models to correctly classify more reviews than NorBERT-none. In contrary, for MF, only knowing the gender of both the critics and authors seems to slightly improve classification. For the MM reviews, the NorBERT-ga model is better at identifying the positive reviews, while NorBERT-gac is better at identifying the negative reviews. Figure 4 shows the breakdown of the relative differences of true positives. Here again, the relative differences are made on the test predictions of each model over all five runs. Positive numbers (dark blue) represent the cases where the model made more correct predictions than the baseline NorBERT-none, while negative numbers (white) indicates the opposite. For clarity, we add a prefix to each model in the figure to specify the task. GA-NorBERT-pn represent the model NorBERT-pn for the task of author gender classification, while GC-NorBERT-pn represents the task of critic gender ",
"cite_spans": [],
"ref_spans": [
{
"start": 452,
"end": 460,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "7"
},
{
"text": "For the author gender classification task, as can be seen in Figure 4 , having extra information about the polarity of the review helps the model NorBERT-pn (GA NorBERT-pn) to better predict the gender of the author if she's a female. This again is compared to the task specific baseline NorBERT-none. It also seems that this model makes a few more mistakes than the baseline when it comes to the author being a male. For gender classification of the critics, adding metadata information seems to negatively affect the model's ability to identify female critics. The model NorBERT-pn (GC NorBERT-pn) is more accurate when it comes to identifying the gender of male critics compared to the baseline, achieving 21 and 7 F 1 points more than the baseline on respectively MF and MM.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 69,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "classification.",
"sec_num": null
},
{
"text": "This corroborates our previous observations, that adding metadata information about the polarity of reviews aids the identification of female authors for author gender classifiers. While for critic gender classification it fails at identifying female critics, but is accurate in identifying males.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "classification.",
"sec_num": null
},
{
"text": "In this work, we have investigated the effect of adding information about the gender of critics and book authors when classifying the polarity of book reviews, and the polarity of the reviews when classifying the genders of authors and critics. Using a document-level classifier on top of a recently released Norwegian BERT-model, we have shown that gender-informed models obtain substantially higher accuracy, and that polarity-informed models obtain higher accuracy when classifying the gender of the book authors. In further analysis, we have observed clear differences in the classification results for male/female authors/critics. Specifically, we demonstrated that adding to NorBERT information about the genders of critics and book authors influences a binary sentiment classification task by being more accurate in predicting positive reviews for female authors.We have also shown that using polarity information helps the identification of female authors, but seems to greatly hurt the identification of female critics. Some directions for future work include quantifying the bias in the original NorBERT model. As our experiments showed, using the baseline model with only NorBERT and no metadata achieves good results, and we therefore plan to evaluate the existing biases in NorBERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "All of our experiments were run on resources provided by UNINETT Sigma2 of the Norwegian National Infrastructure for High Performance Computing and Data Storage, under the NeIC-NLPL umbrella.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unmasking contextual stereotypes: Measuring and mitigating BERT's gender bias",
"authors": [
{
"first": "Marion",
"middle": [],
"last": "Bartl",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marion Bartl, Malvina Nissim, and Albert Gatt. 2020. Unmasking contextual stereotypes: Measuring and mitigating BERT's gender bias. In Proceedings of the Second Workshop on Gender Bias in Natu- ral Language Processing, pages 1-16, Barcelona, Spain (Online). Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evaluating the underlying gender bias in contextualized word embeddings",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Basta",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Noe",
"middle": [],
"last": "Casas",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "33--39",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3805"
]
},
"num": null,
"urls": [],
"raw_text": "Christine Basta, Marta R. Costa-juss\u00e0, and Noe Casas. 2019. Evaluating the underlying gender bias in con- textualized word embeddings. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 33-39, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Good secretaries, bad truck drivers? occupational gender stereotypes in sentiment analysis",
"authors": [
{
"first": "Jayadev",
"middle": [],
"last": "Bhaskaran",
"suffix": ""
},
{
"first": "Isha",
"middle": [],
"last": "Bhallamudi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "62--68",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3809"
]
},
"num": null,
"urls": [],
"raw_text": "Jayadev Bhaskaran and Isha Bhallamudi. 2019. Good secretaries, bad truck drivers? occupational gender stereotypes in sentiment analysis. In Proceedings of the First Workshop on Gender Bias in Natural Lan- guage Processing, pages 62-68, Florence, Italy. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Language (technology) is power: A critical survey of \"bias\" in NLP",
"authors": [
{
"first": "Su Lin",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Solon",
"middle": [],
"last": "Barocas",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5454--5476",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.485"
]
},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of \"bias\" in NLP. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5454- 5476, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"T"
],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "4349--4357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Ad- vances in neural information processing systems, pages 4349-4357.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Exploiting a lexical resource for discourse connective disambiguation in German",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Bourgonje",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5737--5748",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Bourgonje and Manfred Stede. 2020. Exploit- ing a lexical resource for discourse connective dis- ambiguation in German. In Proceedings of the 28th International Conference on Computational Linguis- tics, pages 5737-5748, Barcelona, Spain (Online). International Committee on Computational Linguis- tics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural natural language inference models enhanced with external knowledge",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2406--2417",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1224"
]
},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2406-2417, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fine-tuning neural machine translation on genderbalanced datasets",
"authors": [
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "de Jorge",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta R. Costa-juss\u00e0 and Adri\u00e0 de Jorge. 2020. Fine-tuning neural machine translation on gender- balanced datasets. In Proceedings of the Second Workshop on Gender Bias in Natural Language Pro- cessing, pages 26-34, Barcelona, Spain (Online). Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "GeBioToolkit: Automatic extraction of gender-balanced multilingual corpus of Wikipedia biographies",
"authors": [
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Pau",
"middle": [],
"last": "Li Lin",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Espa\u00f1a-Bonet",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4081--4088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta R. Costa-juss\u00e0, Pau Li Lin, and Cristina Espa\u00f1a- Bonet. 2020. GeBioToolkit: Automatic extrac- tion of gender-balanced multilingual corpus of Wikipedia biographies. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4081-4088, Marseille, France. European Lan- guage Resources Association.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Equalizing gender bias in neural machine translation with word embeddings techniques",
"authors": [
{
"first": "Joel",
"middle": [
"Escud\u00e9"
],
"last": "Font",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "147--154",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3821"
]
},
"num": null,
"urls": [],
"raw_text": "Joel Escud\u00e9 Font and Marta R. Costa-juss\u00e0. 2019. Equalizing gender bias in neural machine transla- tion with word embeddings techniques. In Proceed- ings of the First Workshop on Gender Bias in Natu- ral Language Processing, pages 147-154, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Relating word embedding gender biases to gender gaps: A cross-cultural analysis",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Sonja",
"middle": [],
"last": "Schmer-Galunder",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Rye",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3803"
]
},
"num": null,
"urls": [],
"raw_text": "Scott Friedman, Sonja Schmer-Galunder, Anthony Chen, and Jeffrey Rye. 2019. Relating word embed- ding gender biases to gender gaps: A cross-cultural analysis. In Proceedings of the First Workshop on",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "18--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gender Bias in Natural Language Processing, pages 18-24, Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Londa",
"middle": [],
"last": "Schiebinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2018,
"venue": "Sciences",
"volume": "115",
"issue": "16",
"pages": "3635--3644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences, 115(16):E3635-E3644.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Zooming in on gender differences in social media",
"authors": [
{
"first": "Aparna",
"middle": [],
"last": "Garimella",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media (PEOPLES)",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aparna Garimella and Rada Mihalcea. 2016. Zooming in on gender differences in social media. In Proceed- ings of the Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in So- cial Media (PEOPLES), pages 1-10, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Demographic factors improve classification performance",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "752--762",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1073"
]
},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy. 2015. Demographic factors improve classi- fication performance. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 752-762, Beijing, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised discovery of gendered language through latent-variable modeling",
"authors": [
{
"first": "Alexander Miserlis",
"middle": [],
"last": "Hoyle",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Wolf-Sonkin",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1706--1716",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1167"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Miserlis Hoyle, Lawrence Wolf-Sonkin, Hanna Wallach, Isabelle Augenstein, and Ryan Cot- terell. 2019. Unsupervised discovery of gendered language through latent-variable modeling. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1706- 1716, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Gender-preserving debiasing for pre-trained word embeddings",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1641--1650",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1160"
]
},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 1641-1650, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Examining gender and race bias in two hundred sentiment analysis systems",
"authors": [
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "43--53",
"other_ids": {
"DOI": [
"10.18653/v1/S18-2005"
]
},
"num": null,
"urls": [],
"raw_text": "Svetlana Kiritchenko and Saif Mohammad. 2018. Ex- amining gender and race bias in two hundred sen- timent analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Compu- tational Semantics, pages 43-53, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Measuring bias in contextualized word representations",
"authors": [
{
"first": "Keita",
"middle": [],
"last": "Kurita",
"suffix": ""
},
{
"first": "Nidhi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Ayush",
"middle": [],
"last": "Pareek",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "166--172",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3823"
]
},
"num": null,
"urls": [],
"raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contex- tualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166-172, Florence, Italy. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Large-scale contextualised language modelling for norwegian",
"authors": [
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 23rd Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja \u00d8vrelid, and Stephan Oepen. 2021. Large-scale con- textualised language modelling for norwegian. In Proceedings of the 23rd Nordic Conference on Com- putational Linguistics (NoDaLiDa 2021).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1468--1478",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1136"
]
},
"num": null,
"urls": [],
"raw_text": "Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowl- edge bases into end-to-end task-oriented dialog sys- tems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1468-1478, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "It's all in the name: Mitigating gender bias with name-based counterfactual data substitution",
"authors": [
{
"first": "Rowan",
"middle": [
"Hall"
],
"last": "Maudslay",
"suffix": ""
},
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5267--5275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It's all in the name: Mit- igating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing, pages 5267- 5275, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Enriching bert with knowledge graph embeddings for document classification",
"authors": [
{
"first": "Malte",
"middle": [],
"last": "Ostendorff",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bourgonje",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Moreno-Schneider",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Rehm",
"suffix": ""
},
{
"first": "Bela",
"middle": [],
"last": "Gipp",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.08402"
]
},
"num": null,
"urls": [],
"raw_text": "Malte Ostendorff, Peter Bourgonje, Maria Berger, Ju- lian Moreno-Schneider, Georg Rehm, and Bela Gipp. 2019. Enriching bert with knowledge graph embeddings for document classification. arXiv preprint arXiv:1909.08402.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "2020. tBERT: Topic models and BERT joining forces for semantic similarity detection",
"authors": [
{
"first": "Nicole",
"middle": [],
"last": "Peinelt",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7047--7055",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.630"
]
},
"num": null,
"urls": [],
"raw_text": "Nicole Peinelt, Dong Nguyen, and Maria Liakata. 2020. tBERT: Topic models and BERT joining forces for semantic similarity detection. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 7047-7055, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Genderdistinguishing features in film dialogue",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Schofield",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Mehr",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Fifth Workshop on Computational Linguistics for Literature",
"volume": "",
"issue": "",
"pages": "32--39",
"other_ids": {
"DOI": [
"10.18653/v1/W16-0204"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandra Schofield and Leo Mehr. 2016. Gender- distinguishing features in film dialogue. In Proceed- ings of the Fifth Workshop on Computational Lin- guistics for Literature, pages 32-39, San Diego, Cal- ifornia, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "How to fine-tune bert for text classification?",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2020. How to fine-tune bert for text classification?",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Can existing methods debias languages other than English? first attempt to analyze and mitigate Japanese word embeddings",
"authors": [
{
"first": "Masashi",
"middle": [],
"last": "Takeshita",
"suffix": ""
},
{
"first": "Yuki",
"middle": [],
"last": "Katsumata",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Rzepka",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Araki",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "44--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masashi Takeshita, Yuki Katsumata, Rafal Rzepka, and Kenji Araki. 2020. Can existing methods debias languages other than English? first attempt to an- alyze and mitigate Japanese word embeddings. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing, pages 44-55, Barcelona, Spain (Online). Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Gender and sentiment, critics and authors: a dataset of Norwegian book reviews",
"authors": [
{
"first": "Samia",
"middle": [],
"last": "Touileb",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "125--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samia Touileb, Lilja \u00d8vrelid, and Erik Velldal. 2020. Gender and sentiment, critics and authors: a dataset of Norwegian book reviews. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing, pages 125-138, Barcelona, Spain (Online). Association for Computational Lin- guistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "NoReC: The Norwegian Review Corpus",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Cathrine",
"middle": [],
"last": "Stadsnes",
"suffix": ""
},
{
"first": "Eivind",
"middle": [
"Alexander"
],
"last": "Bergem",
"suffix": ""
},
{
"first": "Samia",
"middle": [],
"last": "Touileb",
"suffix": ""
},
{
"first": "Fredrik",
"middle": [],
"last": "J\u00f8rgensen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th edition of the Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4186--4191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Velldal, Lilja \u00d8vrelid, Cathrine Stadsnes Eivind Alexander Bergem, Samia Touileb, and Fredrik J\u00f8rgensen. 2018. NoReC: The Norwegian Review Corpus. In Proceedings of the 11th edition of the Language Resources and Evaluation Conference, pages 4186-4191, Miyazaki, Japan.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Adaptive knowledge sharing in multitask learning: Improving low-resource neural machine translation",
"authors": [
{
"first": "Poorya",
"middle": [],
"last": "Zaremoodi",
"suffix": ""
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "656--661",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2104"
]
},
"num": null,
"urls": [],
"raw_text": "Poorya Zaremoodi, Wray Buntine, and Gholamreza Haffari. 2018. Adaptive knowledge sharing in multi- task learning: Improving low-resource neural ma- chine translation. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 656- 661, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "ERNIE: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1441--1451",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1139"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 1441-1451, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Gender bias in multilingual embeddings and cross-lingual transfer",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Subhabrata",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Saghar",
"middle": [],
"last": "Hosseini",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"Hassan"
],
"last": "Awadallah",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2896--2907",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.260"
]
},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Subhabrata Mukherjee, Saghar Hosseini, Kai-Wei Chang, and Ahmed Hassan Awadallah. 2020. Gender bias in multilingual embeddings and cross-lingual transfer. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 2896-2907, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Learning gender-neutral word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4847--4853",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1521"
]
},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 4847-4853, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1651--1661",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1161"
]
},
"num": null,
"urls": [],
"raw_text": "Ran Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmen- tation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 1651-1661, Florence, Italy. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Distribution of ratings given by critics to works of authors. The first letter (M/F) indicates the gender of the critic and the second that of the author.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Relative differences of true positives for binary sentiment classification on test compared to their baseline NorBERT-none. Darker colors represent more correct predictions than the baseline.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Relative differences of true positives for binary authors and critic gender classification on test compared to their relative baselines NorBERT-none.",
"uris": null
},
"TABREF1": {
"num": null,
"content": "<table><tr><td colspan=\"4\">Train Dev. Test Total</td></tr><tr><td>pos 568</td><td>69</td><td>71</td><td>708</td></tr><tr><td>neg 568</td><td>60</td><td>55</td><td>683</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Total number of unique male and female critics and authors in NoReC gender ."
},
"TABREF2": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Total number of positive and negative reviews in the data splits of NoReC gender ."
},
"TABREF4": {
"num": null,
"content": "<table><tr><td/><td>Model</td><td>dev</td><td>test</td></tr><tr><td>Author</td><td colspan=\"3\">NorBERT-none NorBERT-polarity 94.93 94.60 89.57 90.12</td></tr><tr><td>Critic</td><td colspan=\"3\">NorBERT-none NorBERT-polarity 64.99 57.76 70.40 63.84</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Model performance on dev and test for binary sentiment classification. NorBERT-none is the baseline model. All models report mean F 1 ."
},
"TABREF5": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Model performance of binary gender classification on dev and test for authors and critics. Models report mean F 1 ."
}
}
}
}