{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:02:53.309824Z"
},
"title": "Generating Gender Augmented Data for NLP",
"authors": [
{
"first": "Nishtha",
"middle": [],
"last": "Jain",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Trinity College Dublin",
"location": {}
},
"email": ""
},
{
"first": "Maja",
"middle": [],
"last": "Popovic",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University",
"location": {
"addrLine": "3 Microsoft",
"settlement": "Dublin"
}
},
"email": ""
},
{
"first": "Declan",
"middle": [],
"last": "Groves",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University",
"location": {
"addrLine": "3 Microsoft",
"settlement": "Dublin"
}
},
"email": "3degroves@microsoft.com"
},
{
"first": "Eva",
"middle": [],
"last": "Vanmassenhove",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tilburg University",
"location": {}
},
"email": "4e.o.j.vanmassenhove@tilburguniversity.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Gender bias is a frequent occurrence in NLPbased applications, especially pronounced in gender-inflected languages. Bias can appear through associations of certain adjectives and animate nouns with the natural gender of referents, but also due to unbalanced grammatical gender frequencies of inflected words. This type of bias becomes more evident in generating conversational utterances where gender is not specified within the sentence, because most current NLP applications still work on a sentence-level context. As a step towards more inclusive NLP, this paper proposes an automatic and generalisable rewriting approach for short conversational sentences. The rewriting method can be applied to sentences that, without extra-sentential context, have multiple equivalent alternatives in terms of gender. The method can be applied both for creating gender balanced outputs as well as for creating gender balanced training data. The proposed approach is based on a neural machine translation (NMT) system trained to 'translate' from one gender alternative to another. Both the automatic and manual analysis of the approach show promising results for automatic generation of gender alternatives for conversational sentences in Spanish.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Gender bias is a frequent occurrence in NLPbased applications, especially pronounced in gender-inflected languages. Bias can appear through associations of certain adjectives and animate nouns with the natural gender of referents, but also due to unbalanced grammatical gender frequencies of inflected words. This type of bias becomes more evident in generating conversational utterances where gender is not specified within the sentence, because most current NLP applications still work on a sentence-level context. As a step towards more inclusive NLP, this paper proposes an automatic and generalisable rewriting approach for short conversational sentences. The rewriting method can be applied to sentences that, without extra-sentential context, have multiple equivalent alternatives in terms of gender. The method can be applied both for creating gender balanced outputs as well as for creating gender balanced training data. The proposed approach is based on a neural machine translation (NMT) system trained to 'translate' from one gender alternative to another. Both the automatic and manual analysis of the approach show promising results for automatic generation of gender alternatives for conversational sentences in Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent studies have exposed challenging systematic issues related to bias that extend to a range of AI applications, including Natural Language Processing (NLP) technology (Costa-juss\u00e0, 2019; Blodgett et al., 2020) . Observed bias problems range from copying biases already existing in data to claims that the training process can lead to an exacerbation or amplification of observed biases (Zhou and Schiebinger, 2018; Vanmassenhove et al., 2021) . The algorithms learn to maximize the overall probability of an occurrence, leading to preferences for more frequently appearing training patterns.",
"cite_spans": [
{
"start": 172,
"end": 191,
"text": "(Costa-juss\u00e0, 2019;",
"ref_id": "BIBREF4"
},
{
"start": 192,
"end": 214,
"text": "Blodgett et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 391,
"end": 419,
"text": "(Zhou and Schiebinger, 2018;",
"ref_id": "BIBREF16"
},
{
"start": 420,
"end": 447,
"text": "Vanmassenhove et al., 2021)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With this work, we propose a method for generating (more) balanced data in terms of one of the main types of bias frequently observed in language: gender bias. Gender bias can occur in language due to the fact that some languages have a way of explicitly marking (natural or grammatical) gender while others do not (Stahlberg et al., 2007) . Gender bias in translation is usually manifested when animate entities (e.g. professions) are translated from gender neutral language (e.g. English) into a gendered language (e.g. Spanish) because the instances seen in training data are biased. Also, conversational utterances are prone to bias, both in machine translation as well as in other NLP applications, because systems often do not have the ability to provide multiple gender variants. Therefore, users are simply presented with the most probable option which is prone to bias. In our work, we aim to enable the generation of multiple gender variants by expanding each sentence with the missing gender variants, thus fostering inclusion in online conversations/NLP applications. Generating gender variants can and should also be used to create gender balanced conversational data that can be used to train less biased NLP models such as machine translation models, language models, chat bots, etc.",
"cite_spans": [
{
"start": 315,
"end": 339,
"text": "(Stahlberg et al., 2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike previous studies, we did not want to limit ourselves to one specific gender phenomenon, such as gender markings on professions (Zmigrod et al., 2019) ) (for which the gender can easily be swapped by using hand-crafted lists) or first person personal pronouns (Habash et al., 2019) ). The objective of this research aims to include as many cases as possible of gender alternatives related not only to gender of persons but also to grammatical gender of the objects referred to. In Example 1, (a) illustrates an example of two alternatives for a sentence where there is agreement with the grammatical gender of an object referred to in the previous sentence, while in (b) there is agreement with the gender of the speaker/writer (i.e. a person).",
"cite_spans": [
{
"start": 134,
"end": 156,
"text": "(Zmigrod et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 266,
"end": 287,
"text": "(Habash et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Example 1. (a) [MALE] At this stage, our approach does not discriminate between human referents and objects. It is furthermore limited to the generation of binary gender alternatives. We are aware of the importance and challenge of dealing with non-binary gender (Ackerman, 2019) which we aim to tackle in future work.",
"cite_spans": [
{
"start": 15,
"end": 21,
"text": "[MALE]",
"ref_id": null
},
{
"start": 263,
"end": 279,
"text": "(Ackerman, 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The research was carried out in collaboration with an anonymous industry partner with a specific application in mind that deals with conversational sentences. Our approach aims to alleviate gender bias in the said application. We focus on one gender-rich language (Spanish), however, scalability and generalizability were kept in mind while designing the approach. Our approach can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Identifying (appropriate) sentences/segments that should have the opposite gender variant for some words. POS sequences were used to extract such segments from the OpenSubtitles corpus 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Creating gendered variants for the words in such segments by applying a rule-based approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Training a neural rewriter on the compiled gender-parallel Spanish data in order to be able to automatically generate gendered variants on unseen data sets. This additional step makes the approach more scalable as it removes the need for any preprocessing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The first two steps are necessary since there is a lack of readily available open-source genderparallel data for training. Although language knowledge and a POS tagger are necessary for these steps, the human effort and necessity for external linguistic tools are minimal (contrary to other approaches which heavily rely on linguistic tools (Zmigrod et al., 2019) or on manually created gender-parallel data (Habash et al., 2019) .",
"cite_spans": [
{
"start": 341,
"end": 363,
"text": "(Zmigrod et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 408,
"end": 429,
"text": "(Habash et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the literature on gender in NLP, two main approaches for bias mitigation can be identified: (a) approaches that attempt to mitigate bias during model or word representation training, and/or (b) approaches that aim to augment the data by creating more variety in the training set (pre-processing step) or in the output (post-processing step). In the following paragraphs, we focus on the latter as it is most closely related to our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There have been attempts to artificially increase the variety in already existing data sets by creating alternatives to sentences in order to decrease the overall bias (in terms of gender). 4 This approach has been referred to in the literature as 'Counterfactual Data Augmentation'(CDA) (Lu et al., 2018) . Their CDA approach consists of a simple bidirectional dictionary of gendered words such as he:she, her:him/his, queen:king, etc. Zhao et al. (2018) does not use the term CDA as this was introduced later, but what they describe can be interpreted as a rudimentary approach to CDA: they augmented the existing data set by adding additional sentences in which personal pronouns 'he' and 'she' had been swapped.",
"cite_spans": [
{
"start": 190,
"end": 191,
"text": "4",
"ref_id": null
},
{
"start": 288,
"end": 305,
"text": "(Lu et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 437,
"end": 455,
"text": "Zhao et al. (2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another CDA approach is described in Zmigrod et al. (2019) . Similar to Lu et al. (2018) , the approach relies on a bidirectional dictionary of animate nouns. Unlike Lu et al. (2018) , pronouns are not handled and the languages worked on are Hebrew and Spanish, languages that have more gender markers than English. Since solely changing the nouns into their male/female counterpart often requires the enforcement of grammatical gender agreement of accompanying articles and adjectives, they introduce Markov Random Fields with optional neural parametrisation that can infer the effect of the swap on the remaining words in the segment. Their approach is limited to mitigating gender stereotypes related to animate nouns and relies on dependency trees, lemmata, POS-tags and morpho-syntactic tags in order to solve issues related to the morpho-syntactic agreement.",
"cite_spans": [
{
"start": 37,
"end": 58,
"text": "Zmigrod et al. (2019)",
"ref_id": "BIBREF17"
},
{
"start": 72,
"end": 88,
"text": "Lu et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 166,
"end": 182,
"text": "Lu et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the field of machine translation (MT), due to specific discrepancies between the information encoded in the source and target data, there has been some work on generating the appropriate gender variant for ambiguous source sentences. 5 Vanmassenhove et al. (2019) appends gender tags to the source side of the training data indicating the gender of the speaker. As such, during testing, the desired (or multiple) gender variant(s) can be generated by adding tags. Basta et al. (2020) also experiment with incorporating a gender tag, and investigate adding the previous sentence as additional context information. Both methods result in the improvement of automatic MT scores as well as on gender accuracy for English-to-Spanish translation. Similarly, Bentivogli et al. (2020) developed NMT systems using gender tags and evaluated them specifically on gender phenomena. The work described in Habash et al. (2019) is the most similar to ours. They proposed an approach for automatic gender reinflection (\"re-gendering\") for Arabic. They propose a method which consists of two components: a gender classifier and a NMT gender rewriter. In order to build the NMT rewriter, they first manually created a corpus annotated with gender information. Subsequently, each gendered sentence is re-gendered manually in order to obtain the necessary gender-parallel data for training. This way, they are able to provide gender alternatives for sentences with natural gender agreement with the first person singular.",
"cite_spans": [
{
"start": 237,
"end": 238,
"text": "5",
"ref_id": null
},
{
"start": 467,
"end": 486,
"text": "Basta et al. (2020)",
"ref_id": "BIBREF1"
},
{
"start": 755,
"end": 779,
"text": "Bentivogli et al. (2020)",
"ref_id": "BIBREF2"
},
{
"start": 895,
"end": 915,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our research, in contrast, aims to augment existing data with gender alternatives in a broader sense: it is not limited to singular first person phenomena, ambiguity in multilingual settings, or phenomena related solely to gender agreement. It involves the gender of adjectives, past participles, and several types of pronouns for which the referent is not explicitly mentioned within the context of the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As mentioned in the introduction, our main objective is to create an automatic gender rewriter using NMT. In order to do so, we need gender-parallel training data that consists of possible gender variants in both directions (masculine-to-feminine and feminine-to-masculine). Such data sets are, unfortunately, not publicly available, which is why we first leveraged linguistic knowledge and rules to generate a sufficient amount of gender-parallel data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating gender-parallel data",
"sec_num": "3"
},
{
"text": "Therefore, we identified the sequences of POS classes that show gender agreement in Spanish and can thus be 're-gendered': adjectives, past participles, and several types of pronouns. A detailed description of how the different word classes are tackled to generate gender alternatives is described below. We would like to point out that our target data consisted of very short sentences, where there is at most agreement with one referent. 6 As such, our approach is limited to tackle sentences alike and cannot handle the generation of alternatives for sentences where more than two gender alternatives could be generated (due to grammatical agreement of the re-genderable word with multiple entities).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating gender-parallel data",
"sec_num": "3"
},
{
"text": "Past participles In principle, almost all Spanish past participles have an explicit agreement with their referent and can thus be re-gendered. However, in certain contexts they should not be: if they follow or precede a referent noun (\"Pel\u00edcula aburrida\", \"Acceso permitido.\") thus agreeing with the gender of the noun, or if they follow the auxiliary verb \"haber\" thus representing past tense and not a property of a person/object (\"he enviado\", \"has descansado\"). If they appear in isolation (\"Ocupado/ocupada.\", \"Aburrido/aburrida.\"), or merely surrounded by interjections or punctuation (\"Ocupado/ocupada, gracias.\", \"Buenos dias, recibido/recibida, \u00a1gracias!\"), adverbs (\"muy cansado/cansada\"), or a linking verb (\"Estoy registrado/registrada.\", \"Parece acabado/acabada.\"), they can be re-gendered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-genderable word classes",
"sec_num": "3.1"
},
{
"text": "We also included pairs of past participles bound by conjunctions, referring to the same person or object, since in these sentences, both instances should be re-gendered (\"aburrido/aburrida y cansado/cansada.\", \"acabado/acabada y pagado/pagada.\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-genderable word classes",
"sec_num": "3.1"
},
{
"text": "Adjectives Many Spanish adjectives are gendered and have an explicit gender marker corresponding to the gender of its referent. However, some adjectives are gender neutral. Gendered and neutral adjectives can (largely) be identified based on their specific suffixes (for example \"-al\", \"nte\", \"-ble\", so the adjectives \"genial\", \"interesante\", and \"probable\" are neutral), while other suffixes indicate gendered adjectives (for example \"o/a\", so the adjective \"correcto/correcta\" has variants).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-genderable word classes",
"sec_num": "3.1"
},
{
"text": "In addition, similarly to past participles, the given context has to be taken into account for gendered adjectives: they should not be re-gendered if they immediately precede or follow a noun (with or without article) which determines the gender (\"Presupuestos adjuntos.\", \"\u00a1Maravillosa idea!\", \"La informaci\u00f3n correcta.\"). Also, adjectives following neutral demonstrative pronouns \"eso\" or \"esto\" should not be re-gendered (\"Eso es bueno.\"). Analogous to past participles, adjectives in isolation (\"Listo/Lista.\", \"perfecto/perfecta.\", \"seguro/segura.\", \"\u00a1fant\u00e1stico/fant\u00e1stica!\"), surrounded by punctuation (\"Correcto/correcta, saludos.\"), preceding verb (\"\u00bfEst\u00e1s listo/lista?\") or adverb (\"Es muy lindo/linda.\") can be re-gendered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-genderable word classes",
"sec_num": "3.1"
},
{
"text": "When two adjectives are present, in a conjunction, and refer to the same referent, both should be re-gendered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-genderable word classes",
"sec_num": "3.1"
},
{
"text": "Clitic pronouns Some Spanish clitic pronouns, namely \"lo(s)\" and \"la(s)\" should be re-gendered (e.g. \"Lo/la veo.\", \"Lo/la adjunto.\") while \"le(s)\" should not be changed (\"Le veo.\", \"Le digo.\"). However, in some cases \"lo\" can represent a general concept not referring to a particular object, such as in \"lo siento\" (I'm sorry), \"lo s\u00e9\" (I know). If some of these are re-gendered, the precision will decrease.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-genderable word classes",
"sec_num": "3.1"
},
{
"text": "Clitic pronouns attached to verbs Clitic pronouns can be attached to a verb infinitive (\"Gracias por acabarlo/acabarla.\" (thanks for finishing it), \"Quiero verlo/verla.\" (I want to see it)). Similar to the isolated clitic pronouns, there are certain exceptions, such as \"Es bueno saberlo\" (it is good to know). If the gender neutral clitic pronoun \"le\" is attached to a verb (\"Quiero tenerle informado.\" (I want to keep you/him/her informed)), it should not be re-gendered. Gendered pronouns attached to an imperative should also be re-gendered (\"D\u00e9jalo/D\u00e9jala.\" (leave it), \"Hazlo/Hazla.\" (do it)). On the other hand, clitic pronouns which refer to an indirect object, such as \"m\u00e1ndame\" (send me), are neutral. Finally, if there are two attached clitic pronouns, \"M\u00e1ndamelo/M\u00e1ndamela.\" (send it to me), only the gendered part (in this case \"lo\"/\"la\") should be re-gendered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-genderable word classes",
"sec_num": "3.1"
},
{
"text": "Demonstrative pronouns Demonstrative pronouns \"esto\", \"eso\" and \"aquello\" are neutral, while \"estos/estas\", \"este/esta\", \"ese/esa\", \"aquello/aquella\" are gendered. If the referent is missing in the sentence and the pronoun is gendered, they should be re-gendered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-genderable word classes",
"sec_num": "3.1"
},
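The word-level changes implied by the rules above mostly amount to swapping a gender-marking suffix (or a gendered clitic) while leaving neutral forms untouched. The following minimal Python sketch illustrates that idea only; it is not the authors' rule set (the actual rules are listed in the paper's Appendix), and the function name and suffix handling are purely illustrative.

```python
# A minimal sketch of the word-level swap implied by the rules above; the
# authors' actual rules are given in the paper's Appendix, and the function
# name and suffix handling here are illustrative only.

CLITIC_SWAP = {"lo": "la", "la": "lo", "los": "las", "las": "los"}

def swap_gender(token: str) -> str:
    """Return the opposite-gender form of a re-genderable word, or the
    token unchanged if it is neutral (e.g. -al/-nte/-ble adjectives, 'le')."""
    lower = token.lower()
    if lower in CLITIC_SWAP:            # isolated clitic pronouns "lo"/"la"
        return CLITIC_SWAP[lower]
    if lower.endswith("os"):            # plural masculine -> feminine
        return lower[:-2] + "as"
    if lower.endswith("as"):            # plural feminine -> masculine
        return lower[:-2] + "os"
    if lower.endswith("o"):             # masculine adjective/participle, or a
        return lower[:-1] + "a"         # clitic attached to a verb: "verlo"
    if lower.endswith("a"):             # feminine -> masculine: "cansada"
        return lower[:-1] + "o"
    return token                        # neutral word: leave unchanged

print(swap_gender("cansado"))      # cansada
print(swap_gender("verla"))        # verlo
print(swap_gender("interesante"))  # interesante (neutral, unchanged)
```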
{
"text": "Whether a gender alternative translation should be generated does not solely depend on the word classes it contains but also on the structure of the sentence. If the referent is missing in a sentence, then an additional variant with the opposite gender should be generated. If the referent is present in a sentence, only one gender variant is grammatically correct, and as such, these sentences are to be left unchanged. The presence or absence of a referent can be determined by the sequence of POS tags in a sentence 7 . For example, if we want to check whether a sentence with an adjective \"creo que es correcta\" (gloss: \"I believe (it) is correctfeminine\") needs an additional re-gendered variant or not, its POS sequence \"VERB CONJUNCTION VERB ADJECTIVE\" indicates that there is no referent noun within the given context. Therefore, another variant of the adjective \"correct\" should be provided: \"creo que es correcto\". In contrast, the sentence \"la soluci\u00f3n es correcta\" with POS sequence \"ARTICLE NOUN VERB ADJECTIVE\" contains a referent noun \"soluci\u00f3n\", and therefore it should not be re-gendered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding gender variants by rules",
"sec_num": "3.2"
},
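The sentence-level decision described above can be pictured as a lookup of a sentence's POS sequence in two pattern lists. The sketch below encodes only the two example patterns from this paragraph; the full re-genderable and neutral pattern lists are given in the Appendix, and the set-of-tuples representation is an assumption.

```python
# Illustrative sketch of the decision described above: a sentence is
# re-genderable when its POS sequence matches a pattern without a referent
# noun. Only the two example patterns from the text are encoded here; the
# full pattern lists are given in the paper's Appendix.

REGENDERABLE_PATTERNS = {
    ("VERB", "CONJUNCTION", "VERB", "ADJECTIVE"),   # "creo que es correcta"
}
NEUTRAL_PATTERNS = {
    ("ARTICLE", "NOUN", "VERB", "ADJECTIVE"),       # "la solución es correcta"
}

def needs_gender_variant(pos_sequence: tuple) -> bool:
    """True if an alternative gender variant should be generated."""
    if pos_sequence in NEUTRAL_PATTERNS:
        return False
    return pos_sequence in REGENDERABLE_PATTERNS

print(needs_gender_variant(("VERB", "CONJUNCTION", "VERB", "ADJECTIVE")))  # True
print(needs_gender_variant(("ARTICLE", "NOUN", "VERB", "ADJECTIVE")))      # False
```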
{
"text": "For each re-genderable sentence, we apply rules for changing the ending of the corresponding word, if necessary. The POS sequences to identify regenderable sentences and the subsequent rules used to re-gender the corresponding words in such sentences are given in detail in the Appendix. It is worth mentioning we also used POS sequences to identify neutral sentences (those which should be not re-gendered ) since we wanted the parallel corpus to contain both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding gender variants by rules",
"sec_num": "3.2"
},
{
"text": "In order to create gender-parallel data, a set of Spanish subtitles was downloaded from the OPUS (Tiedemann, 2012) website. 8 After basic filtering (removing too long and non-alpha numeric segments), a set of short sentences with up to 10 (untokenized) words was extracted. This candidate set consisted of 22 458 968 sentences. This data set was POS tagged using Treetagger 9 . The sentences matching the POS sequences mentioned in the Appendix were extracted from this data set. This set consisted of more than 1M sentences. For each extracted re-genderable sentence, the alternative gender variant is created by applying appropriate rules described in the Appendix. After applying rules on all re-genderable structures, we joined both re-gendering directions (masculine-tofeminine and feminine-to-masculine) in order to create a balanced data set. As already mentioned, the corpus also contains a number of sentences that are not to be regendered. By including these neutral sentences in our training data, we encourage the rewriter to: (a) learn when to generate alternatives and when not to, and (b) how to generate those alternatives, if necessary. In this way, a corpus with about 2.2M gender-parallel sentences was created. This corpus was then separated into train, development (\u223c1k sentences) and test (\u223c3k sentences) sets. The rewritten parts of the development and test sets were revised manually and the errors were corrected for about 6% of sentences and 1.5% of words. The training set, being large, was not verified manually, thus it contained some noise.",
"cite_spans": [
{
"start": 97,
"end": 114,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF11"
},
{
"start": 124,
"end": 125,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-parallel data",
"sec_num": "4"
},
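The corpus construction described above can be pictured roughly as follows. This is a hedged sketch only: the input file name is hypothetical, `pos_tag` stands in for a real tagger wrapper (the authors used TreeTagger), and `needs_gender_variant`/`swap_gender` refer to the earlier illustrative sketches, so it is not runnable without those pieces.

```python
# Rough sketch of the corpus preparation described above; file name and the
# pos_tag wrapper are hypothetical stand-ins, and needs_gender_variant /
# swap_gender are the earlier illustrative sketches.
import random

def keep(line: str) -> bool:
    # basic filtering: short sentences of up to 10 (untokenized) words,
    # dropping empty and purely non-alphabetic segments
    words = line.split()
    return 0 < len(words) <= 10 and any(w.isalpha() for w in words)

with open("opensubtitles.es") as f:                  # hypothetical input file
    candidates = [line.strip() for line in f if keep(line)]

pairs = []  # (source, target): both re-gendering directions plus neutral pairs
for sent in candidates:
    tokens = sent.split()
    pos = tuple(pos_tag(tokens))                     # stand-in for a POS tagger
    if needs_gender_variant(pos):
        variant = " ".join(swap_gender(t) for t in tokens)
        pairs.append((sent, variant))                # e.g. masculine -> feminine
        pairs.append((variant, sent))                # ... and the opposite direction
    else:
        pairs.append((sent, sent))                   # neutral: source == target

random.shuffle(pairs)
dev, test, train = pairs[:1000], pairs[1000:4000], pairs[4000:]
```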
{
"text": "In addition to OpenSubtitles, we also obtained data from the industry partner consisting of around 8 000 sentences readily available with all possible alternative versions of the sentences provided. An additional 22 000 sentences had to be revised manually in order to produce the correct gender variant for re-genderable sentences. This set was used as an additional test set for the re-writer. One part of this set can be handled by the described POS sequences and rules (\"structured test 1\"), while another part contains different POS sequences and cannot be handled by these rules at all (\"unstructured test 1\"). The latter test set will give a good estimation of the scalability of our approach. An overall split of data sets is described in Table 1 . The OpenSubtitles data was split in the standard way for machine translation, namely a few thousands of segments for development and test sets and the rest for the training set. ",
"cite_spans": [],
"ref_spans": [
{
"start": 747,
"end": 754,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Gender-parallel data",
"sec_num": "4"
},
{
"text": "Once we compiled a sufficient amount of genderparalell data, we were able to train our automatic rewriter. The automatic rewriter is a NMT system trained on the following parallel data: original sentences as the source language, and re-gendered sentence as the target language. For neutral sentences, the source and the target parts are identical. The NMT rewriter was built using the publicly available Sockeye 10 implementation (Hieber et al., 2018) of the Transformer architecture (Vaswani et al., 2017) . The system operates on subword units generated by byte-pair encoding (BPE) (Sennrich et al., 2016) . We set the number of BPE merging operations to 32000. We have experimented with the following setups:",
"cite_spans": [
{
"start": 430,
"end": 451,
"text": "(Hieber et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 484,
"end": 506,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 584,
"end": 607,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Rewriter",
"sec_num": "5"
},
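As a rough illustration of the subword preprocessing, the snippet below learns and applies BPE with 32000 merge operations using the subword-nmt package. The paper does not state which BPE tooling was used, so the package choice, the file names and the example Sockeye command (whose flags may differ between Sockeye versions) are assumptions.

```python
# Sketch of the BPE preprocessing with 32000 merge operations, here using the
# subword-nmt package; the package choice and file names are assumptions.
from subword_nmt import learn_bpe, apply_bpe

with open("train.src") as infile, open("bpe.codes", "w") as codes_out:
    learn_bpe.learn_bpe(infile, codes_out, num_symbols=32000)

with open("bpe.codes") as codes:
    bpe = apply_bpe.BPE(codes)

with open("train.src") as src, open("train.bpe.src", "w") as out:
    for line in src:
        out.write(bpe.process_line(line))

# The rewriter itself is then a standard Sockeye Transformer run on the BPE'd
# gender-parallel files, roughly (flags may differ between Sockeye versions):
#   sockeye-train -s train.bpe.src -t train.bpe.trg \
#                 -vs dev.bpe.src -vt dev.bpe.trg -o rewriter_model
```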
{
"text": "\u2022 a Standard NMT system without any additional tags",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Rewriter",
"sec_num": "5"
},
{
"text": "\u2022 an NMT system with neutrality/regenderability tags in the source part",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Rewriter",
"sec_num": "5"
},
{
"text": "The system with tags was built using the same technique as proposed in (Johnson et al., 2017) for multilingual MT systems and used for many other applications including gender-informed MT (Vanmassenhove et al., 2019). For our experiments, we added a label 'N' (neutral) or 'G' (re-genderable) to each source sentence. These tags are implicitly present in the gender-parallel data -if the source and the target parts differ, it is a re-genderable sentence, if they are identical it is neutral. Therefore, the tags are certainly available for the training and development sets, but they might not be available for the test sets. Therefore, this system was assessed in two ways:",
"cite_spans": [
{
"start": 71,
"end": 93,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Rewriter",
"sec_num": "5"
},
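The tag derivation is straightforward to sketch: a source sentence receives 'G' when its target side differs and 'N' when the two sides are identical. The exact token format of the tag (shown here as "&lt;G&gt;"/"&lt;N&gt;") is an assumption; any consistent tag string would do.

```python
# Minimal sketch of the tagging scheme described above; the "<G>"/"<N>"
# token format is an assumption.
def add_tag(source: str, target: str) -> str:
    tag = "<G>" if source != target else "<N>"
    return f"{tag} {source}"

print(add_tag("Estoy cansado.", "Estoy cansada."))            # <G> Estoy cansado.
print(add_tag("Es muy interesante.", "Es muy interesante."))  # <N> Es muy interesante.
```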
{
"text": "\u2022 \"NMT-T\": neutrality/re-genderability tags are available for the test sets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Rewriter",
"sec_num": "5"
},
{
"text": "\u2022 \"NMT-AT\": the tags are not available for the test sets (a realistic scenario) and therefore are assigned automatically by the gender classifier described in the next section (which is similar to the approach described in (Habash et al., 2019) .)",
"cite_spans": [
{
"start": 223,
"end": 244,
"text": "(Habash et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Rewriter",
"sec_num": "5"
},
{
"text": "In order to explore potential benefits of automatic pre-classification for automatic rewriting, a classifier to distinguish between 're-genderable' (G) 11 and 'neutral' (N) 12 sentences was also designed. The tags generated by this classifier were used to assess the performance of the \"NMT-AT\" re-writer by appending them to the sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender Classifier",
"sec_num": "5.1"
},
{
"text": "The classifier was built on the data set of about 8 000 sentences provided by the industry partner. These sentences were balanced in both directions i.e., both masculine-to-feminine as well as feminine-to-masculine counterparts of a given sentence were present and labelled as G. The rest of the sentences were labeled as N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "For the sake of designing a generalised classifier, the development set consisted of sentences from the OpenSubtitles corpus (and was the same as the development set used for the NMT system).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "The final classifier was tested on two different test sets -one consisted of the 22 000 conversational sentences sourced from the industry partner and another extracted from the OpenSubtitles corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "Following on the work of Habash et al. (2019) for the gender identification step, features using character n-grams, word n-grams and morphological information were created from the training data. To begin with, TF-IDF scores of character n-grams of length 4-7 with maximum features capped at 20 000 and of word n-grams of length 1-3 were generated. These two feature matrices were joined together along with a morphological feature that denoted the presence of a gendered word in the sentence. The resulting training data was a high dimensional data frame with around 40 000 features.",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "Habash et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "Due to the limited size of the training set, neural network based classifiers were ruled out. Instead, owing to the high dimensional nature of the data, we used a SVM based classifier for training. All the Industry Test Set OpenSubs Acc. Rec. Prec. Acc. Rec. Prec. Overall 82% --80% --G -96% 60% -97% 76% N -76% 98% -56% 93% Table 2 : Gender Classifier Results steps described in this section were implemented in Python 3.7 using sklearn 13 , pandas 14 and Stan-zaNLP 15 libraries.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
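A hedged scikit-learn sketch of the feature pipeline and classifier described above follows. The tiny toy data, the gendered-word lexicon used for the morphological feature, and the choice of LinearSVC as the specific SVM variant are assumptions; the paper only states that TF-IDF character n-grams (length 4-7, capped at 20 000 features), word n-grams (length 1-3) and a morphological feature were combined and fed to an SVM-based classifier.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# toy data and a hypothetical gendered-word lexicon (the paper derives the
# morphological feature with StanzaNLP)
sentences = ["Estoy cansado.", "Es muy interesante.", "Estoy cansada.", "Eso es bueno."]
labels = ["G", "N", "G", "N"]
gendered_lexicon = {"cansado", "cansada", "listo", "lista"}

char_vec = TfidfVectorizer(analyzer="char", ngram_range=(4, 7), max_features=20000)
word_vec = TfidfVectorizer(analyzer="word", ngram_range=(1, 3))

def morph_feature(sents):
    # binary feature: does the sentence contain a word from the gendered lexicon?
    return csr_matrix(np.array(
        [[float(any(w.strip(".,!?¿¡").lower() in gendered_lexicon
                    for w in s.split()))] for s in sents]))

def featurise(sents, fit=False):
    if fit:
        return hstack([char_vec.fit_transform(sents),
                       word_vec.fit_transform(sents),
                       morph_feature(sents)])
    return hstack([char_vec.transform(sents),
                   word_vec.transform(sents),
                   morph_feature(sents)])

# ~40 000 columns on the real data; only a handful on this toy example
clf = LinearSVC().fit(featurise(sentences, fit=True), labels)
print(clf.predict(featurise(["¿Estás listo?"])))   # classify a new sentence
```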
{
"text": "The SVM based classifier was tested on two sets of data as described in Section 5.1. This was done in order to assess the generalisability of the classifier. Given the small size of the training data, the performance of the classifier looks promising thus far (see Table 2 ). It can be observed in Table 2 that the classifier clearly performs better on the test data set consisting of sentences sourced from the industry partner as compared to the data extracted from OpenSubtitles. While the accuracy is comparable on both sets ( 80%), the precision and recall of neutral sentences is higher on the industry data than the set compiled from OpenSubtitles data. The high recall of sentences labelled as G implies that the classifier is almost always successful at recognising sentences that need to be re-gendered (i.e. sentences that need an alternative variant). However, it incorrectly predicts the labels of a substantial number of N-labelled sentences, which in turn results in a low precision of re-genderable sentences. As we want to avoid generating (incorrect) gender alternatives for neutral sentences, our aim was to first attain a high precision for neutral sentences and then aim towards a high recall for the same. The tags generated by this classifier for the industry sourced data and OpenSubtitles data were used to test the \"NMT-AT\" rewriter.",
"cite_spans": [],
"ref_spans": [
{
"start": 265,
"end": 272,
"text": "Table 2",
"ref_id": null
},
{
"start": 298,
"end": 305,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Precision and Recall",
"sec_num": null
},
{
"text": "Our first experiment consisted of using the implementation of CDA by (Zmigrod et al., 2019) to generate gendered variants. However, this work only tackled animate nouns, which rarely occur in the conversational sentences we investigated in this work. Our re-implementation of their approach generated the correct gender variant for only 1% of the sentences. Because of the very low recall, this implementation was not directly applicable for our research. In addition to this, since our work aims to tackle multiple gender related word classes, we explored extending the implementation by augmenting the list with character adjectives. On doing so, we found that this implementation generated the correct gendered variant in only 9% of the cases. An important point to note is that 3% of the neutral sentences (for which variants should not have been generated) were also converted as opposed to the 1% with only animate nouns, attributed to the presence of more words in the hand-crafted lists. In order to cover more words and improve the performance of this implementation on our data set, we considered augmenting the hand-crafted list with past participles and/or clitic pronouns. However, that increased the size of the list exponentially and made the approach prone to errors, inefficient and not scalable to other languages.",
"cite_spans": [
{
"start": 69,
"end": 91,
"text": "(Zmigrod et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results for generating gender variants",
"sec_num": "6"
},
{
"text": "The results in the form of error rates are shown in 3. Since we are not performing typical machine translation, namely converting one language into another one, but only converting a few words in the sentence into a sentence in the same language, these error rates are not related to any of the typical automatic evaluation metrics (such as TER, etc.) but to the amount of incorrectly converted words. For each system, numbers in the left column represent the count of incorrectly converted words normalised by the total number of sentences, while numbers in the right column represent the count of incorrectly converted words normalised by the total number of words in the corpus. The numbers in the first row and first two columns can be interpreted as follows: left: 6.4% of all sentences have incorrectly converted words in ; right: 1.50% of all words are incorrectly converted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic evaluation of neural rewriter",
"sec_num": "6.1"
},
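The two error rates can be computed as sketched below, assuming each system output is compared word by word against the corresponding reference variant (a simplification of the verification actually carried out; the function name is illustrative).

```python
# Sketch of the two error rates defined above: incorrectly converted words
# normalised by the number of sentences (left column) and by the total number
# of words (right column).
def error_rates(outputs, references):
    wrong_words, total_words = 0, 0
    for out, ref in zip(outputs, references):
        out_toks, ref_toks = out.split(), ref.split()
        wrong_words += sum(o != r for o, r in zip(out_toks, ref_toks))
        total_words += len(ref_toks)
    per_sentence = 100.0 * wrong_words / len(references)
    per_word = 100.0 * wrong_words / total_words
    return per_sentence, per_word

outs = ["Estoy cansada .", "Es muy interesante ."]
refs = ["Estoy cansado .", "Es muy interesante ."]
print(error_rates(outs, refs))  # one wrong word over 2 sentences / 7 words: (50.0, ~14.3)
```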
{
"text": "First, it can be noted that the error rates are lower for the template-based \"in-domain\" test sets than for the unstructured \"out-of-domain\" test sets, which is in line with our expectations. The change in error rate is mainly due to discrepancies in the re-genderable segments. The error rates in the neutral segments are comparable in the out-of-domain and in-domain test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic evaluation of neural rewriter",
"sec_num": "6.1"
},
{
"text": "Adding manual tags indicating whether a sen-tence should get a gender alternative or not (e.g. 'neutral' vs 'regenderable') reduces the error rates on all test sets for both types of segments. A similar performance can not be achieved by adding automatic tags. Automatic tags deteriorate the performance on neutral segments, but reduce the error rates for re-genderable segments, especially for the unstructured \"out-of-domain\" test set. The manually tagged results indicate the potential of a classifier. These results tie up with the results of the gender classifier (Section 5.1) which is good at classifying the re-genderable sentences as denoted by a high recall of sentences labelled 'G', however it doesn't do very well at labelling neutral sentences as 'N'. It tends to mislabel many of those sentences as 'G', resulting in a low recall and, consequently, incorrect re-gendering. For the sake of completeness, error rates are reported for the rule-based rewriter, too. The error rates for re-genderable sentences are lower than the NMT rewriter without tags and for neutral sentences the error rate is 0%; it should be noted that the rules are applicable only to data sets which strictly conform to the described template structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic evaluation of neural rewriter",
"sec_num": "6.1"
},
{
"text": "In order to better understand the nature of errors and remaining challenges, a qualitative manual inspection was carried out on all test sets. First of all, it is observed that in general, the NMT re-writer does not intervene on large portions of a sentence but addresses only specific words, which is exactly what it is expected to do. This is a positive result, as generating gender variants implies changing specific gendered words and does not involve changing entire segments. It also facilitates the evaluation since manual inspection is needed only to identify the nature of incorrect words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative manual inspection of errors",
"sec_num": "6.2"
},
{
"text": "The analysis revealed that the most frequent error for neutral sentences are re-gendered pronouns and adjectives which should not be changed. Also, the most frequent error in re-genderable sentences is leaving them unchanged. These types of errors are predominant in structured sentences, and two examples, one for neutral and one for regenderable sentence, can be seen in Table 4 (a). It can also be seen that adding tags can help in some cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 373,
"end": 380,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Qualitative manual inspection of errors",
"sec_num": "6.2"
},
{
"text": "For unstructured sentences, there are more error types especially for neutral sentences, and examples can be seen in Table 4 (b). In the first three set type NMT NMT-T NMT-AT rules test all 6.4 1.50 4.5 1.03 17.9 4.21 6.1 1.43 (structured) neutral 5.3 1.13 2.5 0.48 33.3 7.07 0.0 0.0 re-genderable 7.1 1.81 6.0 1.51 6.0 1.72 6.1 1.43 test1 all 2.4 0.54 1.3 0.27 4.5 0.99 3.2 0.7 (structured) neutral 4.8 0.95 2.2 0.43 8.7 1.73 0.0 0.0 re-genderable 0.8 0.19 0.6 0.14 1.6 0.38 3.2 0.7 test2 all 11.9 2.13 5.2 0.93 10.4 1.87 not (unstructured) neutral 3.3 0.58 0.3 0.04 6.0 1.07 applicable re-genderable 57.3 10.7 31.1 5.84 33.4 6.26 sentences, the same error type as for structured sentences can be seen, namely some words are changed which should not be changed. Adding tags helped in both cases. However, some other error types can be seen, such as converting some (not gender-related) words into non-existing words in sentences 4) and 5). For sentence 5), generating a non-existing word was triggered by adding tags. Sentence 6) shows an unnecessary re-gendering as well as adding non-existing words. This was also resolved by adding tags. In sentence 7), a word which is not at all related to gender was converted, and this was prevented by adding tags.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Qualitative manual inspection of errors",
"sec_num": "6.2"
},
{
"text": "As for regenderable sentences, the vast majority of errors are again the unchanged words which had to be changed. If there is more than one word to be regendered, sometimes they all remain unchanged (sentence 8) and sometimes only some of them are regendered (sentence 9). Tags can help to some extent, but only for some words, not all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative manual inspection of errors",
"sec_num": "6.2"
},
{
"text": "Adding tags generated by the classifier also increases the number of correctly re-gendered structures at the cost of a small number of additions of non-existing words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative manual inspection of errors",
"sec_num": "6.2"
},
{
"text": "In this paper, we describe an initial approach towards enriching short conversational sentences with their gender variants. Unlike other related work, our approach is not limited to tackling the first person singular phenomena, swapping third person pronouns or merely dealing with occupa-tional or generally animate nouns. In addition, with our approach, the reliance on linguistic knowledge and tools is kept to a minimum in order to facilitate real-world deployment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "The main hurdle for this type of research is the absence of large training sets. Although provided with some manually annotated data from the industry partner, the data provided was far from sufficient to train a state-of-the-art automatic gender re-writer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "Therefore, training data was extracted from OpenSubtitles using linguistic knowledge about the targeted language, namely Spanish. Re-genderable types of words (POS classes) were identified and then frequently occurring 're-genderable' as well as 'neutral' POS patterns were extracted. By applying the corresponding rules to the re-genderable sentences, a large gender-parallel Spanish data set was compiled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "Next, an NMT rewriter was trained in order to 'translate' each re-genderable sentence into its gender alternative which showed promising performance both in terms of automatic as well as of manual evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "In addition, it is shown that providing additional information regarding the need for rewriting in the form of tags could be helpful for the NMT system, as similar tags have shown to be useful for other applications such as multilingual translation, controlling politeness and gender in MT, etc. While gold standard labels show better performance than the labels generated by the gender classifier, the classifier shows promising results given the very small training set. Further experiments should investigate a classifier trained on larger amount of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "In future work, we would like to explore how a similar approach can be applied on more sentence structures in Spanish, as well as for different languages which exhibit distinct gendering rules. Furthermore, different NMT architectures, e.g. character-level NMT or an NMT system with linguistically motivated subword units could be an interesting extension to the conducted experiments, given that gender is usually marked by specific morphemes (usually not more than one or two specific characters). In addition to that, the performance of the gender classifier can be improved to produce more accurate tags by using larger annotated training sets, adding more morphological information in features and using word embeddings instead of TF-IDF scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "English: \"Is it complete?\" 2 English: \"I am confused.\" 3 https://opus.nlpl.eu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Different types of bias exist, however, the current approaches have focused on gender, possibly because many languages have explicit gender markers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "'I am a teacher' or 'I am smart' in English are not marked for gender. However, in many other languages they would be morphologically marked for the male or female gender (e.g. French, Spanish...).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example, sentences such as \"I am happy and they are angry.\" are not covered by our approach as both 'happy' and 'angry' are in agreement but with different referents, 'I' and 'they' respectively. Such sentences would require the generation of more than two alternatives since both referents are ambiguous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Assuming that the sentences are short-this approach would not generalize to longer sentences 8 http://opus.nlpl.eu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/awslabs/sockeye",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Grammatical gender markings are not related to a referent within the sentence, therefore these markings have to be expanded.12 No gender markers that need to be expanded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://scikit-learn.org/stable/ 14 https://pandas.pydata.org/ 15 https://stanfordnlp.github.io/stanza/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Syntactic and cognitive issues in investigating gendered coreference",
"authors": [
{
"first": "Lauren",
"middle": [],
"last": "Ackerman",
"suffix": ""
}
],
"year": 2019,
"venue": "Glossa: a journal of general linguistics",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauren Ackerman. 2019. Syntactic and cognitive is- sues in investigating gendered coreference. Glossa: a journal of general linguistics, 4(1).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Extensive study on the underlying gender bias in contextualized word embeddings",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Basta",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Noe",
"middle": [],
"last": "Casas",
"suffix": ""
}
],
"year": 2020,
"venue": "Neural Computing and Applications",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Basta, Marta R Costa-juss\u00e0, and Noe Casas. 2020. Extensive study on the underlying gender bias in contextualized word embeddings. Neural Com- puting and Applications, pages 1-14.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Gender in danger? evaluating speech translation technology on the must-she corpus",
"authors": [
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Savoldi",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Mattia Antonino Di",
"middle": [],
"last": "Gangi",
"suffix": ""
},
{
"first": "Roldano",
"middle": [],
"last": "Cattoni",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.05754"
]
},
"num": null,
"urls": [],
"raw_text": "Luisa Bentivogli, Beatrice Savoldi, Matteo Negri, Mat- tia Antonino Di Gangi, Roldano Cattoni, and Marco Turchi. 2020. Gender in danger? evaluating speech translation technology on the must-she cor- pus. arXiv preprint arXiv:2006.05754.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Language (technology) is power: A critical survey of \"bias\" in NLP",
"authors": [
{
"first": "Su Lin",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Solon",
"middle": [],
"last": "Barocas",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5454--5476",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.485"
]
},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of \"bias\" in NLP. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5454- 5476, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An analysis of gender bias studies in natural language processing",
"authors": [
{
"first": "",
"middle": [],
"last": "Marta R Costa-Juss\u00e0",
"suffix": ""
}
],
"year": 2019,
"venue": "Nature Machine Intelligence",
"volume": "",
"issue": "",
"pages": "1--2",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta R Costa-juss\u00e0. 2019. An analysis of gender bias studies in natural language processing. Nature Ma- chine Intelligence, pages 1-2.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic gender identification and reinflection in arabic",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Chung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "155--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash, Houda Bouamor, and Christine Chung. 2019. Automatic gender identification and reinflec- tion in arabic. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 155-165.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The sockeye neural machine translation toolkit at AMTA 2018",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hieber",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Clifton",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas",
"volume": "1",
"issue": "",
"pages": "200--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. The sockeye neural machine translation toolkit at AMTA 2018. In Proceedings of the 13th Conference of the Association for Machine Transla- tion in the Americas (Volume 1: Research Track), pages 200-207, Boston, MA. Association for Ma- chine Translation in the Americas.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2017,
"venue": "In Transactions of the Association of Computational Linguistics",
"volume": "5",
"issue": "1",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Cor- rado, et al. 2017. Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. In Transactions of the Association of Computational Linguistics, Volume 5:1, pages 339- 351, Vancouver, Canada.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Gender Bias in Natural Language Processing",
"authors": [
{
"first": "Kaiji",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Mardziel",
"suffix": ""
},
{
"first": "Fangjing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Preetam",
"middle": [],
"last": "Amancharla",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Datta",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.11714"
]
},
"num": null,
"urls": [],
"raw_text": "Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gen- der Bias in Natural Language Processing. In arXiv:1807.11714.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)",
"volume": "",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (ACL 2016), pages 1715-1725, Berlin, Ger- many.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Representation of the sexes in language",
"authors": [
{
"first": "Dagmar",
"middle": [],
"last": "Stahlberg",
"suffix": ""
},
{
"first": "Friederike",
"middle": [],
"last": "Braun",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Irmen",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Sczesny",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "163--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dagmar Stahlberg, Friederike Braun, Lisa Irmen, and Sabine Sczesny. 2007. Representation of the sexes in language. Social communication, pages 163-187.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Parallel Data, Tools and Interfaces in OPUS",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "2214--2218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel Data, Tools and In- terfaces in OPUS. In Proceedings of the Eight In- ternational Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Getting gender right in neural machine translation",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Vanmassenhove",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "3003--3008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2019. Getting gender right in neural machine translation. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3003-3008.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Machine translationese: Effects of algorithmic bias on linguistic complexity in machine translation",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Vanmassenhove",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Shterionov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gwilliam",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "2203--2213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Vanmassenhove, Dimitar Shterionov, and Matthew Gwilliam. 2021. Machine translationese: Effects of algorithmic bias on linguistic complexity in machine translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Com- putational Linguistics: Main Volume, pages 2203- 2213.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Attention is All you Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of The Thirty-first Annual Conference on Neural Information Processing Systems 30 (NIPS)",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Proceedings of The Thirty-first Annual Conference on Neural Information Processing Sys- tems 30 (NIPS), pages 5998-6008, Long Beach, CA, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning Gender-Neutral Word Embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4847--4853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018. Learning Gender-Neutral Word Embeddings. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 4847-4853, Brussels, Belgium.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "AI Can be Sexist and Racist -It's Time to Make it Fair",
"authors": [
{
"first": "J",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schiebinger",
"suffix": ""
}
],
"year": 2018,
"venue": "Nature",
"volume": "559",
"issue": "",
"pages": "324--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Zhou and L Schiebinger. 2018. AI Can be Sexist and Racist -It's Time to Make it Fair. In Nature 559, pages 324-326.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)",
"volume": "",
"issue": "",
"pages": "1651--1661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ran Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmen- tation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics (ACL 2019), pages 1651-1661, Florence, Italy.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Statistics of data used for building the NMT rewriter.",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">(a) structured sentences</td></tr><tr><td/><td colspan=\"2\">type original</td><td>correct</td><td>NMT</td><td>NMT-T</td></tr><tr><td/><td>N</td><td colspan=\"4\">esto es perfecto esto es perfecto esto es perfecta esto es perfecto</td></tr><tr><td/><td>G</td><td>est\u00e1 adjunto</td><td>est\u00e1 adjunta</td><td>est\u00e1 adjunto</td><td>est\u00e1 adjunto</td></tr><tr><td/><td/><td/><td colspan=\"2\">(b) unstructured sentences</td></tr><tr><td colspan=\"2\">type original</td><td/><td>correct</td><td>NMT</td><td>NMT-T</td></tr><tr><td>1) N</td><td colspan=\"2\">no son lo mismo</td><td>no son lo mismo</td><td>no son la misma</td><td>no son lo mismo</td></tr><tr><td>2) N</td><td colspan=\"5\">aquello fue encantador aquello fue encantador aquello fue encantadora aquello fue encantador</td></tr><tr><td>3) N</td><td colspan=\"2\">\u00bfa qui\u00e9n aprovecha?</td><td>\u00bfa qui\u00e9n aprovecha?</td><td colspan=\"2\">\u00bfa qui\u00e9n aprovecho?</td><td>\u00bfa qui\u00e9n aprovecha?</td></tr><tr><td>4) N</td><td colspan=\"2\">ind\u00edqueme la</td><td>ind\u00edqueme la</td><td>ind\u00edqueme la</td><td>ind\u00edqueme la</td></tr><tr><td/><td colspan=\"2\">disponibilidad</td><td>disponibilidad</td><td>emperbilidad</td><td>evelbilidad</td></tr><tr><td>5) N</td><td colspan=\"2\">ind\u00edqueme su</td><td>ind\u00edqueme su</td><td>ind\u00edqueme su</td><td>ind\u00edqueme su</td></tr><tr><td/><td colspan=\"2\">disponibilidad</td><td>disponibilidad</td><td>disponibilidad</td><td>escorpibilidad</td></tr><tr><td>6) N</td><td colspan=\"2\">unos momentos</td><td>unos momentos</td><td>unos momentos</td><td>unos momentos</td></tr><tr><td/><td colspan=\"2\">extraordinarios</td><td>extraordinarios</td><td colspan=\"2\">extraordinarias arios</td><td>extraordinarios</td></tr><tr><td>7) N</td><td colspan=\"2\">ind\u00edquenos cu\u00e1nto</td><td>ind\u00edquenos cu\u00e1nto</td><td colspan=\"2\">ind\u00edquenas cu\u00e1nto</td><td>ind\u00edquenos cu\u00e1nto</td></tr><tr><td>8) G</td><td colspan=\"2\">esta es la adecuada</td><td>este es el adecuado</td><td colspan=\"2\">esta es la adecuada</td><td>esta es lo adecuada</td></tr><tr><td>9) G</td><td colspan=\"2\">esta la hemos recibido</td><td colspan=\"3\">este lo hemos recibido esta la hemos recibido</td><td>esta lo hemos recibido</td></tr></table>",
"type_str": "table",
"text": "Results for NMT rewriter: error rates (%): count of incorrectly converted words normalised by the total number of sentences (left columns) and normalised by the total number of words (right columns).",
"html": null
},
"TABREF4": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Examples of incorrectly generated sentence variants for (a) structured sentences and (b) unstructured sentences.",
"html": null
}
}
}
}