{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:05:56.506173Z" }, "title": "Gender and Representation Bias in GPT-3 Generated Stories", "authors": [ { "first": "Li", "middle": [], "last": "Lucy", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Berkeley" } }, "email": "lucy3_li@berkeley.edu" }, { "first": "David", "middle": [], "last": "Bamman", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Berkeley" } }, "email": "dbamman@berkeley.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Using topic modeling and lexicon-based word similarity, we find that stories generated by GPT-3 exhibit many known gender stereotypes. Generated stories depict different topics and descriptions depending on GPT-3's perceived gender of the character in a prompt, with feminine characters 1 more likely to be associated with family and appearance, and described as less powerful than masculine characters, even when associated with high power verbs in a prompt. Our study raises questions on how one can avoid unintended social biases when using large language models for storytelling.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Using topic modeling and lexicon-based word similarity, we find that stories generated by GPT-3 exhibit many known gender stereotypes. Generated stories depict different topics and descriptions depending on GPT-3's perceived gender of the character in a prompt, with feminine characters 1 more likely to be associated with family and appearance, and described as less powerful than masculine characters, even when associated with high power verbs in a prompt. Our study raises questions on how one can avoid unintended social biases when using large language models for storytelling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Advances in large language models have allowed new possibilities for their use in storytelling, such as machine-in-the-loop creative writing (Clark et al., 2018; Kreminski et al., 2020; Akoury et al., 2020) and narrative generation for games (Raley and Hua, 2020) . However, fictional stories can reinforce real stereotypes, and artificially generated stories are no exception. Language models mimic patterns in their training data, parroting or even amplifying social biases (Bender et al., 2021 ).", "cite_spans": [ { "start": 141, "end": 161, "text": "(Clark et al., 2018;", "ref_id": "BIBREF18" }, { "start": 162, "end": 185, "text": "Kreminski et al., 2020;", "ref_id": "BIBREF31" }, { "start": 186, "end": 206, "text": "Akoury et al., 2020)", "ref_id": "BIBREF2" }, { "start": 242, "end": 263, "text": "(Raley and Hua, 2020)", "ref_id": "BIBREF39" }, { "start": 476, "end": 496, "text": "(Bender et al., 2021", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An ongoing line of research examines the nature and effects of these biases in natural language generation (Sheng et al., 2020; Wallace et al., 2019; Shwartz et al., 2020) . Language models generate different occupations and levels of respect for different genders, races, and sexual orientations (Sheng et al., 2019; Kirk et al., 2021) . Abid et al. 
(2021) showed that GPT-3's association of Muslims with violence can be difficult to diminish, even when prompts include anti-stereotype content.", "cite_spans": [ { "start": 107, "end": 127, "text": "(Sheng et al., 2020;", "ref_id": "BIBREF43" }, { "start": 128, "end": 149, "text": "Wallace et al., 2019;", "ref_id": "BIBREF50" }, { "start": 150, "end": 171, "text": "Shwartz et al., 2020)", "ref_id": "BIBREF45" }, { "start": 297, "end": 317, "text": "(Sheng et al., 2019;", "ref_id": "BIBREF44" }, { "start": 318, "end": 336, "text": "Kirk et al., 2021)", "ref_id": null }, { "start": 339, "end": 357, "text": "Abid et al. (2021)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work focuses on representational harms in generated narratives, especially the reproduction of gender stereotypes found in film, television, and books. We use GPT-3, a large language model that has been released as a commercial product and thus has potential for wide use in narrative generation tasks (Brown et al., 2020; Brockman et al., 2020; Scott, 2020; Elkins and Chun, 2020; Branwen, 2020). Our experiments compare GPT-3's stories with literature as a form of domain control, using generated stories and book excerpts that begin with the same sentence.", "cite_spans": [ { "start": 653, "end": 673, "text": "(Brown et al., 2020;", "ref_id": null }, { "start": 674, "end": 696, "text": "Brockman et al., 2020;", "ref_id": "BIBREF14" }, { "start": 697, "end": 709, "text": "Scott, 2020;", "ref_id": "BIBREF42" }, { "start": 710, "end": 732, "text": "Elkins and Chun, 2020;", "ref_id": "BIBREF19" }, { "start": 733, "end": 747, "text": "Branwen, 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: Two GPT-3 continuations of the same prompt, gendering the main character differently: \"Douloti understood some and didn't understand some. But he didn't care to understand. It was enough for him to know the facts of the situation and why his mother had left ...\" versus \"Douloti understood some and didn't understand some. But more, she could tell that Nenn had sympathy for one who had given up life. Sister Nenn went on with her mending ...\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We examine the topic distributions of books and GPT-3 stories, as well as the amount of attention given to characters' appearances, intellect, and power. We find that GPT-3's stories tend to include more masculine characters than feminine ones (mirroring a similar tendency in books), and identical prompts can lead to topics and descriptions that follow social stereotypes, depending on the prompt character's gender. Stereotype-related topics in prompts tend to persist further in a story if the character's gender aligns with the stereotype. Finally, using prompts containing different verbs, we are able to steer GPT-3 towards more intellectual, but not more powerful, characters. Code and materials to support this work can be found at https://github.com/lucy3/gpt3_gender.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our prompts are single sentences containing main characters sampled from 402 English contemporary fiction books, which include texts from the Black Book Interactive Project, global Anglophone fiction, Pulitzer Prize winners, and bestsellers reported by Publishers Weekly and the New York Times. We use BookNLP to find main characters and sentences containing them (Bamman et al., 2014).
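As a rough sketch of the selection step defined in the rest of this section (assuming BookNLP-style per-character mention counts and tokenized candidate sentences; all function and variable names here are hypothetical, and the narration-only and single-name checks are omitted for brevity):

```python
from typing import Dict, List

GENDERED = {"he", "him", "his", "she", "her", "hers"}

def select_prompts(mention_counts: Dict[str, int],
                   sentences: Dict[str, List[List[str]]]) -> Dict[str, List[str]]:
    """Pick main characters and candidate prompt sentences for one book."""
    # Main characters: top 2% most frequent, mentioned at least 50 times.
    ranked = sorted(mention_counts, key=mention_counts.get, reverse=True)
    cutoff = max(1, round(0.02 * len(ranked)))
    mains = [c for c in ranked[:cutoff] if mention_counts[c] >= 50]

    prompts = {}
    for char in mains:
        kept = []
        for tokens in sentences[char]:
            if len(tokens) <= 3:  # prompts must be longer than 3 tokens
                continue
            if GENDERED & {t.lower() for t in tokens}:
                continue          # no feminine or masculine pronouns
            kept.append(" ".join(tokens))
        prompts[char] = kept      # 10 of these are sampled per character
    return prompts
```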
We define a main character as someone who is within their book's top 2% most frequent characters and mentioned at least 50 times. Every prompt is longer than 3 tokens, does not contain feminine or masculine pronouns, is from the main narrative and not dialogue, and contains only one single-token character name. This results in 2154 characters, with 10 randomly selected prompts each.", "cite_spans": [ { "start": 366, "end": 387, "text": "(Bamman et al., 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "We use the GPT-3 API to obtain 5 text completions per prompt, with the davinci model, a temperature of 0.9, and a limit of 1800 tokens. A high temperature is often recommended to yield more \"creative\" responses (Alexeev, 2020; Branwen, 2020). We also pull excerpts that begin with each prompt from the original books, where each excerpt's length is the average length of the stories generated by that prompt. This human-authored text provides a control that contains the same main character names and initial content as the GPT-3 data. The collection of generated stories contains over 161 million tokens, and the set of book excerpts contains over 32 million tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "We use BookNLP's tokenizer and dependency parser on our data (Underwood et al., 2018; Bamman et al., 2014), followed by coreference resolution on named entities, using a model trained on the literary coreference annotations of Bamman et al. (2020). Pronoun chains containing the same character name within the same story are combined.", "cite_spans": [ { "start": 216, "end": 236, "text": "Bamman et al. (2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Text processing methods", "sec_num": "3" }, { "text": "Depending on the context, gender may refer to a person's self-determined identity, how they express their identity, how they are perceived, and others' social expectations of them (Cao and Daum\u00e9 III, 2020; Ackerman, 2019). Gender inference raises many ethical considerations and carries a risk of harmful misgendering, so it is best to have individuals self-report their gender (Larson, 2017). However, fictional characters typically do not state their genders in machine-generated text, and GPT-3 may gender a character differently from the original book. Our study focuses on how GPT-3 may perceive a character's gender based on textual features.", "cite_spans": [ { "start": 378, "end": 392, "text": "(Larson, 2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Gender inference", "sec_num": "3.1" }, { "text": "Thus, we infer conceptual gender, or gender used by a perceiver, which may differ from the gender experienced internally by an individual being perceived (Ackerman, 2019).", "cite_spans": [ { "start": 154, "end": 170, "text": "(Ackerman, 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Gender inference", "sec_num": "3.1" }, { "text": "First, we use a character's pronouns (he/him/his, she/her/hers, their/theirs) as a rough heuristic for gender. For book character gender, we aggregate pronouns for characters across all excerpts, while for generated text, we assign gender on a per-story basis. Since coreference resolution can be noisy, we label a character as feminine if at least 75% of their pronouns are she/her, and a character as masculine if at least 75% of their pronouns are he/his.
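A minimal sketch of this thresholding, assuming pronoun counts aggregated from coreference chains (the input format is hypothetical):

```python
from collections import Counter
from typing import Optional

FEM = {"she", "her", "hers"}
MASC = {"he", "him", "his"}

def pronoun_gender(pronouns: Counter, threshold: float = 0.75) -> Optional[str]:
    """Label a character's conceptual gender from its pronoun distribution."""
    total = sum(pronouns.values())
    if total == 0:
        return None  # no pronouns: fall back to honorifics and name lists below
    if sum(pronouns[p] for p in FEM) / total >= threshold:
        return "feminine"
    if sum(pronouns[p] for p in MASC) / total >= threshold:
        return "masculine"
    return None  # mixed or they/their-dominant chains are left unlabeled

# e.g. pronoun_gender(Counter({"she": 9, "her": 5, "he": 1})) -> "feminine"
```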
The use of pronouns as the primary gendering step labels the majority of main characters (Figure 2). This approach has several limitations. Gender and pronoun use can be fluid, but we do not determine which cases of mixed-gender pronouns are gender fluidity rather than coreference error. Coreference models are also susceptible to gender biases (Rudinger et al., 2018), and they are not inclusive of nonbinary genders and pronouns (Cao and Daum\u00e9 III, 2020).", "cite_spans": [ { "start": 807, "end": 830, "text": "(Rudinger et al., 2018)", "ref_id": "BIBREF40" } ], "ref_spans": [ { "start": 550, "end": 558, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Gender inference", "sec_num": "3.1" }, { "text": "Out of 734,560 characters, 48.3% have no pronouns. For these characters, we perform a second step of estimating expected conceptual gender by name, first using a list of gendered honorifics if they appear. 2 Then, if a name has no pronouns or honorifics, we use U.S. birth names from 1990 to 2019 (Social Security Administration, 2020), labeling a name with a gender if at least 90% of births with that name have that gender. This step also has limitations. The gender categories of names are not exact, and the association between a name and gender can change over time (Blevins and Mullen, 2015). Some cultures do not commonly gender names, and U.S. name lists do not always generalize to names from other countries. Still, humans and NLP models associate many names with gender and, consequently, with gender stereotypes (Bjorkman, 2017; Caliskan et al., 2017; Nosek et al., 2002; Moss-Racusin et al., 2012). We assume that GPT-3 also draws on social connotations when generating and processing names. We hope that future work can further improve the respectful measurement of gender in fiction.", "cite_spans": [ { "start": 559, "end": 585, "text": "(Blevins and Mullen, 2015)", "ref_id": "BIBREF11" }, { "start": 829, "end": 851, "text": "Caliskan et al., 2017;", "ref_id": "BIBREF16" }, { "start": 852, "end": 871, "text": "Nosek et al., 2002;", "ref_id": "BIBREF38" }, { "start": 872, "end": 898, "text": "Moss-Racusin et al., 2012)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Gender inference", "sec_num": "3.1" }, { "text": "Both book excerpts and generated stories are more likely to contain masculine characters, and in those whose prompt features a feminine main character, the gap between feminine and masculine characters is slightly smaller (Figure 3). This pattern persists even when only looking at pronoun-gendered characters, who are referred to multiple times and are likely to play larger roles. Our results echo previous work showing that English literature pays more attention to men in text (Underwood et al., 2018; Kraicer and Piper, 2018; Johns and Dye, 2019).", "cite_spans": [], "ref_spans": [ { "start": 219, "end": 228, "text": "(Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Gender inference", "sec_num": "3.1" }, { "text": "Prompts containing main characters of different genders may also contain different content, which can introduce confounding factors when isolating the effect of perceived gender on generated stories. We also run all our experiments on a subset of 7334 paired GPT-3 stories. None of these prompts contain gendered pronouns, and each prompt is used to generate multiple stories. GPT-3 may assign different gender pronouns to the main character in the same prompt across different stories (Table 1).
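A sketch of how such matched pairs can be collected, assuming the per-story pronoun labels from §3.1 (the story record fields are hypothetical):

```python
import random
from collections import defaultdict

def match_stories(stories):
    """Pair stories from the same prompt whose main character is
    pronoun-gendered differently across generations."""
    by_prompt = defaultdict(lambda: {"feminine": [], "masculine": []})
    for story in stories:
        if story["name_gendered"]:   # skip prompt characters gendered by name
            continue
        label = story["gender"]      # per-story pronoun label (Section 3.1)
        if label in ("feminine", "masculine"):
            by_prompt[story["prompt"]][label].append(story)
    pairs = []
    for group in by_prompt.values():
        random.shuffle(group["feminine"])
        random.shuffle(group["masculine"])
        pairs.extend(zip(group["feminine"], group["masculine"]))
    return pairs
```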
We find such cases and randomly pair stories with the same prompt, where one story has its main character associated with feminine pronouns and the other has them associated with masculine pronouns. In this setup, we exclude stories where the main character in the prompt is gendered by name.", "cite_spans": [], "ref_spans": [ { "start": 473, "end": 482, "text": "(Table 1)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Matched stories", "sec_num": "3.2" }, { "text": "Given this dataset of book excerpts and stories generated by GPT-3, we carry out several analyses to understand the representation of gender within them. We focus on overall content differences between stories containing prompt characters of different genders in this section, and on lexicon-based stereotypes in \u00a75.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic differences", "sec_num": "4" }, { "text": "Topic modeling is a common unsupervised method for uncovering coherent collections of words across narratives (Boyd-Graber et al., 2017; Goldstone and Underwood, 2014). We train latent Dirichlet allocation (LDA) on unigrams and bigrams from book excerpts and generated stories using MALLET, with 50 topics and default parameters. We remove character names from the text during training. For each topic t, we calculate \u2206T(t) = P(t|F) \u2212 P(t|M), where P(t|M) is the average probability of topic t occurring in stories with masculine main characters, and P(t|F) is the analogous value for feminine main characters. Table 1 shows that generated stories place masculine and feminine characters in different topics, and in the subset of matched GPT-3 stories, these differences still persist (Pearson r = 0.91, p < 0.001). Feminine characters are more likely to be discussed in topics related to family, emotions, and body parts, while masculine ones are more aligned with politics, war, sports, and crime. The differences in generated stories follow those seen in books (Pearson r = 0.84, p < 0.001). Prompts with the same content can still lead to different narratives that are tied to character gender, suggesting that GPT-3 has internally linked stereotypical contexts to gender. In previous work, GPT-3's predecessor GPT-2 also places women in caregiving roles (Kirk et al., 2021), and character tropes for women emphasize maternalism and appearance (Gala et al., 2020).", "cite_spans": [ { "start": 110, "end": 136, "text": "(Boyd-Graber et al., 2017;", "ref_id": "BIBREF12" }, { "start": 137, "end": 167, "text": "Goldstone and Underwood, 2014)", "ref_id": "BIBREF24" }, { "start": 1367, "end": 1386, "text": "(Kirk et al., 2021)", "ref_id": null }, { "start": 1457, "end": 1476, "text": "(Gala et al., 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 621, "end": 628, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Method", "sec_num": "4.1" }, { "text": "We also use our trained LDA model to infer topic probabilities for each prompt, and examine prompts with a high (> 0.15) probability of a topic with gender bias, such as politics or family. We chose this threshold using manual inspection; prompts that meet it tend to have at least one topic-related word in them. When prompts contain the family topic, the resulting story tends to continue or amplify that topic more so if the main character is feminine (Figure 4).
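A sketch of this persistence measurement, assuming prompt-level and story-level topic distributions inferred with the trained LDA model (the array layout and names are hypothetical):

```python
import numpy as np

def topic_persistence(prompt_topics, story_topics, genders, t, thresh=0.15):
    """Mean probability of topic t in stories whose prompt already has a
    high probability of t, split by the main character's gender."""
    genders = np.asarray(genders)
    keep = prompt_topics[:, t] > thresh  # e.g. family- or politics-heavy prompts
    return {g: story_topics[keep & (genders == g), t].mean()
            for g in ("feminine", "masculine")}
```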
The reverse occurs when prompts have a high probability of politics: the resulting story is more likely to continue the topic if the main character is masculine. So, even when characters appear in a prompt with anti-stereotypical content, it is difficult to generate stories whose topic probabilities reach the levels seen for characters of the stereotype-aligned gender.", "cite_spans": [], "ref_spans": [ { "start": 473, "end": 483, "text": "(Figure 4)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Now, we measure how much descriptions of characters correspond to a few established gender stereotypes. Men are often portrayed as strong, intelligent, and natural leaders (Smith et al., 2012; Sap et al., 2017; Fast et al., 2016b; Gala et al., 2020). Popular culture has increased its attention towards women in science, politics, academia, and law (Long et al., 2010; Inness, 2008; Flicker, 2003). Even so, depictions of women still foreground their physical appearances (Hoyle et al., 2019), and portray them as weak and less powerful (Fast et al., 2016b; Sap et al., 2017). Thus, our present study measures three dimensions of character descriptions: appearance, intellect, and power.", "cite_spans": [ { "start": 173, "end": 193, "text": "(Smith et al., 2012;", "ref_id": "BIBREF46" }, { "start": 194, "end": 211, "text": "Sap et al., 2017;", "ref_id": "BIBREF41" }, { "start": 212, "end": 231, "text": "Fast et al., 2016b;", "ref_id": "BIBREF21" }, { "start": 232, "end": 250, "text": "Gala et al., 2020)", "ref_id": "BIBREF23" }, { "start": 351, "end": 370, "text": "(Long et al., 2010;", "ref_id": "BIBREF33" }, { "start": 371, "end": 384, "text": "Inness, 2008;", "ref_id": "BIBREF26" }, { "start": 385, "end": 399, "text": "Flicker, 2003)", "ref_id": "BIBREF22" }, { "start": 475, "end": 495, "text": "(Hoyle et al., 2019)", "ref_id": "BIBREF25" }, { "start": 541, "end": 561, "text": "(Fast et al., 2016b;", "ref_id": "BIBREF21" }, { "start": 562, "end": 579, "text": "Sap et al., 2017)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Lexicon-based stereotypes", "sec_num": "5" }, { "text": "Words linked to people via linguistic dependencies can be used to analyze descriptions of people in text (Fast et al., 2016b; Hoyle et al., 2019; Lucy et al., 2020; Bamman et al., 2013; Sap et al., 2017). These words can be aligned with lexicons curated by human annotators, such as Fast et al. (2016b)'s categories of adjectives and verbs, which were used to measure gender stereotypes in online fiction. We train 100-dimensional word2vec embeddings (Mikolov et al., 2013) on lowercased generated stories and books with punctuation removed, using default parameters in the gensim Python package. We extract adjectives and verbs using the dependency relations nsubj and amod attached to main character names and their pronouns in non-prompt text. For masculine and feminine characters, we only use their gender-conforming pronouns.", "cite_spans": [ { "start": 105, "end": 125, "text": "(Fast et al., 2016b;", "ref_id": "BIBREF21" }, { "start": 126, "end": 145, "text": "Hoyle et al., 2019;", "ref_id": "BIBREF25" }, { "start": 146, "end": 164, "text": "Lucy et al., 2020;", "ref_id": "BIBREF34" }, { "start": 165, "end": 185, "text": "Bamman et al., 2013;", "ref_id": "BIBREF7" }, { "start": 186, "end": 203, "text": "Sap et al., 2017)", "ref_id": "BIBREF41" }, { "start": 284, "end": 303, "text": "Fast et al. 
(2016b)", "ref_id": "BIBREF21" }, { "start": 453, "end": 475, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "5.1" }, { "text": "To gather words describing appearance, we combine Fast et al. (2016b) 's lexicons for beautiful and sexual (201 words). For words related to intellect, we use Fast et al. (2016a) 's Empath categories containing the word intellectual (98 words). For measuring power, we take Fast et al. (2016b) 's lexicons for strong and dominant (113 words), and contrast them with a union of their lexicons for weak, dependent, submissive, and afraid (141 words).", "cite_spans": [ { "start": 50, "end": 69, "text": "Fast et al. (2016b)", "ref_id": "BIBREF21" }, { "start": 159, "end": 178, "text": "Fast et al. (2016a)", "ref_id": "BIBREF20" }, { "start": 274, "end": 293, "text": "Fast et al. (2016b)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "5.1" }, { "text": "Counting lexicon word frequency can overemphasize popular words (e.g. want) and exclude related words. Therefore, we calculate semantic similarity instead. For appearance and intellect, we compute the average cosine similarity of a verb or adjective to every word in each lexicon. For power, we take a different approach, because antonyms tend be close in semantic space (Mrk\u0161i\u0107 et al., 2016) . Previous work has used differences between antonyms to create semantic axes and compare words to these axes (Kozlowski et al., 2019; Turney and Littman, 2003; . Let a Figure 5 : Appearance, intellect, and power scores across genders in books and GPT-3-generated stories. Error bars are 95% confidence intervals. All differences between feminine and masculine characters are significant (Welch's t-test, p < 0.001), except for intellect in matched GPT-3 stories. be a word in the lexicon related to strength and b be a word embedding from the lexicon related to weakness. We use 's SEMAXIS to calculate word x's score:", "cite_spans": [ { "start": 371, "end": 392, "text": "(Mrk\u0161i\u0107 et al., 2016)", "ref_id": "BIBREF37" }, { "start": 503, "end": 527, "text": "(Kozlowski et al., 2019;", "ref_id": "BIBREF29" }, { "start": 528, "end": 553, "text": "Turney and Littman, 2003;", "ref_id": "BIBREF48" } ], "ref_spans": [ { "start": 562, "end": 570, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": "5.1" }, { "text": "S(x) = cos \uf8eb \uf8ed x, 1 | A | a\u2208A a \u2212 1 | B | b\u2208B b \uf8f6 \uf8f8 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "5.1" }, { "text": "where a positive value means x is stronger, and a negative value means x is weaker. We z-score all three of our metrics, and average the scores for all words associated with characters of each gender.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "5.1" }, { "text": "Book characters have higher power and intellect than generated characters, but relative gender differences are similar between the two datasets (Figure 5) . As hypothesized, feminine characters are most likely to be described by their appearance, and masculine characters are most powerful. The gender differences between masculine and feminine characters for appearance and power persist in matched GPT-3 stories, suggesting that GPT-3 has internally linked gender to these attributes. 
The patterns for intellect show that feminine characters usually score highest, though the insignificant difference in matched GPT-3 stories (p > 0.05) suggests that this attribute may be more affected by other content than by gender. We also test the ability of prompts to steer GPT-3 towards stronger and more intellectual characters. We examine character descriptions in stories generated by prompts in which characters are the subject of high power verbs from Sap et al. (2017)'s connotation frame lexicon, which was created for the study of characters in film. We also examine GPT-3 stories with prompts where characters use cognitive verbs from Bloom's Taxonomy, which is used to measure student learning, such as summarize, interpret, or critique (Anderson et al., 2001). We match verbs based on their lemmatized forms.", "cite_spans": [ { "start": 367, "end": 390, "text": "(Anderson et al., 2001)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 144, "end": 154, "text": "(Figure 5)", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "Figure 6: A comparison of stories generated by all prompts with stories generated by prompts where characters are linked to cognitive or high power verbs. Error bars are 95% confidence intervals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "We find that prompts containing cognitive verbs result in descriptions with higher intellect scores (Figure 6). Prompts containing high power verbs, however, do not lead to a similar change, and nonmasculine characters with high power verbs still have lower power on average than all masculine characters. Traditional power differentials in gender may be challenging to override and require more targeted prompts.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 110, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "The use of GPT-3 for storytelling requires a balance between creativity and controllability to avoid unintended generations. We show that multiple gender stereotypes occur in generated narratives, and that they can emerge even when prompts do not contain explicit gender cues or stereotype-related content. Our study uses prompt design as a possible mechanism for mitigating bias, but we do not intend to shift the responsibility of preventing social harm from the creators of these systems to their users. Future studies can use causal inference and more carefully designed prompts to untangle the factors that influence GPT-3 and other text generation models' narrative outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We use \"feminine character\" to refer to characters with feminine pronouns, honorifics, or names, and likewise for \"masculine character\". See \u00a73.1 for details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The full list of honorifics is in our GitHub repo.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Nicholas Tomlin, Julia Mendelsohn, and Emma Lurie for their helpful feedback on earlier versions of this paper. 
This work was supported by funding from the National Science Foundation (Graduate Research Fellowship DGE-1752814 and grant IIS-1942591).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Persistent anti-Muslim bias in large language models", "authors": [ { "first": "Abubakar", "middle": [], "last": "Abid", "suffix": "" }, { "first": "Maheen", "middle": [], "last": "Farooqi", "suffix": "" }, { "first": "James", "middle": [], "last": "Zou", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2101.05783" ] }, "num": null, "urls": [], "raw_text": "Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Persistent anti-Muslim bias in large language models. arXiv preprint arXiv:2101.05783.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Syntactic and cognitive issues in investigating gendered coreference", "authors": [ { "first": "Lauren", "middle": [], "last": "Ackerman", "suffix": "" } ], "year": 2019, "venue": "Glossa: A Journal of General Linguistics", "volume": "4", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.5334/gjgl.721" ] }, "num": null, "urls": [], "raw_text": "Lauren Ackerman. 2019. Syntactic and cognitive is- sues in investigating gendered coreference. Glossa: A Journal of General Linguistics, 4(1).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "STO-RIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation", "authors": [ { "first": "Nader", "middle": [], "last": "Akoury", "suffix": "" }, { "first": "Shufan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Whiting", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Hood", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6470--6484", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.525" ] }, "num": null, "urls": [], "raw_text": "Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer. 2020. STO- RIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6470-6484, Online. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "GPT-3: Creative potential of NLP. Towards Data Science", "authors": [ { "first": "Vladimir", "middle": [ "Alexeev" ], "last": "", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir Alexeev. 2020. GPT-3: Creative potential of NLP. 
Towards Data Science.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "SemAxis: A lightweight framework to characterize domain-specific word semantics beyond sentiment", "authors": [ { "first": "Jisun", "middle": [], "last": "An", "suffix": "" }, { "first": "Haewoon", "middle": [], "last": "Kwak", "suffix": "" }, { "first": "Yong-Yeol", "middle": [], "last": "Ahn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2450--2461", "other_ids": { "DOI": [ "10.18653/v1/P18-1228" ] }, "num": null, "urls": [], "raw_text": "Jisun An, Haewoon Kwak, and Yong-Yeol Ahn. 2018. SemAxis: A lightweight framework to characterize domain-specific word semantics beyond sentiment. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2450-2461, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives", "authors": [ { "first": "L", "middle": [ "W" ], "last": "Anderson", "suffix": "" }, { "first": "B", "middle": [ "S" ], "last": "Bloom", "suffix": "" }, { "first": "D", "middle": [ "R" ], "last": "Krathwohl", "suffix": "" }, { "first": "P", "middle": [], "last": "Airasian", "suffix": "" }, { "first": "K", "middle": [], "last": "Cruikshank", "suffix": "" }, { "first": "R", "middle": [], "last": "Mayer", "suffix": "" }, { "first": "P", "middle": [], "last": "Pintrich", "suffix": "" }, { "first": "J", "middle": [], "last": "Raths", "suffix": "" }, { "first": "M", "middle": [], "last": "Wittrock", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L.W. Anderson, B.S. Bloom, D.R. Krathwohl, P. Airasian, K. Cruikshank, R. Mayer, P. Pintrich, J. Raths, and M. Wittrock. 2001. A Taxonomy for Learning, Teaching, and Assessing: A Revi- sion of Bloom's Taxonomy of Educational Objec- tives. Longman.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An annotated dataset of coreference in English literature", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Lewke", "suffix": "" }, { "first": "Anya", "middle": [], "last": "Mansoor", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "44--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Bamman, Olivia Lewke, and Anya Mansoor. 2020. An annotated dataset of coreference in En- glish literature. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 44-54, Marseille, France. 
European Language Re- sources Association.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning latent personas of film characters", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Connor", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "352--361", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Bamman, Brendan O'Connor, and Noah A. Smith. 2013. Learning latent personas of film char- acters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 352-361, Sofia, Bul- garia. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Bayesian mixed effects model of literary character", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Underwood", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "370--379", "other_ids": { "DOI": [ "10.3115/v1/P14-1035" ] }, "num": null, "urls": [], "raw_text": "David Bamman, Ted Underwood, and Noah A. Smith. 2014. A Bayesian mixed effects model of literary character. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 370-379, Balti- more, Maryland. Association for Computational Lin- guistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On the dangers of stochastic parrots: Can language models be too big", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Timnit", "middle": [], "last": "Gebru", "suffix": "" }, { "first": "Angelina", "middle": [], "last": "Mcmillan-Major", "suffix": "" }, { "first": "Shmargaret", "middle": [], "last": "Shmitchell", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21", "volume": "", "issue": "", "pages": "610--623", "other_ids": { "DOI": [ "10.1145/3442188.3445922" ] }, "num": null, "urls": [], "raw_text": "Emily M. Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? . In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Trans- parency, FAccT '21, page 610-623, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Singular they and the syntactic representation of gender in English", "authors": [ { "first": "M", "middle": [], "last": "Bronwyn", "suffix": "" }, { "first": "", "middle": [], "last": "Bjorkman", "suffix": "" } ], "year": 2017, "venue": "Glossa: A Journal of General Linguistics", "volume": "2", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.5334/gjgl.374" ] }, "num": null, "urls": [], "raw_text": "Bronwyn M Bjorkman. 2017. Singular they and the syntactic representation of gender in English. 
Glossa: A Journal of General Linguistics, 2(1).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Jane, john... leslie? a historical method for algorithmic gender prediction", "authors": [ { "first": "Cameron", "middle": [], "last": "Blevins", "suffix": "" }, { "first": "Lincoln", "middle": [], "last": "Mullen", "suffix": "" } ], "year": 2015, "venue": "DHQ: Digital Humanities Quarterly", "volume": "9", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cameron Blevins and Lincoln Mullen. 2015. Jane, john... leslie? a historical method for algorithmic gender prediction. DHQ: Digital Humanities Quar- terly, 9(3).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Applications of topic models", "authors": [ { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Yuening", "middle": [], "last": "Hu", "suffix": "" }, { "first": "David", "middle": [], "last": "Mimno", "suffix": "" } ], "year": 2017, "venue": "Foundations and Trends\u00ae in Information Retrieval", "volume": "11", "issue": "2-3", "pages": "143--296", "other_ids": { "DOI": [ "10.1561/1500000030" ] }, "num": null, "urls": [], "raw_text": "Jordan Boyd-Graber, Yuening Hu, and David Mimno. 2017. Applications of topic models. Foundations and Trends\u00ae in Information Retrieval, 11(2-3):143- 296.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "GPT-3 creative fiction", "authors": [ { "first": "Gwern", "middle": [], "last": "Branwen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gwern Branwen. 2020. GPT-3 creative fiction.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "OpenAI API", "authors": [ { "first": "Greg", "middle": [], "last": "Brockman", "suffix": "" }, { "first": "Mira", "middle": [], "last": "Murati", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Welinder", "suffix": "" }, { "first": "Openai", "middle": [], "last": "", "suffix": "" } ], "year": 2020, "venue": "OpenAI Blog", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Brockman, Mira Murati, Peter Welinder, and OpenAI. 2020. OpenAI API. OpenAI Blog.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Language models are few-shot learners", "authors": [ { "first": "Tom", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Jared", "middle": [ "D" ], "last": "Kaplan", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Sastry", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Askell", "suffix": "" }, { "first": "Sandhini", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Ariel", "middle": [], "last": "Herbert-Voss", "suffix": "" }, { "first": "Gretchen", "middle": [], "last": "Krueger", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Henighan", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ziegler", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Clemens", "middle": [], "last": "Winter", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Hesse", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Sigler", "suffix": "" }, { "first": "Mateusz", "middle": [], "last": "Litwin", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Gray", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Chess", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Berner", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Mccandlish", "suffix": "" }, { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" } ], "year": 2020, "venue": "Advances in Neural Information Processing Systems", "volume": "33", "issue": "", "pages": "1877--1901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert- Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Semantics derived automatically from language corpora contain human-like biases", "authors": [ { "first": "Aylin", "middle": [], "last": "Caliskan", "suffix": "" }, { "first": "Joanna", "middle": [ "J" ], "last": "Bryson", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2017, "venue": "Science", "volume": "356", "issue": "6334", "pages": "183--186", "other_ids": { "DOI": [ "10.1126/science.aal4230" ] }, "num": null, "urls": [], "raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. 
Science, 356(6334):183-186.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Toward gender-inclusive coreference resolution", "authors": [ { "first": "Yang", "middle": [], "last": "", "suffix": "" }, { "first": "Trista", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4568--4595", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.418" ] }, "num": null, "urls": [], "raw_text": "Yang Trista Cao and Hal Daum\u00e9 III. 2020. Toward gender-inclusive coreference resolution. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4568-4595, Online. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Creative writing with a machine in the loop: Case studies on slogans and stories", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Anne", "middle": [ "Spencer" ], "last": "Ross", "suffix": "" }, { "first": "Chenhao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "23rd International Conference on Intelligent User Interfaces, IUI '18", "volume": "", "issue": "", "pages": "329--340", "other_ids": { "DOI": [ "10.1145/3172944.3172983" ] }, "num": null, "urls": [], "raw_text": "Elizabeth Clark, Anne Spencer Ross, Chenhao Tan, Yangfeng Ji, and Noah A. Smith. 2018. Creative writing with a machine in the loop: Case stud- ies on slogans and stories. In 23rd International Conference on Intelligent User Interfaces, IUI '18, page 329-340, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Can GPT-3 pass a writer's Turing Test", "authors": [ { "first": "Katherine", "middle": [], "last": "Elkins", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Chun", "suffix": "" } ], "year": 2020, "venue": "Journal of Cultural Analytics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.22148/001c.17212" ] }, "num": null, "urls": [], "raw_text": "Katherine Elkins and Jon Chun. 2020. Can GPT-3 pass a writer's Turing Test? Journal of Cultural Analyt- ics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Empath: Understanding topic signals in large-scale text", "authors": [ { "first": "Ethan", "middle": [], "last": "Fast", "suffix": "" }, { "first": "Binbin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Michael S", "middle": [], "last": "Bernstein", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "4647--4657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ethan Fast, Binbin Chen, and Michael S Bernstein. 2016a. Empath: Understanding topic signals in large-scale text. 
In Proceedings of the 2016 CHI Conference on Human Factors in Computing Sys- tems, pages 4647-4657.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Shirtless and dangerous: Quantifying linguistic signals of gender bias in an online fiction writing community", "authors": [ { "first": "Ethan", "middle": [], "last": "Fast", "suffix": "" }, { "first": "Tina", "middle": [], "last": "Vachovsky", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bernstein", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International AAAI Conference on Web and Social Media", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ethan Fast, Tina Vachovsky, and Michael Bernstein. 2016b. Shirtless and dangerous: Quantifying lin- guistic signals of gender bias in an online fiction writing community. In Proceedings of the Interna- tional AAAI Conference on Web and Social Media, volume 10.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Between brains and breasts-women scientists in fiction film: On the marginalization and sexualization of scientific competence", "authors": [ { "first": "Eva", "middle": [], "last": "Flicker", "suffix": "" } ], "year": 2003, "venue": "Public Understanding of Science", "volume": "12", "issue": "3", "pages": "307--318", "other_ids": { "DOI": [ "10.1177/0963662503123009" ] }, "num": null, "urls": [], "raw_text": "Eva Flicker. 2003. Between brains and breasts-women scientists in fiction film: On the marginalization and sexualization of scientific competence. Public Understanding of Science, 12(3):307-318.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Analyzing gender bias within narrative tropes", "authors": [ { "first": "Dhruvil", "middle": [], "last": "Gala", "suffix": "" }, { "first": "Mohammad", "middle": [ "Omar" ], "last": "Khursheed", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Lerner", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Connor", "suffix": "" }, { "first": "", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science", "volume": "", "issue": "", "pages": "212--217", "other_ids": { "DOI": [ "10.18653/v1/2020.nlpcss-1.23" ] }, "num": null, "urls": [], "raw_text": "Dhruvil Gala, Mohammad Omar Khursheed, Hannah Lerner, Brendan O'Connor, and Mohit Iyyer. 2020. Analyzing gender bias within narrative tropes. In Proceedings of the Fourth Workshop on Natural Lan- guage Processing and Computational Social Sci- ence, pages 212-217, Online. Association for Com- putational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The quiet transformations of literary studies: What thirteen thousand scholars could tell us", "authors": [ { "first": "Andrew", "middle": [], "last": "Goldstone", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Underwood", "suffix": "" } ], "year": 2014, "venue": "New Literary History", "volume": "45", "issue": "3", "pages": "359--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Goldstone and Ted Underwood. 2014. The quiet transformations of literary studies: What thir- teen thousand scholars could tell us. 
New Literary History, 45(3):359-384.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Unsupervised discovery of gendered language through latent-variable modeling", "authors": [ { "first": "Alexander Miserlis", "middle": [], "last": "Hoyle", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Wolf-Sonkin", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1706--1716", "other_ids": { "DOI": [ "10.18653/v1/P19-1167" ] }, "num": null, "urls": [], "raw_text": "Alexander Miserlis Hoyle, Lawrence Wolf-Sonkin, Hanna Wallach, Isabelle Augenstein, and Ryan Cot- terell. 2019. Unsupervised discovery of gendered language through latent-variable modeling. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1706- 1716, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Geek Chic: Smart Women in Popular Culture", "authors": [ { "first": "Sherrie", "middle": [ "A" ], "last": "Inness", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sherrie A. Inness. 2008. Geek Chic: Smart Women in Popular Culture. Palgrave Macmillan.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Gender bias at scale: Evidence from the usage of personal names", "authors": [ { "first": "Brendan", "middle": [ "T" ], "last": "Johns", "suffix": "" }, { "first": "Melody", "middle": [], "last": "Dye", "suffix": "" } ], "year": 2019, "venue": "Behavior Research Methods", "volume": "", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brendan T. Johns and Melody Dye. 2019. Gender bias at scale: Evidence from the usage of personal names. Behavior Research Methods, 51(4).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "How true is GPT-2? An empirical analysis of intersectional occupational biases", "authors": [ { "first": "Hannah", "middle": [], "last": "Kirk", "suffix": "" }, { "first": "Yennie", "middle": [], "last": "Jun", "suffix": "" }, { "first": "Haider", "middle": [], "last": "Iqbal", "suffix": "" }, { "first": "Elias", "middle": [], "last": "Benussi", "suffix": "" }, { "first": "Filippo", "middle": [], "last": "Volpin", "suffix": "" }, { "first": "Frederic", "middle": [ "A" ], "last": "Dreyer", "suffix": "" }, { "first": "Aleksandar", "middle": [], "last": "Shtedritski", "suffix": "" }, { "first": "Yuki", "middle": [ "M" ], "last": "Asano", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hannah Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin, Frederic A. Dreyer, Aleksandar Sht- edritski, and Yuki M. Asano. 2021. How true is gpt- 2? 
an empirical analysis of intersectional occupa- tional biases.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The geometry of culture: Analyzing the meanings of class through word embeddings", "authors": [ { "first": "Austin", "middle": [ "C" ], "last": "Kozlowski", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Taddy", "suffix": "" }, { "first": "James", "middle": [ "A" ], "last": "Evans", "suffix": "" } ], "year": 2019, "venue": "American Sociological Review", "volume": "84", "issue": "5", "pages": "905--949", "other_ids": { "DOI": [ "10.1177/0003122419877135" ] }, "num": null, "urls": [], "raw_text": "Austin C. Kozlowski, Matt Taddy, and James A. Evans. 2019. The geometry of culture: Analyzing the mean- ings of class through word embeddings. American Sociological Review, 84(5):905-949.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Social characters: The hierarchy of gender in contemporary English-language fiction", "authors": [ { "first": "Eve", "middle": [], "last": "Kraicer", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Piper", "suffix": "" } ], "year": 2018, "venue": "Cultural Analytics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eve Kraicer and Andrew Piper. 2018. Social char- acters: The hierarchy of gender in contemporary English-language fiction. Cultural Analytics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Why are we like this?: The AI architecture of a co-creative storytelling game", "authors": [ { "first": "Max", "middle": [], "last": "Kreminski", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Dickinson", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Mateas", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Wardrip-Fruin", "suffix": "" } ], "year": 2020, "venue": "International Conference on the Foundations of Digital Games, FDG '20", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3402942.3402953" ] }, "num": null, "urls": [], "raw_text": "Max Kreminski, Melanie Dickinson, Michael Mateas, and Noah Wardrip-Fruin. 2020. Why are we like this?: The AI architecture of a co-creative story- telling game. In International Conference on the Foundations of Digital Games, FDG '20, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Gender as a variable in naturallanguage processing: Ethical considerations", "authors": [ { "first": "Brian", "middle": [], "last": "Larson", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing", "volume": "", "issue": "", "pages": "1--11", "other_ids": { "DOI": [ "10.18653/v1/W17-1601" ] }, "num": null, "urls": [], "raw_text": "Brian Larson. 2017. Gender as a variable in natural- language processing: Ethical considerations. In Pro- ceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 1-11, Valencia, Spain. 
Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Portrayals of male and female scientists in television programs popular among middle school-age children", "authors": [ { "first": "Marilee", "middle": [], "last": "Long", "suffix": "" }, { "first": "Jocelyn", "middle": [], "last": "Steinke", "suffix": "" }, { "first": "Brooks", "middle": [], "last": "Applegate", "suffix": "" }, { "first": "Maria", "middle": [ "Knight" ], "last": "Lapinski", "suffix": "" }, { "first": "Marne", "middle": [ "J" ], "last": "Johnson", "suffix": "" }, { "first": "Sayani", "middle": [], "last": "Ghosh", "suffix": "" } ], "year": 2010, "venue": "Science Communication", "volume": "32", "issue": "3", "pages": "356--382", "other_ids": { "DOI": [ "10.1177/1075547009357779" ] }, "num": null, "urls": [], "raw_text": "Marilee Long, Jocelyn Steinke, Brooks Applegate, Maria Knight Lapinski, Marne J. Johnson, and Sayani Ghosh. 2010. Portrayals of male and female scientists in television programs popular among mid- dle school-age children. Science Communication, 32(3):356-382.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Content analysis of textbooks via natural language processing: Findings on gender, race, and ethnicity in Texas U.S. history textbooks", "authors": [ { "first": "Li", "middle": [], "last": "Lucy", "suffix": "" }, { "first": "Dorottya", "middle": [], "last": "Demszky", "suffix": "" }, { "first": "Patricia", "middle": [], "last": "Bromley", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2020, "venue": "AERA Open", "volume": "6", "issue": "3", "pages": "", "other_ids": { "DOI": [ "10.1177/2332858420940312" ] }, "num": null, "urls": [], "raw_text": "Li Lucy, Dorottya Demszky, Patricia Bromley, and Dan Jurafsky. 2020. Content analysis of textbooks via natural language processing: Findings on gender, race, and ethnicity in Texas U.S. history textbooks. AERA Open, 6(3):2332858420940312.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Greg Corrado, Kai Chen, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. 
In Proceedings of the International Conference on Learning Representations (ICLR), pages 1-12.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Science faculty's subtle gender biases favor male students", "authors": [ { "first": "Corinne", "middle": [ "A" ], "last": "Moss-Racusin", "suffix": "" }, { "first": "John", "middle": [ "F" ], "last": "Dovidio", "suffix": "" }, { "first": "Victoria", "middle": [ "L" ], "last": "Brescoll", "suffix": "" }, { "first": "Mark", "middle": [ "J" ], "last": "Graham", "suffix": "" }, { "first": "Jo", "middle": [], "last": "Handelsman", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the National Academy of Sciences", "volume": "109", "issue": "41", "pages": "16474--16479", "other_ids": { "DOI": [ "10.1073/pnas.1211286109" ] }, "num": null, "urls": [], "raw_text": "Corinne A. Moss-Racusin, John F. Dovidio, Victoria L. Brescoll, Mark J. Graham, and Jo Handelsman. 2012. Science faculty's subtle gender biases favor male students. Proceedings of the National Academy of Sciences, 109(41):16474-16479.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Counter-fitting word vectors to linguistic constraints", "authors": [ { "first": "Nikola", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "Diarmuid", "middle": [], "last": "\u00d3 S\u00e9aghdha", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "Lina", "middle": [ "M" ], "last": "Rojas-Barahona", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "142--148", "other_ids": { "DOI": [ "10.18653/v1/N16-1018" ] }, "num": null, "urls": [], "raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid \u00d3 S\u00e9aghdha, Blaise Thomson, Milica Ga\u0161i\u0107, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142-148, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Harvesting implicit group attitudes and beliefs from a demonstration web site", "authors": [ { "first": "Brian", "middle": [ "A" ], "last": "Nosek", "suffix": "" }, { "first": "Mahzarin", "middle": [ "R" ], "last": "Banaji", "suffix": "" }, { "first": "Anthony", "middle": [ "G" ], "last": "Greenwald", "suffix": "" } ], "year": 2002, "venue": "Group Dynamics: Theory, Research, and Practice", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian A Nosek, Mahzarin R Banaji, and Anthony G Greenwald. 2002. Harvesting implicit group attitudes and beliefs from a demonstration web site.
Group Dynamics: Theory, Research, and Practice, 6(1):101.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Playing with unicorns: AI dungeon and citizen NLP", "authors": [ { "first": "Rita", "middle": [], "last": "Raley", "suffix": "" }, { "first": "Minh", "middle": [], "last": "Hua", "suffix": "" } ], "year": 2020, "venue": "Digital Humanities Quarterly", "volume": "14", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rita Raley and Minh Hua. 2020. Playing with unicorns: AI dungeon and citizen NLP. Digital Humanities Quarterly, 14(4).", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Gender bias in coreference resolution", "authors": [ { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Naradowsky", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Leonard", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "8--14", "other_ids": { "DOI": [ "10.18653/v1/N18-2002" ] }, "num": null, "urls": [], "raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Connotation frames of power and agency in modern films", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Marcella", "middle": [ "Cindy" ], "last": "Prasettio", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2329--2334", "other_ids": { "DOI": [ "10.18653/v1/D17-1247" ] }, "num": null, "urls": [], "raw_text": "Maarten Sap, Marcella Cindy Prasettio, Ari Holtzman, Hannah Rashkin, and Yejin Choi. 2017. Connotation frames of power and agency in modern films. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2329-2334, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Microsoft teams up with OpenAI to exclusively license GPT-3 language model", "authors": [ { "first": "Kevin", "middle": [], "last": "Scott", "suffix": "" } ], "year": 2020, "venue": "The Official Microsoft Blog", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Scott. 2020. Microsoft teams up with OpenAI to exclusively license GPT-3 language model.
The Official Microsoft Blog.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Towards Controllable Biases in Language Generation", "authors": [ { "first": "Emily", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Prem", "middle": [], "last": "Natarajan", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "3239--3254", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.291" ] }, "num": null, "urls": [], "raw_text": "Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2020. Towards Controllable Biases in Language Generation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3239-3254, Online. Association for Computational Linguistics.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "The woman worked as a babysitter: On biases in language generation", "authors": [ { "first": "Emily", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Premkumar", "middle": [], "last": "Natarajan", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3407--3412", "other_ids": { "DOI": [ "10.18653/v1/D19-1339" ] }, "num": null, "urls": [], "raw_text": "Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407-3412, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "\"You are grounded!\": Latent name artifacts in pre-trained language models", "authors": [ { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6850--6861", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.556" ] }, "num": null, "urls": [], "raw_text": "Vered Shwartz, Rachel Rudinger, and Oyvind Tafjord. 2020. \"You are grounded!\": Latent name artifacts in pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6850-6861, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Gender roles & occupations: A look at character attributes and job-related aspirations in film and television", "authors": [ { "first": "Stacy", "middle": [ "L" ], "last": "Smith", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Choueiti", "suffix": "" }, { "first": "Ashley", "middle": [], "last": "Prescott", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Pieper", "suffix": "" } ], "year": 2012, "venue": "Geena Davis Institute on Gender in Media", "volume": "", "issue": "", "pages": "1--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stacy L Smith, Marc Choueiti, Ashley Prescott, and Katherine Pieper. 2012. Gender roles & occupations: A look at character attributes and job-related aspirations in film and television. Geena Davis Institute on Gender in Media, pages 1-46.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Popular baby names: Beyond the top 1000 names", "authors": [], "year": 2020, "venue": "National Data", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Social Security Administration. 2020. Popular baby names: Beyond the top 1000 names. National Data.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Measuring praise and criticism: Inference of semantic orientation from association", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Michael", "middle": [ "L" ], "last": "Littman", "suffix": "" } ], "year": 2003, "venue": "ACM Trans. Inf. Syst", "volume": "21", "issue": "4", "pages": "315--346", "other_ids": { "DOI": [ "10.1145/944012.944013" ] }, "num": null, "urls": [], "raw_text": "Peter D. Turney and Michael L. Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Trans. Inf. Syst., 21(4):315-346.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "The transformation of gender in English-language fiction", "authors": [ { "first": "William", "middle": [ "E" ], "last": "Underwood", "suffix": "" }, { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "Sabrina", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2018, "venue": "Journal of Cultural Analytics", "volume": "", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.22148/16.019" ] }, "num": null, "urls": [], "raw_text": "William E Underwood, David Bamman, and Sabrina Lee. 2018. The transformation of gender in English-language fiction.
Journal of Cultural Analytics, 1(1).", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Universal adversarial triggers for attacking and analyzing NLP", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Shi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Kandpal", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2153--2162", "other_ids": { "DOI": [ "10.18653/v1/D19-1221" ] }, "num": null, "urls": [], "raw_text": "Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153-2162, Hong Kong, China. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "GPT-3 can assign different gender pronouns to a character across different generations, as shown in this example using a prompt, in bold, pulled from Mahasweta Devi's Imaginary Maps." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Frequency of masculine (M), feminine (F), and other (O) main prompt characters in our datasets. Bars are colored by gendering method." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "On average, there are more masculine characters in each GPT-3 story or book excerpt. Each column is the gender of the prompt character, and the bars are colored by gendering method. Error bars are 95% confidence intervals." }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Prompt character gender is related to the probability of a generated story continuing the family and politics topics. Each dot is a GPT-3 story, and the larger dots are means with 95% confidence intervals." }, "TABREF1": { "num": null, "type_str": "table", "text": "", "html": null, "content": "
Feminine and masculine main characters are associated with different topics, even in the matched prompt setup. These topics have the biggest \u2206T in all GPT-3 stories, and these differences are statistically significant (t-test with Bonferroni correction, p < 0.05).
" } } } }