{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:03:01.427321Z"
},
"title": "Situated Data, Situated Systems: A Methodology to Engage with Power Relations in Natural Language Processing Research",
"authors": [
{
"first": "Lucy",
"middle": [],
"last": "Havens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "lucy.havens@ed.ac.uk"
},
{
"first": "Melissa",
"middle": [],
"last": "Terras",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "m.terras@ed.ac.uk"
},
{
"first": "Benjamin",
"middle": [],
"last": "Bach",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "bbach@inf.ed.ac.uk"
},
{
"first": "Beatrice",
"middle": [],
"last": "Alex",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "balex@ed.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a bias-aware methodology to engage with power relations in natural language processing (NLP) research. NLP research rarely engages with bias in social contexts, limiting its ability to mitigate bias. While researchers have recommended actions, technical methods, and documentation practices, no methodology exists to integrate critical reflections on bias with technical NLP methods. In this paper, after an extensive and interdisciplinary literature review, we contribute a bias-aware methodology for NLP research. We also contribute a definition of biased text, a discussion of the implications of biased NLP systems, and a case study demonstrating how we are executing the bias-aware methodology in research on archival metadata descriptions. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/. 1 \"A belief that one's own racial or ethnic group is superior\" (Oxford English Dictionary, 2013c). 2 \"[P]rejudice, stereotyping, or discrimination, typically against women, on the basis of sex\" (Oxford English Dictionary, 2013d). 3 \"The belief that people can be distinguished or characterized, esp. as inferior, on the basis of their social class\" (Oxford English Dictionary, 2013a).",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a bias-aware methodology to engage with power relations in natural language processing (NLP) research. NLP research rarely engages with bias in social contexts, limiting its ability to mitigate bias. While researchers have recommended actions, technical methods, and documentation practices, no methodology exists to integrate critical reflections on bias with technical NLP methods. In this paper, after an extensive and interdisciplinary literature review, we contribute a bias-aware methodology for NLP research. We also contribute a definition of biased text, a discussion of the implications of biased NLP systems, and a case study demonstrating how we are executing the bias-aware methodology in research on archival metadata descriptions. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/. 1 \"A belief that one's own racial or ethnic group is superior\" (Oxford English Dictionary, 2013c). 2 \"[P]rejudice, stereotyping, or discrimination, typically against women, on the basis of sex\" (Oxford English Dictionary, 2013d). 3 \"The belief that people can be distinguished or characterized, esp. as inferior, on the basis of their social class\" (Oxford English Dictionary, 2013a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Analysis of computer systems has raised awareness of their biases, prompting researchers to make recommendations to mitigate harms that biased computer systems cause. Analysis has shown computer systems exhibiting biases through racism 1 (Noble, 2018), sexism 2 (Perez, 2019) , and classism 3 (D'Ignazio and Klein, 2020) . This list of harms is not exhaustive; biased computer systems may also harm people based on ability, citizenship, and any other identity characteristic. To mitigate harms from biased computer systems, researchers have recommended actions, methods, and practices. However, none of the recommendations comprehensively address the complexity of the problems bias causes.",
"cite_spans": [
{
"start": 262,
"end": 275,
"text": "(Perez, 2019)",
"ref_id": "BIBREF38"
},
{
"start": 308,
"end": 320,
"text": "Klein, 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Considering the numerous types of bias that may enter a natural language processing (NLP) system, places that bias may enter, and harms that bias may cause, we propose a bias-aware methodology to comprehensively address the consequences of bias for NLP research. Our methodology integrates critical reflection on social influences on and implications of NLP research with technical NLP methods. To scope our research direction and inform our methodology, we draw on an interdisciplinary selection of literature that includes work from the humanities, arts, and social sciences. We intend the methodology to (a) support the reproducibility of NLP research, enabling researchers to better understand which perspectives were considered in the research; and (b) diversify perspectives in NLP systems, guiding researchers in explicitly communicating the social context their research so others can situate future research in contexts that have yet to be investigated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We begin with our bias statement ( \u00a72) and motivations for proposing a bias-aware NLP research methodology ( \u00a73). Next, we summarize the interdisciplinary literature informing our methodology ( \u00a74), explain the methodology ( \u00a75), and demonstrate it with a case study of our ongoing research with bias in archival metadata descriptions ( \u00a76). We end with a summary and vision for future NLP research ( \u00a77).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We situate this paper in the United Kingdom (UK) in the 21 st century, writing as authors who primarily work as academic researchers. We identify as three females and one male; and as American, German, and Scots. Together we have experience in natural language processing, human-computer interaction, data visualization, digital humanities, and digital cultural heritage. In this paper, we propose a biasaware methodology for NLP researchers. We define biased language as written or spoken language that creates or reinforces inequitable power relations among people, harming certain people through simplified, dehumanizing, or judgmental words or phrases that restrict their identity; and privileging other people through words or phrases that favor their identity. Biased language causes representational harms (Vainapel et al., 2015; Sweeney, 2013) , or the restriction of a person's identity through the use of hyperbolic or simplistic language (Blodgett et al., 2020; Talbot, 2003) . NLP systems built on biased language become biased computer systems, which \"systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others\" (Friedman and Nissenbaum, 1996, p. 332) . Representational harms may cause inequitable system performance for different groups of people, leading to allocative harms (Zhang et al., 2020; Noble, 2018) , or the denial of a resource or opportunity (Blodgett et al., 2020) . The people who experience harms from biased NLP systems varies with the context in which people use the system and with the language source on which the system relies. Moreover, people may not be aware they are being harmed given the black-box nature of many systems (Koene et al., 2017) . That being said, whether or not people realize they are being prejudiced against, the people harmed will be those excluded from the most powerful social group.",
"cite_spans": [
{
"start": 813,
"end": 836,
"text": "(Vainapel et al., 2015;",
"ref_id": "BIBREF46"
},
{
"start": 837,
"end": 851,
"text": "Sweeney, 2013)",
"ref_id": "BIBREF43"
},
{
"start": 949,
"end": 972,
"text": "(Blodgett et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 973,
"end": 986,
"text": "Talbot, 2003)",
"ref_id": "BIBREF45"
},
{
"start": 1179,
"end": 1218,
"text": "(Friedman and Nissenbaum, 1996, p. 332)",
"ref_id": null
},
{
"start": 1345,
"end": 1365,
"text": "(Zhang et al., 2020;",
"ref_id": "BIBREF50"
},
{
"start": 1366,
"end": 1378,
"text": "Noble, 2018)",
"ref_id": "BIBREF32"
},
{
"start": 1424,
"end": 1447,
"text": "(Blodgett et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 1717,
"end": 1737,
"text": "(Koene et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Statement",
"sec_num": "2"
},
{
"text": "3 Why does NLP need a Bias-Aware Methodology?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Statement",
"sec_num": "2"
},
{
"text": "Statistics report a homogeneity of perspectives among students in computer-related disciplines that do not reflect the diversity of people affected by computer systems, risking a homogeneity of perspectives in the technology workforce and the computer systems that workforce develops. For academic year 2018/19, statistics on students in the UK 4 report that the dominant group of people studying computer-related subjects overwhelmingly are white males without a disability. 5 Moreover, differences in total numbers of surveyed students across identity characteristics (e.g. sex, ethnicity, disability) skew the statistics in favor of those reported as white, male, and without a disability. Lack of diverse perspectives among students in computer-related disciplines may limit the diversity of perspectives in the workforce, where the development of NLP and other computer systems occurs. As of 2019, the Wise Campaign reported that women comprise 24% of the core-STEM workforce in the UK. 6 Lack of diverse perspectives in the development of NLP and other computer systems risks technological decisions that exclude groups of people (\"technical bias\"), as well as applications of computer systems that oppress groups of people (\"emergent bias\") (Friedman and Nissenbaum, 1996) .",
"cite_spans": [
{
"start": 1248,
"end": 1279,
"text": "(Friedman and Nissenbaum, 1996)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Statement",
"sec_num": "2"
},
{
"text": "That being said, even if student demographics in NLP and computer-related disciplines become more balanced, the data underlying NLP systems will still cause bias. Theories of discourse state that language (written or spoken) reflects and reinforces \"society, culture and power\" (Bucholtz, 2003, p. 45) . In turn, NLP systems built on human language reflect and reinforce power relations in society, inheriting biases in language (Caliskan et al., 2017 ) such as stereotypical expectations of genders (Haines et al., 2016) and ethnicities (Garg et al., 2018) . Drawing on feminist theory, we argue that all language is biased, because language records human interpretations that are situated in a specific time, place, and worldview (Haraway, 1988) . Consequently, all NLP systems are subject to biases originating in the social contexts in which the systems are built (\"preexisting bias\") (Friedman and Nissenbaum, 1996) . Psychology research suggests that biased language causes representational harms: Vainapel et al. (2015) studied how masculine-generic language (e.g. \"he\") versus gender-neutral language (e.g. \"he or she\") affected participants' responses to questionnaires. The authors report that women gave themselves lower scores on intrinsic goal orientation and task value in questionnaires using masculine-generic language in contrast to questionnaires using gender-neutral language. 7 The study provides an example of how biased language may harm select groups of people, because the participants reported as women experienced a restriction of their identity, influencing their behavior to conform to stereotypes.",
"cite_spans": [
{
"start": 278,
"end": 301,
"text": "(Bucholtz, 2003, p. 45)",
"ref_id": null
},
{
"start": 429,
"end": 451,
"text": "(Caliskan et al., 2017",
"ref_id": "BIBREF6"
},
{
"start": 500,
"end": 521,
"text": "(Haines et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 538,
"end": 557,
"text": "(Garg et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 732,
"end": 747,
"text": "(Haraway, 1988)",
"ref_id": "BIBREF20"
},
{
"start": 889,
"end": 920,
"text": "(Friedman and Nissenbaum, 1996)",
"ref_id": "BIBREF13"
},
{
"start": 1004,
"end": 1026,
"text": "Vainapel et al. (2015)",
"ref_id": "BIBREF46"
},
{
"start": 1396,
"end": 1397,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Statement",
"sec_num": "2"
},
{
"text": "Acknowledging the harms of biased language and biased NLP systems, researchers have proposed approaches mitigating bias, though no approach has fully removed bias from an NLP dataset or algorithm. To mitigate bias in datasets, Webster et al. (2018) produced a dataset of gendered ambiguous pronouns (GAP) to provide an unbiased text source on which to train NLP algorithms. However, the GAP dataset reverses gender roles, assuming that gender is a binary rather than a spectrum. 8 Any NLP system that uses the GAP dataset thus adopts its preexisting gender bias. Efforts to mitigate bias in algorithms are similarly limited, focusing on technical performance rather than performance in social contexts. Zhao et al. (2018) describe an approach to debias word embeddings, writing, \"Finally we show that given sufficiently strong alternative cues, systems can ignore their bias\" (p. 16). However, the paper does not explain the intended social context in which to apply the authors' approach, risking emergent bias. 9 Additionally, Gonen and Goldberg (2019) demonstrate how this debiasing approach hides, rather than removes, bias. In our bias-aware methodology, we describe documentation and user research practices that facilitate transparent communication of biases that may be present in NLP systems, facilitating reflection on how to include more diverse perspectives and empower underrepresented people.",
"cite_spans": [
{
"start": 227,
"end": 248,
"text": "Webster et al. (2018)",
"ref_id": "BIBREF48"
},
{
"start": 703,
"end": 721,
"text": "Zhao et al. (2018)",
"ref_id": "BIBREF51"
},
{
"start": 1029,
"end": 1054,
"text": "Gonen and Goldberg (2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Statement",
"sec_num": "2"
},
{
"text": "To inform our proposed bias-aware NLP research methodology, we draw on an interdisciplinary corpus of literature from computer science, data science, the humanities, the arts, and the social sciences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interdisciplinary Literature Review",
"sec_num": "4"
},
{
"text": "NLP and ML scholars have recommended actions to diversify perspectives in technological research, recognizing the value of diversity to bias mitigation. Blodgett et al. (2020) and Crawford (2017) recommend interdisciplinary collaboration so researchers can learn from humanistic, artistic, and sociological disciplines regarding human behavior, helping researchers to more effectively anticipate harms that computer systems may cause, in addition to benefits they may bring, addressing risks of emergent bias. They also recommend engaging with the people affected by NLP and other computer systems, testing on more diverse populations to address the risk of technical bias, and rethinking power relations between those who create and those who are affected by computer systems to address the risk of preexisting bias. Though these recommendations address the three types of bias that may enter an NLP system, they do not articulate how to identify relevant people to include in the development and testing of NLP systems. Our bias-aware methodology builds on recommendations from Blodgett et al. (2020) and Crawford (2017) by outlining how to identify and include stakeholders in NLP research ( \u00a75.1).",
"cite_spans": [
{
"start": 153,
"end": 175,
"text": "Blodgett et al. (2020)",
"ref_id": "BIBREF3"
},
{
"start": 180,
"end": 195,
"text": "Crawford (2017)",
"ref_id": "BIBREF9"
},
{
"start": 1080,
"end": 1102,
"text": "Blodgett et al. (2020)",
"ref_id": "BIBREF3"
},
{
"start": 1107,
"end": 1122,
"text": "Crawford (2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interdisciplinary Literature Review",
"sec_num": "4"
},
{
"text": "D'Ignazio and Klein (2020) propose data feminism as an approach to addressing bias in data science. They define data feminism as, \"a way of thinking about data, both their uses and their limits, that is informed by direct experience, by a commitment to action, and by intersectional feminist thought\" (p. 8). 10 Data feminism has seven principles: examine power, challenge power, elevate emotion and embodiment, rethink binaries and hierarchies, embrace pluralism, consider context, and make labor visible. These principles facilitate critical reflection on the impacts of data's collection and use in social contexts. Our bias-aware methodology tailors these principles to NLP research, outlining activities that encourage researchers to consider influences on and implications of their work beyond the NLP community ( \u00a75.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interdisciplinary Literature Review",
"sec_num": "4"
},
{
"text": "Within the NLP research community, Bender and Friedman (2018) recommend improved documentation practices to mitigate emergent, technical, and preexisting biases. They recommend all NLP research includes a \"data statement,\" which they describe as, \"a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software\" (p. 587). Aimed at developers and users of NLP systems, data statements reduce the risk of emergent bias. The authors also note: \"As systems are being built, data statements enable developers and researchers to make informed choices about training sets and to flag potential underrepresented populations who may be overlooked or treated unfairly\" (p. 599), helping authors of data statements reduce the risk of technical and preexisting biases. A data statement serves as guiding documentation for the case study approach we propose in our bias-aware methodology ( \u00a75.2), documenting the specific context in which NLP researchers work. Our bias-aware methodology guides research activities before, during, and after the writing of a data statement: for researchers reading data statements to find a dataset for an NLP system, our methodology guides their evaluation of a dataset's suitability for research; for researchers writing data statements, our methodology guides their documentation of the data collection process.",
"cite_spans": [
{
"start": 35,
"end": 61,
"text": "Bender and Friedman (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interdisciplinary Literature Review",
"sec_num": "4"
},
{
"text": "In addition to technological disciplines, our methodology draws on critical discourse analysis (van Leeuwen, 2009) , participatory action research (Reid and Frisby, 2008; Swantz, 2008) , intersectionality (Crenshaw, 1991; D'Ignazio and Klein, 2020) , feminism (Haraway, 1988; Harding, 1995; Moore, 2018) , and design (Martin and Hanington, 2012) . Participatory action research provides a way for NLP researchers to diversify perspectives in their research, engaging with the social context that influences and is affected by NLP systems. Intersectionality reminds researchers of the multitude of experiences of privilege and oppression that bias causes, because no single identity characteristic determines whether a person is \"dominant\" (favored) or \"minoritized\" (harmed) (D'Ignazio and Klein, 2020). The case study approach common to design methods enables a researcher to make progress on addressing bias through explicitly situating research in a specific time and place, and conducting user research with people to understand their power relations in that time and place. Feminist theory values perspectives at the margins, encouraging researchers to engage with people who are excluded from the dominant group in a social context. Feminist theorist Harding (1995) writes, \"In order to gain a causal critical view of the interests and values that constitute the dominant conceptual projects...one must start from the lives excluded as origins of their design -from 'marginal' lives\" (p. 341). Our bias-aware research methodology includes collaboration with people at the margins of NLP research in an effort to empower minoritized people.",
"cite_spans": [
{
"start": 95,
"end": 114,
"text": "(van Leeuwen, 2009)",
"ref_id": "BIBREF47"
},
{
"start": 147,
"end": 170,
"text": "(Reid and Frisby, 2008;",
"ref_id": "BIBREF39"
},
{
"start": 171,
"end": 184,
"text": "Swantz, 2008)",
"ref_id": "BIBREF42"
},
{
"start": 205,
"end": 221,
"text": "(Crenshaw, 1991;",
"ref_id": "BIBREF10"
},
{
"start": 222,
"end": 248,
"text": "D'Ignazio and Klein, 2020)",
"ref_id": "BIBREF12"
},
{
"start": 260,
"end": 275,
"text": "(Haraway, 1988;",
"ref_id": "BIBREF20"
},
{
"start": 276,
"end": 290,
"text": "Harding, 1995;",
"ref_id": "BIBREF21"
},
{
"start": 291,
"end": 303,
"text": "Moore, 2018)",
"ref_id": "BIBREF31"
},
{
"start": 317,
"end": 345,
"text": "(Martin and Hanington, 2012)",
"ref_id": "BIBREF29"
},
{
"start": 1257,
"end": 1271,
"text": "Harding (1995)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interdisciplinary Literature Review",
"sec_num": "4"
},
{
"text": "Our bias-aware methodology has three main activities: examining power relations ( \u00a75.1), explaining the bias of focus ( \u00a75.2), and applying NLP methods ( \u00a75.3). Though we discuss the activities individually, we recommend researchers execute them in parallel because each activity informs the others. We aim for the methodology to include activities that researchers may adapt to their own research context, be their focus on algorithm development, adaptation, or application; or on dataset creation. We hope for this paper to begin a dialogue on tailoring a bias-aware methodology to different types of NLP research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Bias-aware Methodology",
"sec_num": "5"
},
{
"text": "An NLP researcher executing the bias-aware methodology will document the distribution of power in the social context relevant to their research and language source. In the bias-aware methodology, a researcher considers language to be a partial record that provides knowledge situated in a specific time, place, and perspective. To understand which people's perspectives their language source (\"the data\") includes and excludes, an NLP researcher will identify stakeholders, or those who are represented in, use, manage, or provide the data. Specifically, NLP research stakeholders are (1) the researcher(s), (2) producers of the data, (3) institutions providing access to the data, (4) people represented in the data, and (5) people who use the data. To investigate their stakeholders' power relations, an NLP researcher will observe who dominates the social setting(s) relevant to their research, and who experiences minoritization in the same setting(s). After identifying the stakeholders, the researcher will document their roles as dominant or minoritized, along with any limitations to their identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examining Power Relations Stakeholder Identification",
"sec_num": "5.1"
},
{
"text": "To understand how privilege and oppression are experienced among stakeholders, an NLP researcher will conduct participatory action research (PAR) (Reid and Frisby, 2008; Swantz, 2008) with representative individuals from all five stakeholder groups. Researchers who conduct PAR attempt to establish collaborative relationships with representatives from their groups of stakeholders. Researchers are not experts bringing NLP systems to stakeholders; rather, researchers and stakeholders collaboratively study a social context to understand how NLP systems could empower people, particularly minoritized people. Instead of seeking an objective perspective, researchers foreground individual stakeholder perspectives, recording them as situated in a specific time and place, and using their multiplicity to gain insight into the complexity of the research's social context. To understand how NLP research can empower people in a specific social context, we propose four power relations questions 11 for NLP researchers to answer: (1) who or what is included in the research, (2) who or what is excluded from the research, (3) how will the research define knowledge, and (4) who has agency and who can be empowered?",
"cite_spans": [
{
"start": 146,
"end": 169,
"text": "(Reid and Frisby, 2008;",
"ref_id": "BIBREF39"
},
{
"start": 170,
"end": 183,
"text": "Swantz, 2008)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stakeholder Collaboration",
"sec_num": null
},
{
"text": "To understand the impacts of dominant people's interests and values, research following a bias-aware methodology will begin from the perspective of minoritized people, those who are typically excluded as a result (even if unintentional) of the interests and values of dominant people. The research will define knowledge as situated in specific times, places, and perspectives. The widespread availability of language as digital data may give the illusion of universal representation. However, critical discourse analysis reminds the NLP researcher that their data, composed of discourses, 12 are \"socially constructed ways of knowing some aspect of reality\" (van Leeuwen, 2009, p. 141) . Social hierarchies influence the data that becomes widely available, rendering minoritized groups of people invisible due to their exclusion from the data, or misrepresenting them due to their exclusion from the data collection process.",
"cite_spans": [
{
"start": 658,
"end": 685,
"text": "(van Leeuwen, 2009, p. 141)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stakeholder Collaboration",
"sec_num": null
},
{
"text": "An NLP researcher will weigh insights gathered from different stakeholder groups equally, making the research's knowledge multi-faceted. Explicit documentation of the time, place, and perspective that produced the knowledge will inform future NLP research. Should a future researcher wish to reproduce the research, the documentation will guide the future researcher in seeking the proper social context. Should a future researcher wish to build upon the research, they will be able to compare and contrast the research's social setting with their own, guiding them in determining potential contributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stakeholder Collaboration",
"sec_num": null
},
{
"text": "In situations where the researcher cannot conduct PAR with stakeholders, the researcher will write a data biography. 13 A data biography documents where data were collected and stored, who collected and owns the data, and why, when, and how the data were collected (Krause, 2019) . Writing a data biography facilitates critical reflection on the social influences on and social implications of a dataset, informing technical decisions when applying NLP methods. Datasets may circulate oppression of minoritized groups through inclusion and through omission. The key to recognizing who is dominant and minoritized is understanding that an individual may be both; power relations vary with the context of research.",
"cite_spans": [
{
"start": 265,
"end": 279,
"text": "(Krause, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unavailable Stakeholders",
"sec_num": null
},
{
"text": "When explaining the type of bias on which NLP research focuses, a researcher will provide a definition and explain how this type of bias relates to other types of bias. For example, AllSides.com's ratings may guide the classification of political bias in news, 14 Hanson et al.'s (2015) Accessible Writing Guide may inform research with stakeholders who include people with disabilities, and Hitti et al. (2019) provide a model for how to clearly define and classify gender bias in collaboration with interdisciplinary experts. Table 1 provides examples of gender biased language organized into their gender bias taxonomy. When",
"cite_spans": [
{
"start": 261,
"end": 263,
"text": "14",
"ref_id": null
}
],
"ref_spans": [
{
"start": 528,
"end": 535,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explaining the Bias of Focus",
"sec_num": "5.2"
},
{
"text": "Table 1: Biased text examples classified into the gender bias taxonomy of Hitti et al. (2019). Structural bias, gender generalization: \"A lawyer must always carry his phone.\" Structural bias, explicit marking of sex: \"The role of a waitress is overlooked by the restaurant owners.\" Contextual bias, societal stereotype: \"The event was sports-themed for all the fathers volunteering.\" Contextual bias, behavioral stereotype: \"All girls are sensitive.\"",
"cite_spans": [
{
"start": 74,
"end": 93,
"text": "Hitti et al. (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explaining the Bias of Focus",
"sec_num": "5.2"
},
{
"text": "following the bias-aware methodology, NLP research to create annotated datasets for other types of bias will similarly include collaboration with relevant disciplinary experts (i.e. racial bias with critical race theory experts) to define and categorize types of bias relevant to the research. When writing a data statement's curation rationale, an NLP researcher will include a definition of their bias of focus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavioral Stereotype",
"sec_num": null
},
{
"text": "In the answers to the power relations questions, an NLP researcher will describe how they consider intragroup differences within their stakeholder groups, in addition to differences between dominating and minoritized stakeholder groups, because the intersection of identity characteristics, rather than one identity characteristic in isolation, determines how people experience oppression (Crenshaw, 1991) . Due to the complexity that intersecting identity characteristics add to evaluations of bias, in the bias-aware methodology, an NLP researcher will use case studies. Case studies gather information in a clearlydefined context and present the resulting knowledge as connected to a specific time, place, and people.",
"cite_spans": [
{
"start": 389,
"end": 405,
"text": "(Crenshaw, 1991)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Behavioral Stereotype",
"sec_num": null
},
{
"text": "To conduct a case study, an NLP researcher will \"determine a problem, make initial hypotheses, conduct research through interviews, observations, and other forms of information gathering [such as PAR], revise hypotheses and theory, and tell a story\" (Martin and Hanington, 2012, p. 28) . Feminist theory's focus on agency and lived experience as situated in a specific context adds value to PAR by helping a researcher anticipate and critically examine the implications of PAR's drive towards action (Reid and Frisby, 2008) . When documenting their case study in blogs, presentations, or publications, an NLP researcher will discuss potential applications of the research beyond the case study's context, anticipating potential benefits and harms. Potential harms may outweigh potential benefits, making the best decision not to build an NLP system (Crawford, 2017) .",
"cite_spans": [
{
"start": 250,
"end": 285,
"text": "(Martin and Hanington, 2012, p. 28)",
"ref_id": null
},
{
"start": 500,
"end": 523,
"text": "(Reid and Frisby, 2008)",
"ref_id": "BIBREF39"
},
{
"start": 849,
"end": 865,
"text": "(Crawford, 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Behavioral Stereotype",
"sec_num": null
},
{
"text": "When applying NLP methods in the bias-aware methodology, an NLP researcher should acknowledge biases found with any algorithms they use in their data statement. For example, when applying word embeddings, an NLP researcher could look to Bolukbasi et al. 2016 2019for understanding how these methods have been shown to exhibit gender bias. If an NLP researcher will train an algorithm on their language source, research documentation will describe the training process and results. If the research includes annotation, documentation will include instructions given to annotators. For NLP research on algorithms, we recommend considering approaches to making bias transparent, in addition to reducing the biased behavior of algorithms. Research from Kaneko et al. (2019) and Zhao et al. (2018) on mitigating bias in word embeddings provide starting points for algorithmic bias research, as their methods have yet to be evaluated in diverse contexts. However, Gonen and Goldberg (2019) have shown the limits of debiasing word embeddings. We argue that the situated nature of data, and thus the situated nature of knowledge drawn from data, makes the elimination of bias impossible. Investigating how to make bias transparent provides an alternative direction for NLP researchers interested in mitigating bias in NLP systems. Whether making bias transparent or reducing biased behavior of algorithms, NLP researchers following the bias-aware methodology will collaborate with relevant disciplinary experts and minoritized stakeholders in determining how to evaluate an algorithm for bias.",
"cite_spans": [
{
"start": 748,
"end": 768,
"text": "Kaneko et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 773,
"end": 791,
"text": "Zhao et al. (2018)",
"ref_id": "BIBREF51"
},
{
"start": 957,
"end": 982,
"text": "Gonen and Goldberg (2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applying NLP Methods",
"sec_num": "5.3"
},
{
"text": "To support the training of algorithms in diverse contexts, NLP research on datasets will define the context of its language source's collection and annotation. An NLP researcher will provide data statements to inform algorithms' training and evaluation, ensuring reproducibility and avoiding unintended harms from misapplications of algorithms (Bender and Friedman, 2018) . Similarly, dataset research will include disciplinary experts and minoritized stakeholders in datasets' creation, annotation, and evaluation.",
"cite_spans": [
{
"start": 344,
"end": 371,
"text": "(Bender and Friedman, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applying NLP Methods",
"sec_num": "5.3"
},
{
"text": "In this section we describe how we are implementing the bias-aware NLP research methodology in a case study on bias in metadata descriptions from the online archival catalog of the Centre for Research Collections at the University of Edinburgh (\"the Archive\"). 15 For consistency with the outline of a bias-aware methodology ( \u00a75), we group our case study into the same three activities, explaining our examination of power relations ( \u00a76.1), our bias of focus ( \u00a76.2), and then our application of NLP methods ( \u00a76.3). Each subsection includes accomplished, ongoing, and planned future work. To demonstrate how we execute the three activities in parallel, as proposed in \u00a75, we first provide a chronological overview.",
"cite_spans": [
{
"start": 261,
"end": 263,
"text": "15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "6"
},
{
"text": "Initially, our research began with information gathering linked to a participatory action research (PAR) methodology. We reviewed literature on bias in NLP and archives, and on digital humanities research (collaborations between technologists and humanists that often analyze data sources with historical language). We also met with employees at the Archive to better understand the Archive's policies, which guide the writing of metadata descriptions and documentation practices, such as the metadata standards used. The employees described how they are proactively challenging the inherited metadata and inherited practices of the Archive, which date back to the 16 th century. After the literature review and meeting we began writing data statements for the Archive's metadata descriptions and for our research. Due to the limited research on NLP methods applied to archival metadata, and limited large-scale analysis of metadata descriptions, we undertook a pilot data project, 16 walking through the process of extracting metadata descriptions from a single archival collection, adding historical context to our documentation of the extracted descriptions, and calculating corpus analytics (using ElementTree 17 and NLTK 18 in a Jupyter Notebook 19 ). After establishing a workflow to extract metadata descriptions from the Archive's online catalog, we again met employees at the Archive to discuss the challenges that biased language poses to their work and to their visitors. This meeting helped us add to our data statements, identify stakeholders in our research, and begin describing the stakeholders' power relations. Moreover, the meeting confirmed the value of an NLP system that detects and classifies bias, as the Archive does not currently have a systematic approach to measuring bias in its catalog's metadata descriptions.",
"cite_spans": [
{
"start": 982,
"end": 984,
"text": "16",
"ref_id": null
},
{
"start": 1251,
"end": 1253,
"text": "19",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "6"
},
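The pilot data project described above extracts descriptions from a single collection's XML and computes corpus analytics with ElementTree and NLTK. A minimal sketch of that kind of workflow follows; the file name and the descriptive tag name are hypothetical placeholders, not the Archive's actual export format.

```python
# Sketch of the pilot workflow: parse one collection's XML export and compute
# simple corpus analytics. "collection_metadata.xml" and the "scopecontent"
# tag are hypothetical placeholders for the Archive's actual fields.
import xml.etree.ElementTree as ET
import nltk

nltk.download("punkt", quiet=True)  # sentence/word tokenizer models

tree = ET.parse("collection_metadata.xml")
descriptions = [el.text.strip() for el in tree.iter("scopecontent")
                if el.text and el.text.strip()]

corpus = " ".join(descriptions)
words = nltk.word_tokenize(corpus)
sentences = nltk.sent_tokenize(corpus)

print(f"{len(descriptions)} descriptions, {len(sentences)} sentences, {len(words)} words")
print(nltk.FreqDist(w.lower() for w in words if w.isalpha()).most_common(10))
```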
{
"text": "In our execution of the bias-aware methodology, we study power relations among five stakeholders:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Researcher and Archive Power Relations Stakeholder Identification",
"sec_num": "6.1"
},
{
"text": "(1) us (the authors) as researchers, (2) the Archive's employees, (3) the Archive (as an institution), (4) people represented in metadata descriptions, and (5) the Archive's visitors. Literature on power relations in archives and the wider gallery, library, archive, and museum (GLAM) sector (Adler, 2017; Caswell and Cifor, 2019; Hauswedell et al., 2020; McPherson, 2012; Risam, 2015) informed our identification of these stakeholders. We recorded our understanding of their power relations in our data statement (Appendix A) and power relations document (Appendix C), and will continue expanding and revising these documents until our research ends. 15 Metadata documents information about collections of cultural heritage records. Archival catalogs have numerous metadata fields that contain descriptions written by people who archives hire to document their collection items. These descriptions are the language source we refer to as archival metadata descriptions (Angel, Christine M., and Caroline Fuchs, 2018 ",
"cite_spans": [
{
"start": 292,
"end": 305,
"text": "(Adler, 2017;",
"ref_id": "BIBREF0"
},
{
"start": 306,
"end": 330,
"text": "Caswell and Cifor, 2019;",
"ref_id": "BIBREF8"
},
{
"start": 331,
"end": 355,
"text": "Hauswedell et al., 2020;",
"ref_id": "BIBREF22"
},
{
"start": 356,
"end": 372,
"text": "McPherson, 2012;",
"ref_id": "BIBREF30"
},
{
"start": 373,
"end": 385,
"text": "Risam, 2015)",
"ref_id": "BIBREF40"
},
{
"start": 652,
"end": 654,
"text": "15",
"ref_id": null
},
{
"start": 969,
"end": 1015,
"text": "(Angel, Christine M., and Caroline Fuchs, 2018",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Researcher and Archive Power Relations Stakeholder Identification",
"sec_num": "6.1"
},
{
"text": "In line with PAR, we collaborate with stakeholders at the Archive to learn about their perception of biased language in metadata descriptions, as well as challenges and potential approaches to addressing the bias. Thus far, we facilitated a group discussion with stakeholders who had a range of roles, including technical, curatorial, administrative, servicing, and documenting responsibilities; and a range of GLAM work experience, from one year to over 20 years. The group discussion informs our understanding of the range of attitudes towards bias and neutrality in archival documentation. We are preparing a survey to study how the Archive's attitudes about bias and neutrality relate to those of other UK archives. Results of the group discussion enabled us to draft answers to the power relations questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stakeholder Collaboration",
"sec_num": null
},
{
"text": "To fully answer the power relations questions, we are researching historical changes in the structure of metadata standards used at the Archive. Our stakeholders include people who documented the Archive's collections but no longer work there, and people who are written about in the Archive's metadata, which document material dating back to the 1 st century AD. To study power relations among these unavailable stakeholders, we are writing a data biography (Appendix B) for the metadata descriptions with the Archive. The data biography informs our understanding of the power relations at play in our research, which in turn informs our data statement and technical decisions about NLP methods to apply.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unavailable Stakeholders",
"sec_num": null
},
{
"text": "Our NLP research focuses on identifying types of contextual gender bias from archival metadata descriptions, complementing Hitti et al.'s (2019) focus on identifying structural gender bias. We adopt the their taxonomy of gender bias (illustrated in Table 1 ). The taxonomy has two subtypes of contextual bias: behavioral stereotypes and societal stereotypes. We may expand on definitions and subtypes of contextual bias during our research into simplistic, hyperbolic language in metadata descriptions that indicates the presence of stereotypes, because historical text often contains spellings and syntax (among other linguistic characteristics) different to the modern text on which NLP tools have been developed (Casey et al., 2020) . In the context of the Archive, gender biased metadata descriptions may cause representational harms, because the Archive supports information access, circulating ideas documented in its metadata when users search its online catalog. Societal and behavioral stereotypes present in the Archive's metadata descriptions may negatively impact perceptions of people represented in the descriptions. We are researching the types of gender bias in the descriptions, and ways to measure such biases, in an effort to support the Archive in mitigating harms from biased metadata descriptions.",
"cite_spans": [
{
"start": 715,
"end": 735,
"text": "(Casey et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contextual Gender Bias as a Focus",
"sec_num": "6.2"
},
{
"text": "The archival metadata descriptions we use as this case study's language source are from the Archive's public, online catalog. We obtained descriptive metadata fields as Extensible Markup Language (XML) data using the Open Archives Initiative -Protocol for Metadata Harvesting (OAI-PMH), 20 filtered the metadata for descriptive fields relevant to our research, and then removed duplicate descriptions. Table 2 summarizes the resulting corpus. The Archive organizes metadata hierarchically, creating metadata for collections, subcollections, and items; we group subcollection and item descriptions within their overarching collection. Currently, we are exploring how to further filter our extracted descriptions through a combination of historical research on archival metadata standards and corpus analytics of terms surrounding gender-related words (as in the third use case from Casey et al. (2020) ). For example, the Archive uses Library of Congress Subject Headings (LCSH), which use terms offensive to certain social groups: Adler (2017) discusses how LCSH represents people who do not identify with binary genders or do not conform to heterosexuality as \"deviations.\" To further filter our extracted metadata descriptions, we can associate the descriptions with the dates they were written and look for offensive terms that were used in metadata standards during those dates. Our data statement further details this process. (Loper and Bird, 2002) .",
"cite_spans": [
{
"start": 881,
"end": 900,
"text": "Casey et al. (2020)",
"ref_id": "BIBREF7"
},
{
"start": 1432,
"end": 1454,
"text": "(Loper and Bird, 2002)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 402,
"end": 409,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Information Extraction for Classification Information Extraction Methods",
"sec_num": "6.3"
},
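One way to explore the terms surrounding gender-related words, as mentioned above, is a simple concordance over the extracted descriptions. The sketch below assumes a deduplicated list of descriptions and an illustrative keyword list; neither reflects the authors' actual filtering criteria.

```python
# Illustrative concordance over gender-related terms in the extracted
# descriptions. The keyword set and sample descriptions are placeholders.
import nltk

nltk.download("punkt", quiet=True)

GENDER_TERMS = {"he", "she", "his", "her", "man", "woman", "wife", "husband"}
descriptions = [
    "Papers of the wife of the professor, chiefly household accounts.",
    "Letters from the surgeon to his students, 1846-1850.",
]  # placeholder for the deduplicated catalogue descriptions

tokens = [t.lower() for d in descriptions for t in nltk.word_tokenize(d)]
text = nltk.Text(tokens)
for term in sorted(GENDER_TERMS & set(tokens)):
    print(f"--- {term} ---")
    text.concordance(term, width=60, lines=3)
```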
{
"text": "With our case study, we aim to create and annotate a gold standard dataset on which we will train a classification algorithm to identify types of gender bias in text. We will perform the annotations as part of the research for a Doctor of Philosophy project. Due to ethical concerns regarding the use of crowdsourcing platforms (Gleibs, 2017) , anyone employed to contribute to the annotation work will be paid at least minimum wage. To guide the annotation process and ensure the reproducibility of our research, we will document instructions we follow to annotate contextual gender bias. We will collaborate with the Archive and a gender studies expert to write these instructions; we are in the process of finding a language expert with whom to collaborate. When we publish the results of our research, we will provide documentation of the annotation instructions, data statements, data biography, and power relations questions for our NLP research. After creating a gold standard dataset annotated for contextual gender bias, we plan to train a discriminative classifier on the dataset using supervised learning. We will then experiment with and evaluate how the classifier differentiates between types of contextual gender bias in archival metadata descriptions, and report openly on the results of this research.",
"cite_spans": [
{
"start": 328,
"end": 342,
"text": "(Gleibs, 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotations to Inform Classification",
"sec_num": null
},
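As a rough illustration of the planned supervised-learning step, the sketch below trains a simple discriminative classifier (TF-IDF features with logistic regression) on a handful of placeholder examples labelled with the contextual subtypes from Hitti et al.'s taxonomy. The model choice and examples are ours for illustration, not the authors' final experimental design.

```python
# Toy sketch of training a discriminative classifier on descriptions annotated
# for contextual gender bias. Labels follow the taxonomy's contextual subtypes
# plus "none"; the examples and model choice are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The event was sports-themed for all the fathers volunteering.",
    "All girls are sensitive.",
    "Minutes of the university court, 1901-1905.",
    "Correspondence concerning library acquisitions.",
]
labels = ["societal_stereotype", "behavioral_stereotype", "none", "none"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["A lawyer must always carry his phone."]))
```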
{
"text": "In this paper we propose a bias-aware methodology for NLP research to mitigate harms from biased NLP systems. The methodology integrates practices and methods from NLP, ML, data science, gender and feminist studies, linguistics, and design. Due to the numerous types of bias, the intersectional nature of oppression, and the possibility of direct and indirect harms from bias, detecting and measuring bias is a complex process. Our methodology encourages NLP researchers to situate their work in case studies, explicitly describing the context of and stakeholders in their research. We advise NLP researchers to build the time and resources needed to undertake such work into project plans, and to put eliminating bias at the center of their research. Documenting instances of bias and their associated power relations will enable the NLP community to look for patterns across different contexts that use NLP systems. Amassing case studies in order to look for such patterns will guide NLP research towards generalizable approaches to bias mitigation, approaches that do not unintentionally minoritize people whose perspectives were unknowingly excluded. Online Catalog (version 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We (the research team) will use the extracted metadata descriptions to create a gold standard dataset annotated for contextual gender bias. We adopt Hitti et al.'s definition of contextual gender bias in text: written language that connotes or implies an inclination or prejudice against a gender through the use of gender-marked keywords and their context (2019, p. 10-11). A member of our research team has extracted text from three descriptive metadata fields for all collections, subcollections, and items in the Archive's online catalog. One of these fields provide information about the people, time period, and places associated with the collection, subcollection, or item to which the field belongs. Another field summarizes the contents of the collection, subcollection, or item to which the field belongs. The last field records the person who wrote the text for the collection, subcollection, or item's descriptive metadata fields, and the date the person wrote the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Curation Rationale",
"sec_num": null
},
{
"text": "Using the dataset of extracted text, we will experiment with training a discriminative classification algorithm to identify types of contextual gender bias. Additionally, the dataset will serve as a source of annotated, historical text to complement datasets composed of contemporary texts (i.e. from social media, Wikipedia, news articles).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Curation Rationale",
"sec_num": null
},
{
"text": "To Do: We will group the metadata descriptions based on the collection to which they're associated, rather than segmenting by sentence or paragraph for annotation. Prior to making annotations for contextual gender bias, a member of our research team will review a subset of the metadata descriptions to determine whether all the descriptions should be annotated or whether the dataset should be filtered to include only a portion of the extracted metadata descriptions. Section B. in our data biography describes our plans for filtering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Curation Rationale",
"sec_num": null
},
{
"text": "We chose to use archival metadata descriptions as a data source because:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Curation Rationale",
"sec_num": null
},
{
"text": "1. Metadata descriptions in the Archive's catalog (and most GLAM catalogs) are freely, publicly available online 2. GLAM metadata descriptions have yet to be analyzed at large scale using natural language processing (NLP) methods and, as records of cultural heritage, the descriptions have the potential to provide historical insights on changes in language and society (Welsh, 2016) 3. GLAM metadata standards are freely, publicly available, often online, meaning we can use historical changes in metadata standards used in the Archive to guide large-scale text analysis of changes in the language of the metadata descriptions over time 4. The Archive's policy acknowledges its responsibility to address legacy descriptions in its catalogs that use language considered biased or otherwise inappropriate today 21",
"cite_spans": [
{
"start": 370,
"end": 383,
"text": "(Welsh, 2016)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Curation Rationale",
"sec_num": null
},
{
"text": "The metadata descriptions extracted from the Archive's catalog are written in British English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Language Variety",
"sec_num": null
},
{
"text": "We (the research team) are of American, German, and Scots nationalities, and are three females and one male. We all work primarily as academic researhers in the disciplines of natural language processing, data science, data visualization, human-computer interaction, digital humanities, and digital cultural heritage. Additionally, one of us is auditing an online course on feminist and social justice studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Producer Demographic",
"sec_num": null
},
{
"text": "For the research team who will write the annotation rule book, please refer to the previous section. A gender, sexuality, and social justice studies expert based at a North American university will collaborate with us (the research team) on writing the annotation rule book. One member of our research team will annotate the metadata in collaboration with a second annotator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.4 Annotator Demographic",
"sec_num": null
},
{
"text": "Ongoing: we are seeking a second annotator with a background in gender studies, linguistics, or the information sciences; or with GLAM work experience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.4 Annotator Demographic",
"sec_num": null
},
{
"text": "The metadata descriptions extracted from the Archive's online catalog using Open Access Initiative -Protocol for Metadata Harvesting (OAI-PMH). For OAI-PMH, an institution (in this case, the Archive) provides a URL to its catalog that displays its catalog metadata in XML format. A member of our research team wrote scripts in Python to extract three descriptive metadata fields for every collection, subcollection, and item in the Archive's online catalog (the metadata is organized hierarchically). Using Python and its Natural Language Toolkit (NLTK) library, the researcher removed duplicate sentences and calculated that the extracted metadata descriptions consist of a total of 966,763 words and 68,448 sentences across 1,231 collections. The minimum number of words in a collection is 7 and the maximum, 156,747, with an average of 1,306 words per collection and standard deviation of 7,784 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.5 Speech or Publication Situation",
"sec_num": null
},
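The corpus statistics reported above (total words and sentences, plus the per-collection minimum, maximum, mean, and standard deviation) can be reproduced along the following lines; the sample data are placeholders rather than the Archive's real metadata.

```python
# Sketch of the per-collection corpus statistics described above.
# The collections dictionary is a placeholder for the harvested descriptions.
import statistics
import nltk

nltk.download("punkt", quiet=True)

collections = {
    "Coll-1": ["Letters and journals of an Edinburgh botanist."],
    "Coll-2": ["Minutes of the senate.", "Accounts, 1790-1810."],
}

word_counts, sentence_total = [], 0
for descriptions in collections.values():
    text = " ".join(descriptions)
    word_counts.append(len(nltk.word_tokenize(text)))
    sentence_total += len(nltk.sent_tokenize(text))

print("collections:", len(collections))
print("words:", sum(word_counts), "sentences:", sentence_total)
print("min/max per collection:", min(word_counts), max(word_counts))
print("mean:", round(statistics.mean(word_counts), 1),
      "stdev:", round(statistics.stdev(word_counts), 1))
```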
{
"text": "Please refer to the Provenance Appendix for information on the Speech or Publication Situation of all of the Archive's metadata descriptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.5 Speech or Publication Situation",
"sec_num": null
},
{
"text": "Upon extracting the metadata descriptions using OAI-PMH, the XML tags were removed so that the total words and sentences of the metadata descriptions could be calculated to ensure the text source provided a sufficiently large dataset. A member of our research team has grouped all the extracted metadata descriptions by their collection (the \"fonds\" level in the XML data), preserving the context in which the metadata descriptions were written and will be read by visitors to the Archive's online catalog.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.6 Data Characteristics",
"sec_num": null
},
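Grouping descriptions under their overarching collection (the "fonds" level) might look like the following sketch; the XML layout, tag and attribute names, and sample records are hypothetical stand-ins for the harvested OAI-PMH data.

```python
# Sketch of grouping descriptions under their fonds-level collection while
# dropping duplicates. Tag/attribute names and records are hypothetical.
import xml.etree.ElementTree as ET
from collections import defaultdict

xml_data = """
<records>
  <record fonds="Coll-1"><description>Letters of a surgeon.</description></record>
  <record fonds="Coll-1"><description>Letters of a surgeon.</description></record>
  <record fonds="Coll-2"><description>Minutes of the senate.</description></record>
</records>
"""

by_fonds = defaultdict(list)
for record in ET.fromstring(xml_data).iter("record"):
    desc = record.findtext("description", default="").strip()
    if desc and desc not in by_fonds[record.get("fonds")]:
        by_fonds[record.get("fonds")].append(desc)

for fonds, descriptions in by_fonds.items():
    print(fonds, "->", len(descriptions), "unique description(s)")
```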
{
"text": "As a member of our research team extracts and filters metadata descriptions from the Archive's online catalog, they write assertions and tests to ensure as best as possible that metadata isn't being lost or unintentionally changed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.7 Data Quality",
"sec_num": null
},
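The assertion-style checks mentioned above could be as simple as the sketch below, which verifies that a filtering step neither adds records nor alters descriptions beyond whitespace; the filter function and sample data are hypothetical.

```python
# Sketch of consistency checks around a filtering step: nothing gained,
# nothing unexpectedly changed. The filter and sample data are placeholders.
def filter_descriptions(descriptions):
    """Keep non-empty descriptions, stripped of surrounding whitespace."""
    return [d.strip() for d in descriptions if d and d.strip()]

raw = ["  Letters of a botanist. ", "", "Minutes of the senate."]
filtered = filter_descriptions(raw)

stripped_raw = {d.strip() for d in raw}
assert len(filtered) <= len(raw), "filtering must not add descriptions"
assert all(f in stripped_raw for f in filtered), "descriptions must not be altered"
print(f"kept {len(filtered)} of {len(raw)} descriptions")
```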
{
"text": "Please refer to the Provenance Appendix for information on the Data Quality of all of the Archive's metadata descriptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.7 Data Quality",
"sec_num": null
},
{
"text": "A.9 Provenance Appendix Data Statement for Metadata Descriptions from the Archive's Online Catalog (version 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Not applicable",
"sec_num": null
},
{
"text": "The Archive's policy describes a commitment to develop collections that are as inclusive and diverse as possible, keeping up with social changes and looking for opportunities to better represent communities of people. Additionally, the Archive's policy states that the Archive aims to make its collections accessible to as many people as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curation Rationale",
"sec_num": null
},
{
"text": "To Do: If available, review historical policy documents to understand how the Archive's curation rationale has evolved since its founding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curation Rationale",
"sec_num": null
},
{
"text": "The Archive's metadata descriptions are written in British English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Variety",
"sec_num": null
},
{
"text": "People who write metadata descriptions to document the Archive's collections include employees, interns, and volunteers. Employees have received professional training in archival documentation, in addition to training at the Archive. Interns and volunteers are typically students studying information sciences, museology, history, or related disciplines who have also received training at the Archive. The Archive began in the 16 th century, so the metadata descriptions in its online catalog date from that time period up through the present day (the Archive continues to collect and document cultural heritage records).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Producer Demographic",
"sec_num": null
},
{
"text": "Additional demographic information on all those who have written the Archive's metadata descriptions is limited, however the Archive is based in the United Kingdom, meaning the perspectives of those who wrote the descriptions is most likely English, Irish, Scottish, British, or European. The Archive is closely associated with a research university, so interns and volunteers who write the Archive's metadata descriptions are likely to have received, or be in the process of receiving, higher education degrees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Producer Demographic",
"sec_num": null
},
{
"text": "The metadata descriptions in the Archive's online catalog document collections created by a university associated with the Archive and acquired or donated from other people and organizations. The Archive's earliest metadata descriptions were written in the 16 th century; metadata descriptions continue to be written today.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech or Publication Situation",
"sec_num": null
},
{
"text": "The goal of the metadata descriptions is to help people find primary source material in the Archives. At the time most of the Archive's metadata descriptions were written, the descriptions were intended for employees of the Archive, who would help visitors locate primary source material. Circa 2015, employees of the Archive began writing metadata descriptions with visitors included in their intended audience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech or Publication Situation",
"sec_num": null
},
{
"text": "Current employees at the Archives have stated that they would be happy for the metadata descriptions they write to be viewed as works in progress, because the Archive could never have enough time to document all its collection items completely. Moreover, often information about collections items is impossible to know due to their historical nature and lack of accompanying documentation, so the metadata descriptions will always be incomplete.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech or Publication Situation",
"sec_num": null
},
{
"text": "The metadata descriptions include information available from the cultural heritage records they describe, from any available documentation that accompanied those records when the Archive acquired them, from authorities such as the Library of Congress Subject Headings, and from other documentation resources considered trustworthy among archives (a more extensive list is provided here).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech or Publication Situation",
"sec_num": null
},
{
"text": "Beginning circa 2017, people documenting collections in the Archive have written metadata descriptions according to the General International Standard Archival Description (ISAD(G)). Past metadata descriptions were written according to library metadata standards. Metadata descriptions may include contextual information about the people, places, and time periods relevant to the collection items, as well as the date a description was written and who wrote the description. Though all of this descriptive information ideally exists for a collection item, some collection items do not have this complete of a description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Characteristics",
"sec_num": null
},
{
"text": "To Do: If possible, determine which library metadata standards were used for documentation prior to 2017. We (the research team) collected the data using the Open Access Initiative -Protocol for Metadata Harvesting (OAI-PMH).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Characteristics",
"sec_num": null
},
{
"text": "Employees, interns, and volunteers at the Archive who wrote the metadata descriptions collected information to include in the descriptions from documentation accompanying the cultural heritage record(s) they were describing, from the cultural heritage records themselves, from authorities such as Library of Congress Subject Headings, and from other trusted sources for archival documentation. Examples of other trusted sources are available here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Characteristics",
"sec_num": null
},
{
"text": "Where possible, we will use dates associated with the descriptions to contextualize their text in relation to historical changes in metadata structures. For example, the metadata standard Library of Congress Subject Headings (LCSH) once used the term \"Jewish Question\" instead of the current term \"Jews,\" so GLAM who use LCSH may have descriptions in their catalogs that use the historical term now considered biased. After historical analysis of metadata standards the Archive uses, we will filter our collected text to include those that reference groups of people who have historically been described stereotypically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Characteristics",
"sec_num": null
},
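{
"text": "As an illustration of this filtering step, the sketch below flags descriptions containing terms from a placeholder lexicon that reuses the LCSH example above; the lexicon we will actually use will come out of the historical analysis of the metadata standards the Archive uses, and flagged descriptions would go to an annotator rather than being treated as biased automatically.

# A minimal sketch of filtering descriptions against a placeholder lexicon of historical terms.
HISTORICAL_TERMS = ['jewish question']  # illustrative only

def flag_descriptions(descriptions, terms=HISTORICAL_TERMS):
    # Return (description, matched terms) pairs for closer review.
    flagged = []
    for description in descriptions:
        lowered = description.lower()
        hits = [term for term in terms if term in lowered]
        if hits:
            flagged.append((description, hits))
    return flagged

if __name__ == '__main__':
    sample = ['Pamphlets concerning the Jewish question, 1880-1900.',
              'Minute books of the student union.']
    for description, hits in flag_descriptions(sample):
        print(hits, '->', description)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Characteristics",
"sec_num": null
},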
{
"text": "The Archive and the university to which it is associated collected some of the cultural heritage records and the accompanying documentation that informs the records' metadata descriptions. For other cultural heritage records and their accompanying documentation, individual collectors gathered the records and wrote their documentation, which employees, interns, and volunteers used to write descriptive metadata for the records in the Archive's catalog.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.3 Who Collected or Created the Data?",
"sec_num": null
},
{
"text": "The Archive has existed since the 16 th century, so its directors will each have established different policies and goals for acquiring and documenting cultural heritage records. The latest policy document for the Archive includes a statement about diversity, inclusion and accessibility that describes the Archive's commitment to providing representative collections for local, national, and international audiences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.3 Who Collected or Created the Data?",
"sec_num": null
},
{
"text": "The Archive's policy explains that it documents cultural heritage records in its catalog so that researchers can find the records and use them as primary source material to guide their work. Current employees of the Archive reiterated the goal of discoverability as the main reason for writing metadata descriptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.4 Why was the Data Collected or Created?",
"sec_num": null
},
{
"text": "Individuals and institutions who have donated their collections to the Archive had personal reasons motivating their choices of records to save. A directory of the Archive's collections contains information about select individuals and institutions that suggest their reasons for saving the records they did. Information in the metadata descriptions themselves may also provide insight on why their associated records were collected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.4 Why was the Data Collected or Created?",
"sec_num": null
},
{
"text": "Among the metadata descriptions we extracted that include a year documenting when they were written, the years show that the descriptions were written from the 19 th century up through the 21 st century. Further research is needed to determine how early the extracted metadata descriptions without a year were written. 5. Visitors to the Archive, as they will read the metadata descriptions used as this research's text source when using the Archive's online catalog",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.5 When was the Data Collected or Created?",
"sec_num": null
},
{
"text": "Limitations: Due to the length of the text and the historical nature of the metadata descriptions we use from the Archive's catalog, we do not have access to every person represented in the metadata descriptions. However, the Archive does have a take-down policy that we will follow with our text source to respect the people represented in metadata descriptions as best as possible: if a person requests that information about them or someone they are connected to be removed from or anonymized in the catalog, the Archive will comply. To the best of our ability, we will make sure that the metadata descriptions we use as the text source for our research do not include information that a visitor has requested the Archive take down.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.5 When was the Data Collected or Created?",
"sec_num": null
},
{
"text": "Who or what is included in the research?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "Who:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "\u2022 Current employees of the Archive: To account for intragroup differences, we include employees with different years of experience and employees working in several positions within the hierarchy of job roles in the Archive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "\u2022 Us (the research team): The size of the team is small enough that all members are included, meaning intragroup differences are accounted for by default.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "To Do: Find visitors to the Archive who I can speak to about their experience reading its catalog's metadata descriptions. To account for intragroup differences among visitors, we will seek out a selection of visitors with as diverse of identity characteristics as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "What: Ongoing work includes conducting historical research to understand the context in which the metadata descriptions were written. For example, employees at the Archive stated that for many years, people wrote metadata descriptions with the aim of being as neutral and objective as possible, however the latest generation of archivists is challenging this, arguing that neutrality isn't possible and encouraging transparency instead.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "Who or what is excluded from the research?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "Who:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "\u2022 Past employees of the Archive",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "\u2022 People represented in the Archive's cultural heritage records",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "\u2022 The majority of the Archive's visitors (the research only has the capacity to include a selection of visitors in user research and participatory action research activities)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "What: The historical context of metadata descriptions written before my lifetime",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "To Do: Determine if policy guidelines for the Archive since its beginnings in the 16 th century are available to understand how it perceived itself and what drove its collection and documentation practices. Otherwise, the historical existence of the Archive is also excluded form the research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "How will the research define knowledge?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "The research will define knowledge as multifaceted. We (the research team) will draw on the disciplines of gender studies and linguistics to manually identify and annotate types of contextual gender bias in metadata descriptions. The research will share the annotated dataset as one interpretation of gender bias, recognizing that different people have different experiences of oppression that cause variations in attitudes towards words or phrases. We will use the annotated dataset to train a discriminative classification algorithm. The types of gender bias that the algorithm identifies will be presented as potentially biased text, requiring verification from a person working with the text to decide whether the text should be considered biased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
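{
"text": "As a sketch of how the classifier's output will be framed, the example below trains a simple discriminative model (a TF-IDF and logistic regression pipeline from scikit-learn) on a handful of invented annotations and reports its predictions as potentially biased text for an annotator to verify. The data, labels, and model choice are illustrative assumptions, not the study's final design.

# A minimal sketch, with invented annotations and an arbitrary model choice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    'Papers of Mrs John Smith, wife of the professor.',   # annotated as gender biased
    'Correspondence of the physicist Jane Smith.',        # annotated as not biased
    'He was a brilliant surgeon; she kept his notes.',    # annotated as gender biased
    'Lecture notes on organic chemistry.',                # annotated as not biased
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Predictions are surfaced as potentially biased text for a person to verify, not as final judgments.
for description in ['Records of the professor and his devoted wife.']:
    probability = model.predict_proba([description])[0][1]
    print('potentially biased (p = %.2f), needs annotator review: %s' % (probability, description))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},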
{
"text": "Who has agency and who can be empowered?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "We (the research team) have agency as the people applying NLP methods to the Archive's metadata descriptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "The employees of the Archive can be empowered through participatory action research, with collaborative activities in which we situate the employees as partners in the research and as experts on archival practices and metadata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "The employees of the Archive have determined that people who do not identify as male are underrepresented in the Archive's collections and thus those collections' metadata descriptions. We focus our bias identification and classification efforts on gender bias to explore how we can empower people who do not identify as male through the process and outputs of our NLP research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "To Do: Provide examples of how our research process and outputs empowers people who do not identify as male.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Power Relations Questions",
"sec_num": null
},
{
"text": "Situating our research in the UK, we reference statistics from the UK's Higher Education Statistical Agency (HESA). 5 www.hesa.ac.uk/news/16-01-2020/sb255-higher-education-student-statistics/ subjects.6 http://www.wisecampaign.org.uk/statistics/2019-workforce-statistics-onemillion-women-in-stem-in-the-uk/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The authors report that men showed no difference in their intrinsic goal orientation and task value scores with masculinegeneric versus gender-neutral language in the questionnaires; impacts on people who do not identify as either a man or a woman are unknown as the study groups participants into these two gender categories(Vainapel et al., 2015).8 See HCI Guidelines for Gender Equity and Inclusivity at www.morgan-klaus.com/gender-guidelines.html. 9 While earlier paragraphs in the paper indicate a focus on gender bias and stereotypes related to professional occupations, the authors do not define bias or gender bias, nor do they identify the types of systems to which they refer.10 Intersectionality refers to the way in which different combinations of identity characteristics from one individual to another result in different experiences of privilege and oppression(Crenshaw, 1991). In feminist thought, multiple viewpoints are needed to understand reality; viewpoints that claim to be objective are, in fact, subjective, because knowledge is the result of human interpretation(Haraway, 1988).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We adapted these questions from Moore's work on feminist community archiving(Moore, 2018). 12 \"A connected series of utterances by which meaning is communicated\" (Oxford English Dictionary, 2013b).13 We All Count has a free, interactive data biography tool at wac-survey-rails.herokuapp.com. 14 See the Media Bias Ratings at www.allsides.com/media-bias/media-bias-ratings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.openarchives.org/OAI/2.0/openarchivesprotocol.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Archive is not alone; across the GLAM sector, institutions acknowledge and are exploring ways to address legacy language in their catalogs' descriptions. The \"Note\" in We Are What We Steal provides one example: https://dxlab. sl.nsw.gov.au/we-are-what-we-steal/notes/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This paper describes work conducted in collaboration with Rachel Hosker and her team at the Centre for Research Collections (CRC) at the University of Edinburgh. Hosker and her team are activists seeking to change archives' descriptive language and practices to more accurately and inclusively represent the diverse populations for whom their collections are intended. Before we joined them as collaborators, they were discussing and making changes to the Archive's descriptive language and practices. We are grateful for the willingness of Hosker and her team at the CRC to collaborate with us, bringing together the knowledge and practices of the archival and NLP communities to mitigate harms from biased language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Appendix A Data Statement for Metadata Descriptions Extracted from the Archive's",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "The metadata descriptions in the Archive's online catalog consists of manually entered data, some of which was initially written in digital form, and some of which was initially written on paper and has since been manually typed into digital form.To Do: Determine how much the metadata descriptions are born-digital versus re-written digitally, and when the Archive transitioned from writing metadata descriptions on paper to writing metadata descriptions digitally (typing manually).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Quality",
"sec_num": null
},
{
"text": "None Appendix B Data Biography for Metadata Descriptions Extracted from the Archive's",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Provenance Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Introduction: A Book is Being Cataloged",
"authors": [
{
"first": "Melissa",
"middle": [],
"last": "Adler",
"suffix": ""
}
],
"year": 2017,
"venue": "Cruising the Library: Perversities in the Organization of Knowledge",
"volume": "",
"issue": "",
"pages": "1--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melissa Adler. 2017. Introduction: A Book is Being Cataloged. In Cruising the Library: Perversities in the Organization of Knowledge, pages 1-26. Fordham University Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Organization, representation and description through the digital age: information in libraries, archives and museums",
"authors": [
{
"first": "Christine",
"middle": [
"M"
],
"last": "Angel",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Fuchs",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angel, Christine M., and Caroline Fuchs, editor. 2018. Organization, representation and description through the digital age: information in libraries, archives and museums. Walter de Gruyter GmbH, Berlin; Boston.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Batya",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "6",
"issue": "",
"pages": "587--604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender and Batya Friedman. 2018. Data Statements for Natural Language Processing: Toward Mitigat- ing System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics, 6:587-604, December.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Language (Technology) is Power: A Critical Survey of \"Bias\" in NLP",
"authors": [
{
"first": "Su",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Solon",
"middle": [],
"last": "Barocas",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5454--5476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (Technology) is Power: A Crit- ical Survey of \"Bias\" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 30th International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4356--4364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to Com- puter Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 4356-4364.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Theories of Discourse as Theories of Gender: Discourse Analysis in Language and Gender Studies",
"authors": [
{
"first": "Mary",
"middle": [],
"last": "Bucholtz",
"suffix": ""
}
],
"year": 2003,
"venue": "The Handbook of Language and Gender",
"volume": "",
"issue": "",
"pages": "43--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mary Bucholtz. 2003. Theories of Discourse as Theories of Gender: Discourse Analysis in Language and Gender Studies. In The Handbook of Language and Gender, pages 43-68, Oxford, GB, January. Blackwell Publishing Ltd.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186, April.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Plague Dot Text: Text mining and annotation of outbreak reports of the Third Plague Pandemic",
"authors": [
{
"first": "Arlene",
"middle": [],
"last": "Casey",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Tobin",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Iona",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Engelmann",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Alex",
"suffix": ""
}
],
"year": 2020,
"venue": "Computing Research Repository",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.01415"
]
},
"num": null,
"urls": [],
"raw_text": "Arlene Casey, Mike Bennett, Richard Tobin, Claire Grover, Iona Walker, Lukas Engelmann, and Beatrice Alex. 2020. Plague Dot Text: Text mining and annotation of outbreak reports of the Third Plague Pandemic (1894- 1952). Computing Research Repository, arXiv:2002.01415:23.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neither a Beginning Nor an End: Applying an Ethics of Care to Digital Archival Collections",
"authors": [
{
"first": "Michelle",
"middle": [],
"last": "Caswell",
"suffix": ""
},
{
"first": "Marika",
"middle": [],
"last": "Cifor",
"suffix": ""
}
],
"year": 2019,
"venue": "The Routledge International Handbook of New Digital Practices in Galleries, Libraries, Archives, Museums and Heritage Sites",
"volume": "",
"issue": "",
"pages": "159--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michelle Caswell and Marika Cifor. 2019. Neither a Beginning Nor an End: Applying an Ethics of Care to Digital Archival Collections. In The Routledge International Handbook of New Digital Practices in Galleries, Libraries, Archives, Museums and Heritage Sites, pages 159-168. Routledge, November.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Trouble with Bias",
"authors": [
{
"first": "Kate",
"middle": [],
"last": "Crawford",
"suffix": ""
}
],
"year": 2017,
"venue": "Neural Information Processing Systems Conference Keynote. [Online; accessed 10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kate Crawford. 2017. The Trouble with Bias. In Neural Information Processing Systems Conference Keynote. [Online; accessed 10-July-2020].",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color",
"authors": [
{
"first": "Kimberl\u00e9",
"middle": [],
"last": "Crenshaw",
"suffix": ""
}
],
"year": 1991,
"venue": "Stanford Law Review",
"volume": "43",
"issue": "6",
"pages": "1241--1299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimberl\u00e9 Crenshaw. 1991. Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color. Stanford Law Review, 43(6):1241-1299.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Addressing Age-Related Bias in Sentiment Analysis",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Lazar",
"suffix": ""
},
{
"first": "Anne",
"middle": [
"Marie"
],
"last": "Piper",
"suffix": ""
},
{
"first": "Darren",
"middle": [],
"last": "Gergle",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems -CHI '18",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Diaz, Isaac Johnson, Amanda Lazar, Anne Marie Piper, and Darren Gergle. 2018. Addressing Age-Related Bias in Sentiment Analysis. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems -CHI '18, pages 1-14, Montr\u00e9al, CA. ACM Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Data Feminism. Strong ideas series",
"authors": [
{
"first": "D'",
"middle": [],
"last": "Catherine",
"suffix": ""
},
{
"first": "Lauren",
"middle": [
"F"
],
"last": "Ignazio",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine D'Ignazio and Lauren F. Klein. 2020. Data Feminism. Strong ideas series. The MIT Press, Cambridge, US.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bias in Computer Systems",
"authors": [
{
"first": "Batya",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Nissenbaum",
"suffix": ""
}
],
"year": 1996,
"venue": "ACM Transactions on Information Systems",
"volume": "14",
"issue": "3",
"pages": "330--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Batya Friedman and Helen Nissenbaum. 1996. Bias in Computer Systems. ACM Transactions on Information Systems, 14(3):330-347, June.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Word embeddings quantify 100 years of gender and ethnic stereotypes",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Londa",
"middle": [],
"last": "Schiebinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "115",
"issue": "16",
"pages": "3635--3644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635-E3644, April.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Women's Syntactic Resilience and Men's Grammatical Luck: Gender-Bias in Part-of-Speech Tagging and Dependency Parsing",
"authors": [
{
"first": "Aparna",
"middle": [],
"last": "Garimella",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3493--3498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aparna Garimella, Carmen Banea, Dirk Hovy, and Rada Mihalcea. 2019. Women's Syntactic Resilience and Men's Grammatical Luck: Gender-Bias in Part-of-Speech Tagging and Dependency Parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3493-3498, Florence, IT. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Are all \"research fields\" equal? Rethinking practice for the use of data from crowdsourcing market addresss",
"authors": [
{
"first": "Ilka",
"middle": [
"H"
],
"last": "Gleibs",
"suffix": ""
}
],
"year": 2017,
"venue": "Behavior Research Methods",
"volume": "49",
"issue": "4",
"pages": "1333--1342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilka H. Gleibs. 2017. Are all \"research fields\" equal? Rethinking practice for the use of data from crowdsourcing market addresss. Behavior Research Methods, 49(4):1333-1342, August.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.03862v2"
]
},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. NAACL 2019, arXiv:1903.03862v2, September.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Times They Are a-Changing . . . or Are They Not? A Comparison of Gender Stereotypes",
"authors": [
{
"first": "Elizabeth",
"middle": [
"L"
],
"last": "Haines",
"suffix": ""
},
{
"first": "Kay",
"middle": [],
"last": "Deaux",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Lofaro",
"suffix": ""
}
],
"year": 1983,
"venue": "Psychology of Women Quarterly",
"volume": "40",
"issue": "3",
"pages": "353--363",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth L. Haines, Kay Deaux, and Nicole Lofaro. 2016. The Times They Are a-Changing . . . or Are They Not? A Comparison of Gender Stereotypes, 1983-2014. Psychology of Women Quarterly, 40(3):353-363, September.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Writing about accessibility. Interactions",
"authors": [
{
"first": "Vicki",
"middle": [
"L"
],
"last": "Hanson",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Cavender",
"suffix": ""
},
{
"first": "Shari",
"middle": [],
"last": "Trewin",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "22",
"issue": "",
"pages": "62--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vicki L. Hanson, Anna Cavender, and Shari Trewin. 2015. Writing about accessibility. Interactions, 22(6):62-65, October.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective",
"authors": [
{
"first": "Donna",
"middle": [],
"last": "Haraway",
"suffix": ""
}
],
"year": 1988,
"venue": "Feminist Studies",
"volume": "14",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donna Haraway. 1988. Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. Feminist Studies, 14(3):575.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Strong objectivity\": A response to the new objectivity question",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "Harding",
"suffix": ""
}
],
"year": 1995,
"venue": "Synthese",
"volume": "104",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra Harding. 1995. \"Strong objectivity\": A response to the new objectivity question. Synthese, 104(3), September.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Of global reach yet of situated contexts: an examination of the implicit and explicit selection criteria that shape digital archives of historical newspapers",
"authors": [
{
"first": "Tessa",
"middle": [],
"last": "Hauswedell",
"suffix": ""
},
{
"first": "Julianne",
"middle": [],
"last": "Nyhan",
"suffix": ""
},
{
"first": "Melodee",
"middle": [
"H"
],
"last": "Beals",
"suffix": ""
},
{
"first": "Melissa",
"middle": [],
"last": "Terras",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Bell",
"suffix": ""
}
],
"year": 2020,
"venue": "Archival Science",
"volume": "20",
"issue": "2",
"pages": "139--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tessa Hauswedell, Julianne Nyhan, Melodee H. Beals, Melissa Terras, and Emily Bell. 2020. Of global reach yet of situated contexts: an examination of the implicit and explicit selection criteria that shape digital archives of historical newspapers. Archival Science, 20(2):139-165, June.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Proposed Taxonomy for Gender Bias in Text; A Filtering Methodology for the Gender Generalization Subtype",
"authors": [
{
"first": "Yasmeen",
"middle": [],
"last": "Hitti",
"suffix": ""
},
{
"first": "Eunbee",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Moreno",
"suffix": ""
},
{
"first": "Carolyne",
"middle": [],
"last": "Pelletier",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "8--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasmeen Hitti, Eunbee Jang, Ines Moreno, and Carolyne Pelletier. 2019. Proposed Taxonomy for Gender Bias in Text; A Filtering Methodology for the Gender Generalization Subtype. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 8-17, Florence, IT. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Gender-preserving Debiasing for Pre-trained Word Embeddings",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1641--1650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving Debiasing for Pre-trained Word Embed- dings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1641-1650, Florence, IT. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Algorithmic Fairness in Online Information Mediating Systems",
"authors": [
{
"first": "Ansgar",
"middle": [],
"last": "Koene",
"suffix": ""
},
{
"first": "Elvira",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Sofia",
"middle": [],
"last": "Ceppi",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rovatsos",
"suffix": ""
},
{
"first": "Helena",
"middle": [],
"last": "Webb",
"suffix": ""
},
{
"first": "Menisha",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Jirotka",
"suffix": ""
},
{
"first": "Giles",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM on Web Science Conference, WebSci '17",
"volume": "",
"issue": "",
"pages": "391--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ansgar Koene, Elvira Perez, Sofia Ceppi, Michael Rovatsos, Helena Webb, Menisha Patel, Marina Jirotka, and Giles Lane. 2017. Algorithmic Fairness in Online Information Mediating Systems. In Proceedings of the 2017 ACM on Web Science Conference, WebSci '17, page 391-392, New York, US. Association for Computing Machinery.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "An Introduction to the Data Biography. We All Count",
"authors": [
{
"first": "Heather",
"middle": [],
"last": "Krause",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "17",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heather Krause. 2019. An Introduction to the Data Biography. We All Count. [Online; accessed 17-October- 2020].",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Measuring Bias in Contextualized Word Representations",
"authors": [
{
"first": "Keita",
"middle": [],
"last": "Kurita",
"suffix": ""
},
{
"first": "Nidhi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Ayush",
"middle": [],
"last": "Pareek",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "166--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring Bias in Contex- tualized Word Representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166-172, Florence, IT. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "NLTK: The Natural Language Toolkit",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "63--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics -Volume 1, ETMTNLP '02, pages 63-70, US. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "11 Case studies",
"authors": [
{
"first": "Bella",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Hanington",
"suffix": ""
}
],
"year": 2012,
"venue": "Universal Methods of Design: 100 Ways to Research Complex Problems, Develop Innovative Ideas, and Design Effective Solutions",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bella Martin and Bruce Hanington. 2012. 11 Case studies. In Universal Methods of Design: 100 Ways to Research Complex Problems, Develop Innovative Ideas, and Design Effective Solutions, Beverly, US. Rockport Publishers.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Why are the Digital Humanities So White? or Thinking the Histories of Race and Computation",
"authors": [
{
"first": "Tara",
"middle": [],
"last": "Mcpherson",
"suffix": ""
}
],
"year": 2012,
"venue": "Debates in the Digital Humanities",
"volume": "",
"issue": "",
"pages": "139--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tara McPherson. 2012. Why are the Digital Humanities So White? or Thinking the Histories of Race and Computation. In Debates in the Digital Humanities, pages 139-160, Minneapolis, US. University of Minnesota Press.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A cat's cradle of feminist and other critical approaches to participatory research",
"authors": [
{
"first": "Niamh",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2018,
"venue": "Connected Communities Foundation Series",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niamh Moore. 2018. A cat's cradle of feminist and other critical approaches to participatory research. In Con- nected Communities Foundation Series, Bristol, UK, September. University of Bristol/AHRC Connected Com- munities Programme. [Online; accessed 24-July-2020].",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Algorithms of Oppression: How Search Engines Reinforce Racism",
"authors": [
{
"first": "Noble",
"middle": [],
"last": "Safiya Umoja",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, New York, US.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Classism",
"authors": [
{
"first": "Oxford",
"middle": [],
"last": "English Dictionary",
"suffix": ""
}
],
"year": 2013,
"venue": "OED Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oxford English Dictionary. 2013a. Classism. In OED Online. Oxford University Press, June. [Online; accessed 21-August-2020].",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Discourse",
"authors": [
{
"first": "Oxford",
"middle": [],
"last": "English Dictionary",
"suffix": ""
}
],
"year": 2013,
"venue": "OED Online",
"volume": "17",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oxford English Dictionary. 2013b. Discourse. In OED Online. Oxford University Press, December. [Online; accessed 17-October-2020].",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Racism",
"authors": [
{
"first": "Oxford",
"middle": [],
"last": "English Dictionary",
"suffix": ""
}
],
"year": 2013,
"venue": "OED Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oxford English Dictionary. 2013c. Racism. In OED Online. Oxford University Press, June. [Online; accessed 21-August-2020].",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Sexism",
"authors": [
{
"first": "Oxford",
"middle": [],
"last": "English Dictionary",
"suffix": ""
}
],
"year": 2013,
"venue": "OED Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oxford English Dictionary. 2013d. Sexism. In OED Online. Oxford University Press, June. [Online; accessed 21-August-2020].",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Bias in Word Embeddings",
"authors": [
{
"first": "Orestis",
"middle": [],
"last": "Papakyriakopoulos",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Hegelich",
"suffix": ""
},
{
"first": "Juan Carlos Medina",
"middle": [],
"last": "Serrano",
"suffix": ""
},
{
"first": "Fabienne",
"middle": [],
"last": "Marco",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT * '20",
"volume": "",
"issue": "",
"pages": "446--457",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orestis Papakyriakopoulos, Simon Hegelich, Juan Carlos Medina Serrano, and Fabienne Marco. 2020. Bias in Word Embeddings. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT * '20, pages 446-457, New York, NY. Association for Computing Machinery.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Invisible Women: Exposing Data Bias in a World Designed for Men",
"authors": [
{
"first": "Caroline Criado",
"middle": [],
"last": "Perez",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caroline Criado Perez. 2019. Invisible Women: Exposing Data Bias in a World Designed for Men. Vintage, London, GB.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Continuing the Journey: Articulating Dimensions of Feminist Participatory Action Research (FPAR)",
"authors": [
{
"first": "Colleen",
"middle": [],
"last": "Reid",
"suffix": ""
},
{
"first": "Wendy",
"middle": [],
"last": "Frisby",
"suffix": ""
}
],
"year": 2008,
"venue": "The SAGE Handbook of Action Research",
"volume": "6",
"issue": "",
"pages": "93--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colleen Reid and Wendy Frisby. 2008. 6 Continuing the Journey: Articulating Dimensions of Feminist Participa- tory Action Research (FPAR). In The SAGE Handbook of Action Research, pages 93-105. SAGE Publications Ltd, February.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Beyond the Margins: Intersectionality and the Digital Humanities. Digital Humanities Quarterly",
"authors": [
{
"first": "Roopika",
"middle": [],
"last": "Risam",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roopika Risam. 2015. Beyond the Margins: Intersectionality and the Digital Humanities. Digital Humanities Quarterly, 9(2):14.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Evaluating Gender Bias in Machine Translation",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1679--1684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating Gender Bias in Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, IT. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "2 Participatory Action Research as Practice",
"authors": [
{
"first": "Marja Liisa",
"middle": [],
"last": "Swantz",
"suffix": ""
}
],
"year": 2008,
"venue": "The SAGE Handbook of Action Research",
"volume": "",
"issue": "",
"pages": "31--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marja Liisa Swantz. 2008. 2 Participatory Action Research as Practice. In The SAGE Handbook of Action Research, pages 31-48. SAGE Publications Ltd.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Discrimination in online ad delivery",
"authors": [
{
"first": "Latanya",
"middle": [],
"last": "Sweeney",
"suffix": ""
}
],
"year": 2013,
"venue": "Communications of the ACM",
"volume": "56",
"issue": "5",
"pages": "44--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Latanya Sweeney. 2013. Discrimination in online ad delivery. Communications of the ACM, 56(5):44-54, May.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "What are the Biases in My Word Embedding?",
"authors": [
{
"first": "Nathaniel",
"middle": [],
"last": "Swinger",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "De-Arteaga",
"suffix": ""
},
{
"first": "Neil",
"middle": [
"Thomas"
],
"last": "Heffernan",
"suffix": ""
},
{
"first": "I",
"middle": [
"V"
],
"last": "Mark",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Leiserson",
"suffix": ""
},
{
"first": "Adam Tauman",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society",
"volume": "",
"issue": "",
"pages": "305--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark DM Leiserson, and Adam Tauman Kalai. 2019. What are the Biases in My Word Embedding? In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 305-311, Honolulu, US, January. Association for Computing Machinery.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Gender Stereotypes: Reproduction and Challenge",
"authors": [
{
"first": "Mary",
"middle": [],
"last": "Talbot",
"suffix": ""
}
],
"year": 2003,
"venue": "The Handbook of Language and Gender",
"volume": "",
"issue": "",
"pages": "468--486",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mary Talbot. 2003. Gender Stereotypes: Reproduction and Challenge. In The Handbook of Language and Gender, pages 468-486, Oxford, GB, January. Blackwell Publishing Ltd.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "The dark side of gendered language: The masculine-generic form as a cause for self-report bias",
"authors": [
{
"first": "Sigal",
"middle": [],
"last": "Vainapel",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Opher",
"suffix": ""
},
{
"first": "Yulie",
"middle": [],
"last": "Shamir",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Tenenbaum",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gilam",
"suffix": ""
}
],
"year": 2015,
"venue": "Psychological Assessment",
"volume": "27",
"issue": "4",
"pages": "1513--1519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sigal Vainapel, Opher Y. Shamir, Yulie Tenenbaum, and Gadi Gilam. 2015. The dark side of gendered language: The masculine-generic form as a cause for self-report bias. Psychological Assessment, 27(4):1513-1519.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Discourse as the Recontextualization of Social Practice: A Guide",
"authors": [
{
"first": "Theo",
"middle": [],
"last": "Van Leeuwen",
"suffix": ""
}
],
"year": 2009,
"venue": "Methods for Critical Discourse Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theo van Leeuwen. 2009. Discourse as the Recontextualization of Social Practice: A Guide. In Methods for Critical Discourse Analysis. SAGE Publications.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns",
"authors": [
{
"first": "Kellie",
"middle": [],
"last": "Webster",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2018,
"venue": "Computing Research Repository",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.05201"
]
},
"num": null,
"urls": [],
"raw_text": "Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns. Computing Research Repository, arXiv:1810.05201, October.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "The Rare Books Catalog and the Scholarly Database. Cataloging & Classification Quarterly",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Welsh",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "54",
"issue": "",
"pages": "317--337",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Welsh. 2016. The Rare Books Catalog and the Scholarly Database. Cataloging & Classification Quarterly, 54(5-6):317-337, aug.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Hurtful words: quantifying biases in clinical contextual word embeddings",
"authors": [
{
"first": "Haoran",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Amy",
"middle": [
"X"
],
"last": "Lu",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Abdalla",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
},
{
"first": "Marzyeh",
"middle": [],
"last": "Ghassemi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the ACM Conference on Health, Inference, and Learning",
"volume": "",
"issue": "",
"pages": "110--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoran Zhang, Amy X. Lu, Mohamed Abdalla, Matthew McDermott, and Marzyeh Ghassemi. 2020. Hurtful words: quantifying biases in clinical contextual word embeddings. In Proceedings of the ACM Conference on Health, Inference, and Learning, pages 110-120, Toronto, CA, April. Association for Computing Machinery.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, US. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": ", Caliskan et al. (2017), and Kurita et al. (2019) on gender bias; Swinger et al. (2019) on racial bias; Diaz et al. (2018) on age bias; Papakyriakopoulos (2020) on sexuality and nationality bias; and Gonen and Goldberg (2019) on the inadequacy of debiasing word embeddings. When applying part-of-speech tagging, dependency parsing, or machine translation, an NLP researcher could look to Garimella et al. (2019) and Stanovsky et al.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Online Catalog (version 1) B.1 Dataset Metadata descriptions from the Archive's online catalog B.2 Where was the Data Collected or Created?",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "of the Archive (current and former) who wrote the metadata descriptions that serve as this research's text source 3. The Archive and its associated university as institutions that provide access to the metadata descriptions 4. People represented in the metadata descriptions",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>By Metadata</td><td>Biographical/</td><td>Scope and</td><td>Processing</td><td>Total (sum of the</td></tr><tr><td>Field</td><td>Historical</td><td>Contents</td><td>Information</td><td>metadata fields)</td></tr><tr><td>Sentences</td><td>11,323</td><td>55,434</td><td>1,691</td><td>68,448</td></tr><tr><td>Words</td><td>801,893</td><td>208,190</td><td>11,016</td><td>966,763</td></tr><tr><td colspan=\"2\">By Collection Minimum</td><td>Maximum</td><td>Mean</td><td>Standard</td></tr><tr><td/><td/><td/><td/><td>Deviation</td></tr><tr><td>Words</td><td>7</td><td>156,747</td><td>1,036.2</td><td>7,784.5</td></tr><tr><td>Table 2:</td><td/><td/><td/><td/></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "Words and sentences in the extracted metadata descriptions from the Archive's 1,231 collections, calculated using Punkt tokenizers in the Natural Language Toolkit Python library"
}
}
}
}