|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:02:48.332702Z" |
|
}, |
|
"title": "Conversational Assistants and Gender Stereotypes: Public Perceptions and Desiderata for Voice Personas", |
|
"authors": [ |
|
{ |
|
"first": "Amanda", |
|
"middle": [ |
|
"Cercas" |
|
], |
|
"last": "Curry", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Heriot-Watt University Edinburgh", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Judy", |
|
"middle": [], |
|
"last": "Robertson", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Edinburgh Edinburgh", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "judy.robertson@ed.ac.uk" |
|
}, |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Rieser", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "v.t.rieser@hw.ac.uk" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Conversational voice assistants are rapidly developing from purely transactional systems to social companions with \"personality\". UNESCO recently stated that the female and submissive personality of current digital assistants gives rise for concern as it reinforces gender stereotypes. In this work, we present results from a participatory design workshop, wherein we invite people to submit their preferences for what their ideal persona might look like, both in drawings as well as in a multiple choice questionnaire. We find no clear consensus which suggests that one possible solution is to let people configure/personalise their assistants. We then outline a multidisciplinary project of how we plan to address the complex question of gender and stereotyping in digital assistants.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Conversational voice assistants are rapidly developing from purely transactional systems to social companions with \"personality\". UNESCO recently stated that the female and submissive personality of current digital assistants gives rise for concern as it reinforces gender stereotypes. In this work, we present results from a participatory design workshop, wherein we invite people to submit their preferences for what their ideal persona might look like, both in drawings as well as in a multiple choice questionnaire. We find no clear consensus which suggests that one possible solution is to let people configure/personalise their assistants. We then outline a multidisciplinary project of how we plan to address the complex question of gender and stereotyping in digital assistants.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Biased technology disadvantages certain groups of society, e.g. based on their race or gender. Recently, biased machine learning has received increased attention. For example, in the area of Natural Language Processing (NLP), it has been shown that word embeddings (Bolukbasi et al., 2016) , co-reference resolution (Zhao et al., 2018) and machine translation systems (Hovy et al., 2020) are likely to reflect and even amplify social biases in the data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 289, |
|
"text": "(Bolukbasi et al., 2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 335, |
|
"text": "(Zhao et al., 2018)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 368, |
|
"end": 387, |
|
"text": "(Hovy et al., 2020)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Bias Statement", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Here we address a different type of bias which is not learnt from data, but encoded during the design process. We illustrate this problem on the example of Conversational Voice Assistants (CVAs), such as Amazon's Alexa, Apple's Siri, Microsoft's Cortana, or Google's Assistant, which are predominantly modelled as young, submissive women. According to UNESCO (West et al., 2019) , this bears the risk of reinforcing gender stereotypes. In particular, these design choices can create representational harm by reinforcing negative stereotypes society holds about women. The report argues that this becomes even more prevalent in the face of abuse, where most assistants do not answer 'appropriately' (Curry and Rieser, 2018; Curry and Rieser, 2019) , which might impact human-human interactions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 378, |
|
"text": "(West et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 698, |
|
"end": 722, |
|
"text": "(Curry and Rieser, 2018;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 746, |
|
"text": "Curry and Rieser, 2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Bias Statement", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to tackle the problem of reproducing structural inequality and oppression of marginalised groups when designing new systems, the Design Justice Network 1 proposes to centre the voices of those who are directly impacted by the outcomes of the design process (Costanza-Chock, 2018) . Similarly, the European Commission's Ethics Guidelines for Trustworthy AI (AI HLEG, 2019) recommend stakeholder participation during the development of new technologies, as well as paying special attention to the system's societal impact.", |
|
"cite_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 288, |
|
"text": "(Costanza-Chock, 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Bias Statement", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work we aim to unpack and verify some of the statements in the UNESCO report in a multidisciplinary project including methodologies from Human Computer Interaction (HCI), Social Psychology, and Natural Language Generation. As a first step, we conduct a participatory design workshop to gather public views on this subject to further inform the direction of this research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Bias Statement", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The persona of a CVA can be viewed as a composite of elements of identity (such as demographics and background facts), language behaviour, and interaction style. Some of these aspects can be learned from data, including personality-based linguistic style generation (e.g. Oraby et al. (2018) ), or generating responses which are factually consistent with a persona profile (e.g. Zhang et al. (2018) ). However, demographics and background facts are usually deliberate design choices, and research such as (Nass and Brave, 2007) shows that the gender choices of CVAs are conforming to traditional gender roles and social expectations: that is, the majority of CVAs have female personas.", |
|
"cite_spans": [ |
|
{ |
|
"start": 272, |
|
"end": 291, |
|
"text": "Oraby et al. (2018)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 398, |
|
"text": "Zhang et al. (2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 505, |
|
"end": 527, |
|
"text": "(Nass and Brave, 2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In interviews with news outlets, companies and developers defend design choices by citing market research showing that female voices are perceived to be more cooperative and helpful, and male voices are considered more trustworthy (Schw\u00e4r and Moynihan, 2020; Stern, 2017) . As such, the role of a personal assistant is often assigned to women, whereas in applications where the CVA needs to be authoritative, companies tend to choose male voices.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 258, |
|
"text": "(Schw\u00e4r and Moynihan, 2020;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 259, |
|
"end": 271, |
|
"text": "Stern, 2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, in 2019 UNESCO published an in-depth analysis of the gendering of AI, especially focused on conversational assistants (West et al., 2019) . The report details how the personas of current conversational assistants reinforce and spread existing biases about women as being subservient, modelling acceptance and tolerating sexual harassment and verbal abuse, and 'make women the \"face\" of glitches and errors'. The report attributes this bias to lack of diversity in the tech sector and concludes that the personas of conversational assistants should not be female by default, and that digital assistants should rather be designed to combat gender-based biases as well as discouraging insults and abusive language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 146, |
|
"text": "(West et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "While groups like Feminist Internet 2 and Women Reclaiming AI 3 have addressed the feminisation of CVAs through projects such as F'xa 4 , we involve the general public in designing alternative personas for conversational assistants as a first step in exploring the needs of diverse users.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Methodology: We explore an alternative methodology to the usual way we get 'users' involved in NLP research, i.e. crowd-sourcing or scraping online data (de Vries et al., 2020) borrowed from HCI. Participatory design actively involves all stakeholders in the design process to ensure that the end-product meets their needs (Schuler and Namioka, 1993) . It aims to assign an active and informed role to everyone affected by the end result.", |
|
"cite_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 350, |
|
"text": "(Schuler and Namioka, 1993)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To implement this idea, we organised a public workshop with the help and endorsement of the Royal Society of Edinburgh 5 which allowed us to reach a wide population, including potential end-users and people affected by stereotyping. Inspired by a previous workshop on Voice Assistants and Feminism (Webb, 2019) , our workshop aimed to inform and stimulate critical reflection in order to seek an active discourse with the public. As such, we organised two events over two consecutive days: The first event was a short introduction where people where invited to learn more about the underlying technology. On the second day, we introduced the issue of gendered technology, and got two experts presenting their views, including one of the authors of the UNESCO report and a UX Voice designer from the BBC. The overall question the workshop explored was: What would your ideal conversational voice assistant be?", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 310, |
|
"text": "(Webb, 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Participants: We had a total of 128 participants registered to the online event, with 72 participants attending. According to a pre-questionnaire, the majority (72%) of participants identified as female, 28% male. Most of them have had experience with using one or more voice assistants although they do not use them regularly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Questionnaire: During and after the workshop, we asked participants to complete a form detailing the following characteristics of their ideal personal assistant: anthropomorphism, gender, age group, regional accent, as well as an option to describe other personality traits. In the following study we compare these submitted desiderata with characteristics of 14 existing chatbots:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 Six voice-based commercial systems: Amazon's Alexa, Google Assistant, Apple's Siri, Microsoft's Cortana, Samsung Bixby, and BBC's Beeb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 Popular text-based online chatbots: Mitsuku, Xiaoice, Replika.ai 6 , Alley (which performed well in a previous study by Curry and Rieser (2019) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 145, |
|
"text": "Curry and Rieser (2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 Other well-established chatbots: ELIZA, ALICE, Parry.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We annotate these chatbots according to the same characteristics as the participants' ideal system. Due to the interaction medium of each bot (spoken vs. typed), some characteristics are unavailable such as regional accents. In addition, some systems such as the BBC's Beeb have limited availability, in which case we base our annotations on 3rd party reports. We use the following methodology to elicit the information:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 Gender: Does the system's voice, avatar or name designate it to a particular gender? For example, when prompted with \"Are you a woman?\", Siri will respond with \"I don't have a gender\", however its voice and name are explicitly female.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 Age: We want to elicit the age of the systems' persona. In order to determine age, we prompted the systems directly. Some systems provide either literal (\"I was released November 6th 2014\") or evasive answers (\"Well, I'm no Spring Chicken. Or winter bee. Or autumnal aarvark...\"), in these cases we used indirect ways to determine the perceived age such as available avatars or annotated the age as not available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 Anthropomorphism: There are many aspects to anthropomorphism beyond the use of language, from embodiment to the systems' responses to questions about humanity such as \"Do you own a pet?\". In our classification, we limit anthropomorphism to \"Does the system have a human-like avatar?\" or annotated as not available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 Lect: whether the system presents a particular language variety (eg. a regional accent or register). In this case, Xiaoice is a notable exception as it is designed as a Chinese girl.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Drawings: Prior to the workshop we asked participants to submit two designs: (1) a drawing and a description of how they image Alexa's virtual character to be like, and (2) a drawing and description of how their ideal personal assistant's character would be like. This is similar to (Kuzminykh et al., 2020) , but instead of using an avatar building tool, which restricts participants to a pre-defined set of anthropomorphic choices, we let users submit photographs, e.g. of their own drawings or other objects.", |
|
"cite_spans": [ |
|
{ |
|
"start": 283, |
|
"end": 307, |
|
"text": "(Kuzminykh et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Questionnaire: 34 participants filled out the questionnaire on desired characteristics, see Fig.1 . Gender: The majority of participants prefer a robotic voice (32.4%), followed by female (20.6%) and male (11.8%), see Fig. 1a . About one quarter (26.5%) of participants commented that they wanted a gender neutral voice. 7 In contrast, our analysis of current chatbots shows that 71% have female voices by default.", |
|
"cite_spans": [ |
|
{ |
|
"start": 321, |
|
"end": 322, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 97, |
|
"text": "Fig.1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 225, |
|
"text": "Fig. 1a", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Age group: Most participants (44%) want their chatbot to be either in the age bracket between 25-40 years old, or they have no preference (38%), see Fig. 1b . Some participants (15%) want the persona to be older than 40. Hardly anybody wanted their chatbot to reflect an age group of 24 or younger, which is in stark contrast to existing voice assistants, which are predominentely perceived to be in their 20s.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 156, |
|
"text": "Fig. 1b", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Anthropomorphism: Most participants would like their assistants to be identifiable as human, followed by animal and robot, see Fig. 1c . Our annotations revealed that 36% of current systems resemble humans. Although none of the commercial systems have a visual avatar, they are all anthropomorphic in other ways such as having a pet, having experienced feelings, or experiencing mental illness (in the case of Parry). Although the ethics of anthropomorphic AI have been widely discussed, e.g. (Araujo, 2018) , it is beyond the scope of this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 493, |
|
"end": 507, |
|
"text": "(Araujo, 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 134, |
|
"text": "Fig. 1c", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Lect: More than half of participants did not want their chatbot to have a regional or other accent/ language variety, see Fig. 1d . Amongst the currently available systems, only the BBC chatbot Beep has a regional accent (Northern England).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 129, |
|
"text": "Fig. 1d", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Other personality traits: In addition, we asked participants to submit any other personal characteristics they would want in their ideal assistant. Overall, responses varied greatly: friendliness, helpfulness and humour were the most common traits and they are echoed in the design of existing assistants (Roettgers, 2019) , but other users called for more transparency, and less anthropomorphism. This directly contradicts some popular approaches to persona design which are centred around the idea of having a digital 'person', e.g. asking \"How would we want a person to respond?\" (Fowler, 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 322, |
|
"text": "(Roettgers, 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 583, |
|
"end": 597, |
|
"text": "(Fowler, 2011)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Drawings: Here, we select and discuss two submissions for each category: artistic impressions and descriptions of (1) current and (2) future personas. The examples were selected as they represent very different concepts, see Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For impressions of current systems, we got a drawing of a black box, described as 'humourless entity' used for surveillance; and a glamorous woman wearing make-up and an evening dress, described as 'people-pleasing and inoffensive'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For future systems, we got a man holding up a \"no bullshit\" sign, described as \"Strong, rational, organised\"; and a intriguing piece of ornamental jewellery in form of an octopus, described as \"funny, with the ability to invoke laughter but also to empathise and advise. An entity that I could trust completely\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this paper, we investigate alternative designs of the persona of conversational voice agents, which are currently predominantly set to reflect young women. UNESCO argues that this representation, together with their depiction as subservient assistants, bears the risk of reinforcing gender stereotypes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We involve the public in designing alternatives, both by submitting their preferences and by asking them to sketch their visual conceptions. One critical difference in our methodology, is that we gather the data as apart of a participatory design workshop in order to stimulate critical discussion and active discourse on this matter. The outcomes show a wide range of preferences and possible future designs: Participants (n=34) either prefer robotic, gender-neutral or robotic voices, mostly without a regional accent. Most people thought that the persona's behaviour and identity should resemble a human in an age bracket between 25-40. Descriptions of personality traits ranged from friendly, helpful and humorous to calls for less anthropomorphism.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The lack of clear consensus suggests that personalised or configurable digital personas are required to fulfil individual preferences. Most commercially available voice assistants allow for a limited number of choices: For example, Google Assistant lets you choose different voice 'colours', which also include male voices for English. Amazon Alexa's voice can be changed to various English accents, ranging from Southern US to UK English. Amazon has also recently added celebrity voices to purchase, such as actor Samuel L. Jackson.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In future work, we will be looking to extend these functionalities to not only reflect personality by different synthesised voices/text-to-speech, but also personality expressed in language behaviour, conversational content and interaction style, building upon previous work on personality-based linguistic style generation (e.g. Oraby et al. (2018) ), or generating responses which are factually consistent with a persona profile (e.g. Zhang et al. (2018) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 349, |
|
"text": "Oraby et al. (2018)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 456, |
|
"text": "Zhang et al. (2018)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In addition, we will be closely working with social psychologists to anticipate the social impacts these artificial personas might have. In particular, we will be studying how digital gendering and personalities of digital assistants influences human online and offline behaviour. And eventually, we aim to build a data-driven mapping between conversational behaviour (e.g. voice, linguistic style and content) and perceived personality traits, such as gender, age, trustworthiness, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Finally, we hope to repeat similar studies as the one presented in this paper with other subpopulations. For example, we hope to explore perceptions school children might hold about voice assistants, following Festerling and Siraj (2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 237, |
|
"text": "Festerling and Siraj (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://feministinternet.com/ 3 https://womenreclaimingai.com/ 4 http://about.f-xa.co/ 5 https://www.rse.org.uk/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Replika.ai allows users to customise their avatar, in this study we consider the default option.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that a gender neutral voice might be difficult to design as there is evidence that even neutral sounding voices are perceived to have a gender due to other social cues(Sutton, 2020).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research received funding from the EPSRC project 'Designing Conversational Assistants to Reduce Gender Bias' (EP/T023767/1), as well as a NESTA 'AI for Good' award.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A Current and Future Impressions of Conversational Personas (a) \"People pleasing, inoffensive\" (b) \"A humourless entity, unable to demonstrate or experience any of the human spectrum of emotions. A disembodied machine with the ability to mimic and use human speech to disseminate data in a form readily understandable to the simplest human. Able to capture data to identify currently unmet and future unmet needs.\" Figure 2 : Submissions for the persona of the current conversational assistant.(a) \"Strong, rational, organised, and not about to put up with any nonsense!\" (b) \"Funny, with the ability to invoke laughter but also to empathise and advise. An entity that I could trust completely.\" Figure 3 : Submissions for the \"ideal\" conversational assistant.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 415, |
|
"end": 423, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 696, |
|
"end": 704, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "High-Level Expert Group on Artificial Intelligence AI HLEG. 2019. Ethics Guidelines for Trustworthy AI. European Commission", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "High-Level Expert Group on Artificial Intelligence AI HLEG. 2019. Ethics Guidelines for Trustworthy AI. Euro- pean Commission.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions", |
|
"authors": [ |
|
{ |
|
"first": "Theo", |
|
"middle": [], |
|
"last": "Araujo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Computers in Human Behavior", |
|
"volume": "85", |
|
"issue": "", |
|
"pages": "183--189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Theo Araujo. 2018. Living up to the chatbot hype: The influence of anthropomorphic design cues and commu- nicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85:183-189.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Tolga", |
|
"middle": [], |
|
"last": "Bolukbasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "James", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Venkatesh", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Saligrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kalai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4349--4357", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Advances in neural information processing systems, pages 4349-4357.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Design Justice: Towards an Intersectional Feminist Framework for Design Theory and Practice", |
|
"authors": [ |
|
{ |
|
"first": "Sasha", |
|
"middle": [], |
|
"last": "Costanza-Chock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Design Research Society", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sasha Costanza-Chock. 2018. Design Justice: Towards an Intersectional Feminist Framework for Design Theory and Practice. Proceedings of the Design Research Society.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "#MeToo Alexa: How Conversational Systems Respond to Sexual Harassment", |
|
"authors": [ |
|
{ |
|
"first": "Amanda", |
|
"middle": [ |
|
"Cercas" |
|
], |
|
"last": "Curry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Rieser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amanda Cercas Curry and Verena Rieser. 2018. #MeToo Alexa: How Conversational Systems Respond to Sexual Harassment. In Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing, pages 7-14.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A Crowd-based Evaluation of Abuse Response Strategies in Conversational Agents", |
|
"authors": [ |
|
{ |
|
"first": "Amanda", |
|
"middle": [ |
|
"Cercas" |
|
], |
|
"last": "Curry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Rieser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "361--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amanda Cercas Curry and Verena Rieser. 2019. A Crowd-based Evaluation of Abuse Response Strategies in Conversational Agents. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 361-366, Stockholm, Sweden, September. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Towards Ecologically Valid Research on Language User Interfaces", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Harm De Vries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.14435" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harm de Vries, Dzmitry Bahdanau, and Christopher Manning. 2020. Towards Ecologically Valid Research on Language User Interfaces. arXiv preprint arXiv:2007.14435.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "2020. Alexa, What Are you? Exploring Primary School Children's Ontological Perceptions of Digital Voice Assistants in Open Interactions", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Festerling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Siraj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Human Development", |
|
"volume": "64", |
|
"issue": "1", |
|
"pages": "26--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Festerling and I. Siraj. 2020. Alexa, What Are you? Exploring Primary School Children's Ontological Percep- tions of Digital Voice Assistants in Open Interactions. Human Development, 64(1):26-43.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Are Smartphones Becoming Smart Alecks?", |
|
"authors": [ |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Fowler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geoffrey A. Fowler. 2011. Are Smartphones Becoming Smart Alecks?, Oct.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Can You Translate that into Man? Commercial Machine Translation Systems Include Stylistic Biases", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Federico", |
|
"middle": [], |
|
"last": "Bianchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommaso", |
|
"middle": [], |
|
"last": "Fornaciari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Hovy, Federico Bianchi, and Tommaso Fornaciari. 2020. Can You Translate that into Man? Commercial Machine Translation Systems Include Stylistic Biases. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Genie in the Bottle: Anthropomorphized Perceptions of Conversational Agents", |
|
"authors": [ |
|
{ |
|
"first": "Anastasia", |
|
"middle": [], |
|
"last": "Kuzminykh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nivetha", |
|
"middle": [], |
|
"last": "Govindaraju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Avery", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Lank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--13", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anastasia Kuzminykh, Jenny Sun, Nivetha Govindaraju, Jeff Avery, and Edward Lank. 2020. Genie in the Bottle: Anthropomorphized Perceptions of Conversational Agents. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1-13.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship", |
|
"authors": [ |
|
{ |
|
"first": "Clifford", |
|
"middle": [], |
|
"last": "Nass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Brave", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clifford Nass and Scott Brave. 2007. Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship. The MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Controlling personality-based stylistic variation with neural natural language generators", |
|
"authors": [ |
|
{ |
|
"first": "Shereen", |
|
"middle": [], |
|
"last": "Oraby", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lena", |
|
"middle": [], |
|
"last": "Reed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shubhangi", |
|
"middle": [], |
|
"last": "Tandon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Sharath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Lukin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marilyn", |
|
"middle": [], |
|
"last": "Walker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 19th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shereen Oraby, Lena Reed, Shubhangi Tandon, Sharath T.S., Stephanie Lukin, and Marilyn Walker. 2018. Con- trolling personality-based stylistic variation with neural natural language generators. In Proceedings of the 19th", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "How Alexa Got her Personality. Variety", |
|
"authors": [ |
|
{ |
|
"first": "Janko", |
|
"middle": [], |
|
"last": "Roettgers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janko Roettgers. 2019. How Alexa Got her Personality. Variety, Jun.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Participatory Design: Principles and Practices", |
|
"authors": [ |
|
{ |
|
"first": "Douglas", |
|
"middle": [], |
|
"last": "Schuler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aki", |
|
"middle": [], |
|
"last": "Namioka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Douglas Schuler and Aki Namioka. 1993. Participatory Design: Principles and Practices. CRC Press.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Companies like Amazon may give devices like Alexa female voices to make them seem\u0107aring", |
|
"authors": [ |
|
{ |
|
"first": "Hannah", |
|
"middle": [], |
|
"last": "Schw\u00e4r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qayyah", |
|
"middle": [], |
|
"last": "Moynihan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Business Insider", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hannah Schw\u00e4r and Qayyah Moynihan. 2020. Companies like Amazon may give devices like Alexa female voices to make them seem\u0107aring. Business Insider, Apr.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Alexa, Siri, Cortana: The problem with all-female digital assistants", |
|
"authors": [ |
|
{ |
|
"first": "Joanna", |
|
"middle": [], |
|
"last": "Stern", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "The Wall Street Journal", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joanna Stern. 2017. Alexa, Siri, Cortana: The problem with all-female digital assistants. The Wall Street Journal, Feb.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Gender Ambiguous, not Genderless: Designing Gender in Voice User Interfaces (VUIs) with Sensitivity", |
|
"authors": [ |
|
{ |
|
"first": "Selina", |
|
"middle": [ |
|
"Jeanne" |
|
], |
|
"last": "Sutton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2nd Conference on Conversational User Interfaces", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Selina Jeanne Sutton. 2020. Gender Ambiguous, not Genderless: Designing Gender in Voice User Interfaces (VUIs) with Sensitivity. In Proceedings of the 2nd Conference on Conversational User Interfaces, pages 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Designing a feminist Alexa. An Experiment in Feminist Conversation Design", |
|
"authors": [ |
|
{ |
|
"first": "Charlotte", |
|
"middle": [], |
|
"last": "Webb", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "UAL:Creative Institute & Feminist Internet", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charlotte Webb. 2019. Designing a feminist Alexa. An Experiment in Feminist Conversation Design. Technical report, UAL:Creative Institute & Feminist Internet.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "I'd blush if I could: Closing gender divides in digital skills through education", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "West", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Kraut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Han", |
|
"middle": [ |
|
"Ei" |
|
], |
|
"last": "Chew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark West, Rebecca Kraut, and Han Ei Chew. 2019. I'd blush if I could: Closing gender divides in digital skills through education. Technical Report GEN/2019/EQUALS/1 REV, UNESCO.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Personalizing dialogue agents: I have a dog, do you have pets too?", |
|
"authors": [ |
|
{ |
|
"first": "Saizheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Dinan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jack", |
|
"middle": [], |
|
"last": "Urbanek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Szlam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2204--2213", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213, Melbourne, Australia, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods", |
|
"authors": [ |
|
{ |
|
"first": "Jieyu", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianlu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Yatskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vicente", |
|
"middle": [], |
|
"last": "Ordonez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "15--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Participants' (n=34) preferences of demographic aspects of persona", |
|
"type_str": "figure", |
|
"uris": null |
|
} |
|
} |
|
} |
|
} |