{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:05:54.159616Z" }, "title": "FanfictionNLP: A Text Processing Pipeline for Fanfiction", "authors": [ { "first": "Michael", "middle": [], "last": "Miller Yoder", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Sopan", "middle": [], "last": "Khosla", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "region": "PA", "country": "USA" } }, "email": "sopank@cs.cmu.edu" }, { "first": "Qinlan", "middle": [], "last": "Shen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "region": "PA", "country": "USA" } }, "email": "qinlans@cs.cmu.edu" }, { "first": "Aakanksha", "middle": [], "last": "Naik", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "region": "PA", "country": "USA" } }, "email": "anaik@cs.cmu.edu" }, { "first": "Huiming", "middle": [], "last": "Jin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "region": "PA", "country": "USA" } }, "email": "huimingj@cs.cmu.edu" }, { "first": "Hariharan", "middle": [], "last": "Muralidharan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Carolyn", "middle": [ "P" ], "last": "Ros\u00e9", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "region": "PA", "country": "USA" } }, "email": "cprose@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Fanfiction presents an opportunity as a data source for research in NLP, education, and social science. However, answering specific research questions with this data is difficult, since fanfiction contains more diverse writing styles than formal fiction. We present a text processing pipeline for fanfiction, with a focus on identifying text associated with characters. The pipeline includes modules for character identification and coreference, as well as the attribution of quotes and narration to those characters. Additionally, the pipeline contains a novel approach to character coreference that uses knowledge from quote attribution to resolve pronouns within quotes. For each module, we evaluate the effectiveness of various approaches on 10 annotated fanfiction stories. This pipeline outperforms tools developed for formal fiction on the tasks of character coreference and quote attribution.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Fanfiction presents an opportunity as a data source for research in NLP, education, and social science. However, answering specific research questions with this data is difficult, since fanfiction contains more diverse writing styles than formal fiction. We present a text processing pipeline for fanfiction, with a focus on identifying text associated with characters. 
The pipeline includes modules for character identification and coreference, as well as the attribution of quotes and narration to those characters. Additionally, the pipeline contains a novel approach to character coreference that uses knowledge from quote attribution to resolve pronouns within quotes. For each module, we evaluate the effectiveness of various approaches on 10 annotated fanfiction stories. This pipeline outperforms tools developed for formal fiction on the tasks of character coreference and quote attribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A growing number of natural language processing tools and approaches have been developed for fiction (Agarwal et al., 2013; Bamman et al., 2014; Iyyer et al., 2016; Sims et al., 2019) . These tools generally focus on published literary works, such as collections of novels. We present an NLP pipeline for processing fanfiction, amateur writing from fans of TV shows, movies, books, games, and comics.", "cite_spans": [ { "start": 101, "end": 123, "text": "(Agarwal et al., 2013;", "ref_id": "BIBREF0" }, { "start": 124, "end": 144, "text": "Bamman et al., 2014;", "ref_id": "BIBREF3" }, { "start": 145, "end": 164, "text": "Iyyer et al., 2016;", "ref_id": "BIBREF17" }, { "start": 165, "end": 183, "text": "Sims et al., 2019)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Fanfiction writers creatively change and expand on plots, settings, and characters from original media, an example of \"participatory culture\" (Jenkins, 1992; Tosenberger, 2008) . The community of fanfiction readers and writers, now largely online, has been studied for its mentorship and support for writers and for the broad representation of LGBTQ+ characters and relationships in fan-written stories (Lothian et al., 2007; Dym et al., 2019) . Fanfiction presents an opportunity as a data source for research in a variety of fields, from those studying learning in online communities to social science analysis of how community norms develop in an LGBTQ-friendly environment. For NLP researchers, fanfiction provides a large source of literary text with metadata, and has already been used in applications such as authorship attribution (Kestemont et al., 2018) and character relationship classification (Kim and Klinger, 2019) .", "cite_spans": [ { "start": 142, "end": 157, "text": "(Jenkins, 1992;", "ref_id": "BIBREF18" }, { "start": 158, "end": 176, "text": "Tosenberger, 2008)", "ref_id": "BIBREF40" }, { "start": 403, "end": 425, "text": "(Lothian et al., 2007;", "ref_id": "BIBREF26" }, { "start": 426, "end": 443, "text": "Dym et al., 2019)", "ref_id": "BIBREF8" }, { "start": 869, "end": 893, "text": "(Kestemont et al., 2018)", "ref_id": "BIBREF22" }, { "start": 936, "end": 959, "text": "(Kim and Klinger, 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There is a vast amount of fanfiction in online archives. As of March 2021, over 7 million stories were hosted on just one fanfiction website, Archive of Our Own, and there exist other online archives of similar or even larger sizes.
We present a pipeline that enables structured insight into this vast amount of text by identifying sets of characters in fanfiction stories and attributing narration and quotes to these characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Knowing who the characters are and what they do and say is essential for understanding story structure (Bruce, 1981; Wall, 1984) . Such processing is also useful for researchers in the humanities and social sciences investigating identification with characters and the representation of characters of diverse genders, sexualities, and ethnicities (Green et al., 2004; Kasunic and Kaufman, 2018; Felski, 2020) . The presented pipeline, which extracts text related to characters in fanfiction, can assist researchers building NLP tools for literary domains, as well as those analyzing characterization in fields such as digital humanities. For example, the pipeline could be used to explore how characters are voiced and described differently when cast in queer versus straight relationships.", "cite_spans": [ { "start": 103, "end": 116, "text": "(Bruce, 1981;", "ref_id": "BIBREF4" }, { "start": 117, "end": 128, "text": "Wall, 1984)", "ref_id": "BIBREF42" }, { "start": 347, "end": 367, "text": "(Green et al., 2004;", "ref_id": "BIBREF14" }, { "start": 368, "end": 394, "text": "Kasunic and Kaufman, 2018;", "ref_id": "BIBREF21" }, { "start": 395, "end": 408, "text": "Felski, 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The presented pipeline contains three main modules: character coreference resolution, quote attribution, and extraction of \"assertions\", narration that relates to particular characters. We incorporate new and existing methods into the pipeline that perform well on an annotated set of 10 fanfiction stories. This includes a novel method using quote attribution information to resolve first- and second-person pronouns within quotes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: Fanfiction NLP pipeline overview. From the text of a fanfiction story, the pipeline assigns character mentions to character clusters (character coreference). It then attributes assertions and quotes to each character, optionally using the quote attribution output to improve coreference resolution within quotes (see Section 3.3).", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Fanfiction is written by amateur writers of all ages and education levels worldwide, so it contains much more variety in style and genre than formal fiction. It is not immediately clear that techniques for coreference resolution or quote attribution that perform well on news data or formal fiction will be effective in the informal domain of fanfiction. We demonstrate that this pipeline outperforms existing tools designed for formal fiction on the tasks of character coreference resolution and quote attribution (Bamman et al., 2014) .", "cite_spans": [ { "start": 515, "end": 536, "text": "(Bamman et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contributions. We contribute a fanfiction processing pipeline that outperforms prior work designed for formal fiction.
The pipeline includes a novel interleaving of coreference and quote attribution to improve the resolution of first- and second-person pronouns within quotes in narrative text. We also introduce an evaluation dataset of 10 fanfiction stories with annotations for character coreference, as well as for quote detection and attribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Data from fanfiction has been used in NLP research for a variety of tasks, including authorship attribution (Kestemont et al., 2018) , action prediction (Vilares and G\u00f3mez-Rodr\u00edguez, 2019) , fine-grained entity typing (Chu et al., 2020) , and tracing the sources of derivative texts (Shen et al., 2018) . Computational work focusing on characterization in fanfiction includes the work of Milli and Bamman (2016) , who found that fanfiction writers are more likely to emphasize female and secondary characters. Using data from WattPad, a platform that includes fanfiction along with original fiction, Fast et al. (2016) find that portrayals of gendered characters generally align with mainstream stereotypes.", "cite_spans": [ { "start": 108, "end": 132, "text": "(Kestemont et al., 2018)", "ref_id": "BIBREF22" }, { "start": 153, "end": 188, "text": "(Vilares and G\u00f3mez-Rodr\u00edguez, 2019)", "ref_id": "BIBREF41" }, { "start": 217, "end": 235, "text": "(Chu et al., 2020)", "ref_id": "BIBREF5" }, { "start": 282, "end": 301, "text": "(Shen et al., 2018)", "ref_id": "BIBREF37" }, { "start": 387, "end": 410, "text": "Milli and Bamman (2016)", "ref_id": "BIBREF29" }, { "start": 599, "end": 617, "text": "Fast et al. (2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Fanfiction and NLP", "sec_num": "2" }, { "text": "We are not aware of any text processing system for fanfiction specifically, though BookNLP (Bamman et al., 2014) is commonly used as an NLP system for formal fiction. We evaluate our pipeline's approaches to character coreference resolution and quote attribution against BookNLP, as well as against other task-specific approaches, on an evaluation dataset of fanfiction.", "cite_spans": [ { "start": 91, "end": 112, "text": "(Bamman et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Fanfiction and NLP", "sec_num": "2" }, { "text": "We introduce a publicly available pipeline for processing fanfiction. 1 This pipeline is a command-line tool developed in Python. From the text of a fanfiction story, the pipeline extracts a list of characters, each mention of a character, and what each character does and says ( Figure 1 ). More specifically, the pipeline first performs character coreference resolution, extracting character mentions and attributing them to character clusters with a single standardized character name (Section 3.1). After coreference, the pipeline outputs quotes uttered by each character using a sieve-based approach from Muzny et al. (2017) (Section 3.2). These quote attribution results are optionally used to aid the resolution of first- and second-person pronouns within quotes to improve coreference output (Section 3.3). In parallel with quote attribution, the pipeline extracts \"assertions\", topically coherent segments of text that mention a character (Section 3.4).", "cite_spans": [], "ref_spans": [ { "start": 286, "end": 294, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Fanfiction Processing Pipeline", "sec_num": "3" },
{ "text": "The story text is first passed through the coreference resolution module, which extracts mentions of characters and attributes them to character clusters. These mentions include alternative forms of names, pronouns, and anaphoric references such as \"the bartender\". Each cluster is then given a single standardized character name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Module", "sec_num": "3.1" }, { "text": "Coreference Resolution. We use SpanBERT-base (Joshi et al., 2020) , a neural method with state-of-the-art performance on formal text, for coreference resolution. This model uses SpanBERT-base embeddings to create mention representations and employs Lee et al. (2017) 's approach to identify coreferent pairs. SpanBERT-base is originally trained on OntoNotes (Pradhan et al., 2012). However, we further fine-tune SpanBERT-base on LitBank, a dataset with coreference annotations for works of literature in English, a domain more similar to fanfiction. The model takes the raw story text as input, identifies spans of text that mention characters, and outputs clusters of mentions that refer to the same character.", "cite_spans": [ { "start": 44, "end": 64, "text": "(Joshi et al., 2020)", "ref_id": "BIBREF19" }, { "start": 247, "end": 264, "text": "Lee et al. (2017)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Module", "sec_num": "3.1" }, { "text": "Character Standardization. We then assign representative character names for each coreference cluster. These names are simply the most frequent capitalized name variant, excluding pronouns and address terms, such as sir. If there are no capitalized terms in the cluster or if there are only pronouns and address terms, the most frequent mention is chosen as the name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Module", "sec_num": "3.1" }, { "text": "Post-processing. SpanBERT-base resolves all entity mentions. In order to focus solely on characters, we post-process the cluster outputs. We remove plural pronouns and noun phrases (we, they, us, our, etc.), demonstrative pronouns (that, this), as well as mentions of it. We also remove clusters whose standardized representative names are not named entities and have head words that are not descendants of person in WordNet (Miller, 1995) . Thus clusters with standardized names such as \"the father\" are kept (since they are descendants of person in WordNet), yet clusters with names such as \"his workshop\" are removed.", "cite_spans": [ { "start": 422, "end": 436, "text": "(Miller, 1995)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Module", "sec_num": "3.1" },
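To make the standardization and post-processing heuristics above concrete, here is a minimal illustrative sketch (not the pipeline's released code; the word lists are abbreviated and the WordNet person filter is omitted):

```python
from collections import Counter

# Abbreviated, illustrative lists; the pipeline's actual lists are larger.
PRONOUNS = {"he", "she", "him", "her", "his", "hers", "i", "me", "my", "you", "your"}
ADDRESS_TERMS = {"sir", "madam", "miss", "ma'am", "mister"}

def standardize_name(mentions):
    """Choose a representative name for a cluster of mention strings."""
    counts = Counter(m.strip() for m in mentions)
    # Prefer the most frequent capitalized variant that is neither a pronoun
    # nor an address term.
    for name, _ in counts.most_common():
        if name[:1].isupper() and name.lower() not in PRONOUNS | ADDRESS_TERMS:
            return name
    # Otherwise fall back to the most frequent mention of any kind.
    return counts.most_common(1)[0][0]

print(standardize_name(["Tony Stark", "he", "Tony", "Tony", "his"]))  # Tony
```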
{ "text": "For each character cluster, a standardized name and a list of the mentions remaining after post-processing are produced, along with pointers to the position of each mention in the text. This coreference information is then used as input to the quote attribution and assertion extraction modules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Module", "sec_num": "3.1" }, { "text": "To extract quotes, we simply extract any spans between quotation marks, a common approach in literary texts (O'Keefe et al., 2012) . For the wide variety of fanfiction, we recognize a broader set of quotation marks than are recognized in BookNLP's approach for formal fiction.", "cite_spans": [ { "start": 108, "end": 130, "text": "(O'Keefe et al., 2012)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Quote Attribution Module", "sec_num": "3.2" }, { "text": "The pipeline attributes quotes to characters with the deterministic approach of Muzny et al. (2017), which uses sieves such as looking for character mentions that are the head words of known speech verbs. We use a standalone re-implementation of this approach by Sims and Bamman (2020) that allows using the pipeline's character coreference as input. Muzny et al. (2017) 's approach assigns quotes to character mentions and then to character clusters. We simply assign quotes to the names of these selected character clusters.", "cite_spans": [ { "start": 261, "end": 283, "text": "Sims and Bamman (2020)", "ref_id": "BIBREF38" }, { "start": 349, "end": 368, "text": "Muzny et al. (2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Quote Attribution Module", "sec_num": "3.2" }, { "text": "Recent advances in coreference resolution, such as the SpanBERT-base system incorporated in the pipeline, leverage contextualized word embeddings to compute mention representations and to cluster these mentions from pairwise or higher-order comparisons. They also concatenate features, such as the distance between the compared mentions, to these representations. However, these approaches do not capture the change in point of view caused by quotes within narratives, so they suffer when resolving first- and second-person pronouns within quotes. To alleviate this issue, we introduce an optional step in the pipeline that uses the output from quote attribution to inform the resolution of first- and second-person pronouns within quotes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quote Pronoun Resolution Module", "sec_num": "3.3" }, { "text": "Prior work (Almeida et al., 2014) proposed a joint model for entity-level quotation attribution and coreference resolution, exploiting correlations between the two tasks. However, in this work, we propose an interleaved setup that is modular and allows the user of the pipeline to use independent off-the-shelf pre-trained models of their choice for both coreference resolution and quote attribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quote Pronoun Resolution Module", "sec_num": "3.3" }, { "text": "More specifically, once the quote attribution module predicts the position of each quote (q_i) and its associated speaker (s_i), the first-person pronouns within the quote (e.g. I, my, mine, me) are resolved to the speaker of that quote, s_i. For second-person pronouns (e.g. you, your, yours), we assume that they point to the addressee of the quote (a_i), which is resolved to be the speaker of the nearest previous quote with a different speaker (a_i = s_{i\u2212j} such that s_{i\u2212j} \u2260 s_i). We only consider the previous 5 quotes to find a_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quote Pronoun Resolution Module", "sec_num": "3.3" }, { "text": "Since there are no sieves for quote attribution that consider pronouns within quotes, the improved coreference within quotes from this optional step does not affect quote attribution. Thus, this \"cycle\" of character coreference, then quote attribution, then improved character coreference, need only be run once. However, the improved coreference resolution could impact which assertions are associated with characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quote Pronoun Resolution Module", "sec_num": "3.3" },
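A minimal sketch of this speaker/addressee heuristic, assuming quote attribution output as a list of speakers in story order (the data structures and function names are illustrative, not the pipeline's API):

```python
FIRST_PERSON = {"i", "me", "my", "mine"}
SECOND_PERSON = {"you", "your", "yours"}
MAX_LOOKBACK = 5  # only the previous 5 quotes are searched for the addressee

def addressee(speakers, i):
    """Speaker of the nearest previous quote with a different speaker (a_i)."""
    for j in range(1, MAX_LOOKBACK + 1):
        if i - j >= 0 and speakers[i - j] != speakers[i]:
            return speakers[i - j]
    return None

def resolve_in_quote_pronouns(speakers, tokens_per_quote):
    resolutions = []
    for i, tokens in enumerate(tokens_per_quote):
        for tok in tokens:
            if tok.lower() in FIRST_PERSON:
                resolutions.append((i, tok, speakers[i]))             # I/my -> speaker s_i
            elif tok.lower() in SECOND_PERSON:
                resolutions.append((i, tok, addressee(speakers, i)))  # you -> addressee a_i
    return resolutions

speakers = ["Caitlin", "Cisco", "Caitlin"]
tokens = [["I"], ["you"], ["your"]]
print(resolve_in_quote_pronouns(speakers, tokens))
# [(0, 'I', 'Caitlin'), (1, 'you', 'Caitlin'), (2, 'your', 'Cisco')]
```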
{ "text": "After coreference, the pipeline also extracts what we describe as \"assertions\", topically coherent segments of text that mention a character. The motivation for this is to identify longer spans of exposition and narrative that relate to characters for building embedding representations for these characters. Parsing these assertions would also facilitate the extraction of descriptive features such as verbs for which characters are subjects and adjectives used to describe characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assertion Extraction Module", "sec_num": "3.4" }, { "text": "To identify such spans of text that relate to characters, we first segment the text with a topic segmentation approach called TextTiling (Hearst, 1997) . We then assign segments (with quotes removed) to characters if they contain at least one mention of the character within the span. If multiple characters are mentioned, the span is included in extracted assertions for each of the characters.", "cite_spans": [ { "start": 138, "end": 152, "text": "(Hearst, 1997)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Assertion Extraction Module", "sec_num": "3.4" },
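A sketch of this segment-and-assign step using NLTK's TextTiling implementation; the substring-based mention matching here is a simplification for illustration (the pipeline works from coreference mention positions, and quotes are removed before assignment):

```python
from nltk.tokenize.texttiling import TextTilingTokenizer

# NLTK's TextTiling expects paragraph breaks ("\n\n") in the input and
# requires the NLTK stopwords corpus to be installed.
def extract_assertions(story_text, mentions_by_character):
    segments = TextTilingTokenizer().tokenize(story_text)
    assertions = {name: [] for name in mentions_by_character}
    for segment in segments:
        for name, mentions in mentions_by_character.items():
            # A segment mentioning several characters is included in the
            # assertions of each of them.
            if any(m in segment for m in mentions):
                assertions[name].append(segment)
    return assertions
```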
{ "text": "To evaluate our pipeline, we annotate a dataset of 10 publicly available fanfiction stories for all mentions of characters and quotes attributed to these characters, which is similar in size to the test set used in LitBank. We select these stories from Archive of Our Own 2 , a large fanfiction archive that is maintained and operated by a fan-centered non-profit organization, the Organization for Transformative Works (Fiesler et al., 2016) . To capture a representative range of fanfiction, we choose one story from each of the 10 most popular fandoms on Archive of Our Own when we collected data in 2018 (Table 1) . Fandoms are fan communities organized around a particular original media source. For each fandom, we randomly sampled a story in English that has fewer than 5000 words and does not contain explicit sexual or violent content. Two of the authors annotated the 10 stories for each of the tasks of character coreference and quote attribution. All annotators were graduate students working in NLP. Statistics on this evaluation dataset and the annotations can be found in Table 2 .", "cite_spans": [ { "start": 421, "end": 443, "text": "(Fiesler et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 609, "end": 618, "text": "(Table 1)", "ref_id": "TABREF2" }, { "start": 1088, "end": 1095, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Fanfiction Evaluation Dataset", "sec_num": "4" }, { "text": "These stories illustrate the expanded set of challenges and variety in fanfiction. In one story, all of the characters meet clones of themselves as male if they are female, or female if they are male. This is a variation on the practice of \"genderswapping\" characters in fanfiction (McClellan, 2014) . Coreference systems can struggle to keep up with characters with the same name but different genders. Another story in our test set belongs to a genre of fanfiction called \"songfic\", which intersperses song lyrics into the narrative. These song lyrics often contain pronouns such as I and you that do not refer to any character.", "cite_spans": [ { "start": 282, "end": 299, "text": "(McClellan, 2014)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Fanfiction Evaluation Dataset", "sec_num": "4" }, { "text": "For quote attribution, challenges in the test set include a variety of quotation marks, sometimes used inconsistently. There is also great variation in the number of indirect quotes without clear quotatives such as \"she said\". This can be a source of ambiguity in published fiction as well, but we find a large variety of styles in fanfiction. One fanfiction story in our evaluation dataset, for example, contains many implicit quotes in conversations among three or more characters, which can be difficult for quote attribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fanfiction Evaluation Dataset", "sec_num": "4" }, { "text": "Annotation details and inter-annotator agreement for this evaluation dataset are described below. An overview of inter-annotator agreement is provided in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 161, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Fanfiction Evaluation Dataset", "sec_num": "4" }, { "text": "To annotate character mentions in our evaluation dataset, annotators (two of the authors) were instructed to identify and group all mentions of singular characters, including pronouns, generic phrases that refer to characters such as \"the boy\", and address terms. Possessive pronouns were also annotated, with nested mentions for phrases such as his sister. Determiners and prepositional phrases attached to nouns were annotated, since they can specify characters and contribute to characterization. For example: an old friend of my parents. Note that \"parents\" is not annotated in this example since it does not refer to a singular character. Appositives were annotated, while relative clauses (\"the woman who sat on the left\") and phrases after copulas (\"he was a terrible lawyer\") were not annotated, as we found them to act more as descriptions of characters than mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Annotation", "sec_num": "4.1" },
{ "text": "After extracting character mentions, annotators grouped these mentions into character clusters that refer to the same character in the story. Note that since we focus on characters, we do not annotate other non-person entities usually included in coreference annotations. Full annotation guidelines are available online 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Annotation", "sec_num": "4.1" }, { "text": "To create a unified set of gold annotations, we resolved disagreements between annotators in a second round of annotation. The final test set of 10 annotated stories contains 2,808 annotated character mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Annotation", "sec_num": "4.1" }, { "text": "In Table 3 , we first provide inter-annotator agreement on extracting the same spans of text as character mentions by comparing BIO labeling at the token level. Tokens that begin a mention are labeled B, tokens that are inside or end a mention are labeled I, and all other tokens are labeled O. Which mentions are identified affects the agreement of attributing those mentions to characters. For this reason, we provide two attribution agreement scores. First, we calculate agreement on mentions annotated by either annotator, with a NULL character annotation if any annotator did not annotate a mention (Attribution (all) in Table 3 ). We also calculate agreement only for character mentions annotated by both annotators (Attribution (agreed) in Table 3 ). Character attribution was labeled as matching if there was significant overlap between primary character names chosen for each cluster by annotators; there were no disagreements on this.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 3", "ref_id": null }, { "start": 148, "end": 155, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Character Coreference Annotation", "sec_num": "4.1" }, { "text": "Table 3: Inter-annotator agreement (Cohen's \u03ba) between two annotators for each task, averaged across 10 fics. Extraction (BIO) is agreement on extracting the same spans of text (not attributing them to characters) with token-level BIO annotation. Attribution (all) refers to attribution of spans to characters where missed spans receive a NULL character attribution. Attribution (agreed) refers to attribution of spans that both annotators marked.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Annotation", "sec_num": "4.1" }, { "text": "For all these categories, inter-annotator agreement was 0.84 Cohen's \u03ba or above, \"near perfect\", for character coreference (Table 3) .", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 132, "text": "(Table 3)", "ref_id": null } ], "eq_spans": [], "section": "Character Coreference Annotation", "sec_num": "4.1" },
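As an illustration of the extraction-agreement computation above, the two annotators' token-level BIO label sequences can be compared directly with Cohen's kappa, e.g. with scikit-learn (toy labels shown; real scores are computed over all tokens of a story):

```python
from sklearn.metrics import cohen_kappa_score

# Each annotator's mention spans flattened to per-token B/I/O labels.
annotator_a = ["B", "I", "O", "O", "B", "O"]
annotator_b = ["B", "I", "O", "B", "I", "O"]
print(round(cohen_kappa_score(annotator_a, annotator_b), 3))
```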
{ "text": "Two of the authors annotated all quotes that were said aloud or written by a singular character, and attributed them to a list of characters determined from the character coreference annotations. Annotation was designed to focus on characters' voices as displayed in the stories. Thus characters' thoughts were not annotated as quotes, nor were imagined or hypothetical utterances. We also chose not to annotate indirectly reported quotes, such as \"the friend said I was very strange\", since this could be influenced more by the character or narrator reporting the quote than the original character who spoke it. However, we did annotate direct quotes that are reported by other characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quote Attribution Annotation", "sec_num": "4.2" }, { "text": "Inter-annotator agreement on quote attribution was 0.89 Cohen's \u03ba on the set of all quotes annotated by any annotator (see Table 3 ). Attribution agreement on the set of quote spans identified by both annotators was very high, 0.98 \u03ba. Token-level BIO agreement for marking spans as quotes was 0.97 \u03ba. The final test set of 10 stories contains 876 annotated quotes.", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 130, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Quote Attribution Annotation", "sec_num": "4.2" }, { "text": "We evaluate the pipeline against BookNLP, as well as other state-of-the-art approaches for coreference resolution and quote attribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline Evaluation", "sec_num": "5" }, { "text": "We evaluate the performance of the character coreference module on our 10 annotated fanfiction stories using the CoNLL metric (Pradhan et al., 2012; the average of MUC, B\u00b3, and CEAFE) and the LEA metric (Moosavi and Strube, 2016). We compare our approach against different state-of-the-art approaches used for coreference resolution in the past. Along with BookNLP's approach, we consider the Stanford CoreNLP deterministic coreference model (CoreNLP (dcoref); Raghunathan et al., 2010; Recasens et al., 2013; Lee et al., 2011) and the CoreNLP statistical model (CoreNLP (coref); Clark and Manning, 2015) as traditional baselines. As a neural baseline, we evaluate the more recently proposed BERT-base model (Joshi et al., 2019) , which replaces the original GloVe embeddings (Pennington et al., 2014) with BERT (Devlin et al., 2019) in Lee et al. (2017) 's coreference resolution approach.", "cite_spans": [ { "start": 458, "end": 483, "text": "Raghunathan et al., 2010;", "ref_id": "BIBREF35" }, { "start": 484, "end": 506, "text": "Recasens et al., 2013;", "ref_id": "BIBREF36" }, { "start": 507, "end": 524, "text": "Lee et al., 2011)", "ref_id": "BIBREF24" }, { "start": 705, "end": 725, "text": "(Joshi et al., 2019)", "ref_id": "BIBREF20" }, { "start": 773, "end": 798, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF33" }, { "start": 809, "end": 830, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" }, { "start": 834, "end": 851, "text": "Lee et al. (2017)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Evaluation", "sec_num": "5.1" },
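For reference, a sketch of the LEA computation as described by Moosavi and Strube (2016); this is an illustrative reimplementation, and singleton entities (which have no links) are ignored here for brevity, though the original metric handles them separately:

```python
from itertools import combinations

def links(entity):
    """All coreference links (unordered mention pairs) within an entity."""
    return {frozenset(p) for p in combinations(entity, 2)}

def lea_recall(key_entities, response_entities):
    # Each key entity is weighted by its size; its resolution score is the
    # fraction of its links reproduced somewhere in the response.
    num, den = 0.0, 0
    response_links = [links(r) for r in response_entities]
    for k in key_entities:
        if len(k) < 2:
            continue  # singleton handling omitted in this sketch
        k_links = links(k)
        covered = sum(len(k_links & r) for r in response_links)
        num += len(k) * covered / len(k_links)
        den += len(k)
    return num / den if den else 0.0

# Precision is symmetric: lea_recall(response_entities, key_entities).
key = [{"m1", "m2", "m3"}, {"m4", "m5"}]
response = [{"m1", "m2"}, {"m3"}, {"m4", "m5"}]
print(lea_recall(key, response))  # (3 * 1/3 + 2 * 1/1) / 5 = 0.6
```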
{ "text": "Micro-averaged results across the 10 annotated stories are shown in Table 4 . The FanfictionNLP approach is SpanBERT-base fine-tuned on LitBank, with the post-hoc removal of non-person and plural mentions and clusters (as described in Section 3.1). Note that these results are without the quote pronoun resolution module described in Section 3.3. Traditional approaches like BookNLP and CoreNLP (dcoref, coref) perform significantly worse than the neural models, especially on recall. Neural models that are further fine-tuned on LitBank (OL) outperform the ones that are only trained on OntoNotes (O). This suggests that further training the model on literary text data does indeed improve its performance on fanfiction narrative. Furthermore, the SpanBERT-base approaches outperform their BERT-base counterparts with an absolute improvement of 4-5 CoNLL F1 percentage points and 6 LEA F1 percentage points. Post-hoc removal of non-person and plural entities improves CoNLL precision on characters by more than 12 percentage points over SpanBERT-base OL.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 75, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Character Coreference Evaluation", "sec_num": "5.1" }, { "text": "Table 4: O: Model is trained on OntoNotes. L: Model is also fine-tuned on LitBank corpus. FanfictionNLP is the SpanBERT-base OL model with post-hoc removal of non-person entities. Note that none of the approaches had access to our fanfiction data. These results are without the quote pronoun resolution module described in Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Coreference Evaluation", "sec_num": "5.1" }, { "text": "Using our expanded set of quotation marks, we reach 96% recall and 95% precision of extracted quote spans, micro-averaged over the 10 test stories, compared with 25% recall and 55% precision for BookNLP. For attributing these extracted quotes to characters, we report average F1, precision, and recall under different coreference inputs (Table 5 ). To determine correct quote attributions, the canonical name for the character cluster attributed by systems to each quote is compared with the gold attribution name for that quote. A match is assigned if a) an assigned name has only one word, which matches any word in the gold cluster name (such as Tony and Tony Stark), or b) more than half of the words in the name match between the two character names, excluding titles such as Ms. and Dr. Name-matching is manually checked to ensure no system is penalized for selecting the wrong name within a correct character cluster. Any quote that a system fails to extract is considered a mis-attribution (an attribution to a NULL character).", "cite_spans": [], "ref_spans": [ { "start": 337, "end": 345, "text": "(Table 5", "ref_id": null } ], "eq_spans": [], "section": "Quote Attribution Evaluation", "sec_num": "5.2" }, { "text": "As baselines, we consider BookNLP and the approach of He et al. (2013) , who train a RankSVM model supervised on annotations from the novel Pride and Prejudice.", "cite_spans": [ { "start": 54, "end": 70, "text": "He et al. (2013)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Quote Attribution Evaluation", "sec_num": "5.2" }, { "text": "Table 5: Quote attribution evaluation scores. Scores are reported using the respective system's coreference (system coreference), with gold character coreference supplied (gold coreference) and with gold character and gold quote spans supplied (gold quote extraction). Attribution is calculated by a character name match to the gold cluster name. If a quote span is not extracted by a system, it is counted as a mis-attribution. Micro-averages across the 10-story test set are reported. We include Muzny et al. (2017) 's approach in the FanfictionNLP pipeline.", "cite_spans": [ { "start": 498, "end": 517, "text": "Muzny et al. (2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Quote Attribution Evaluation", "sec_num": "5.2" },
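The name-matching rule described above can be sketched as follows (illustrative only; the title list is abbreviated and the tokenization is simplified):

```python
TITLES = {"mr.", "mrs.", "ms.", "dr."}

def names_match(system_name, gold_name):
    sys_words = [w for w in system_name.lower().split() if w not in TITLES]
    gold_words = [w for w in gold_name.lower().split() if w not in TITLES]
    # a) a one-word name matches if it matches any word of the gold name
    if len(sys_words) == 1:
        return sys_words[0] in gold_words
    # b) otherwise more than half of the (non-title) words must match
    shared = len(set(sys_words) & set(gold_words))
    return shared > len(sys_words) / 2

print(names_match("Tony", "Tony Stark"))            # True
print(names_match("Dr. Tony Stark", "Tony Stark"))  # True
```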
{ "text": "The quality of character coreference affects quote attribution. If an entire character is not identified, there is no chance for the system to attribute a quote to that character. If a system attributes a quote to the nearest character mention and that mention is not attributed to the correct character cluster, the quote attribution will likely be incorrect. For this reason, we evaluate quote attribution with different coreference settings. System coreference in Table 5 refers to quote attribution performance when using the respective system's coreference. That is, BookNLP's coreference was evaluated with BookNLP's quote attribution and FanfictionNLP's coreference with FanfictionNLP's quote attribution. We test He et al. (2013) 's approach with the same coreference input as FanfictionNLP. Evaluations are also reported with gold character coreference, as well as with gold character coreference and with gold quote extractions, to measure attribution without the effects of differences in quote extraction accuracy. The deterministic approach of Muzny et al. (2017) , incorporated in the pipeline, outperforms both BookNLP and He et al. (2013) 's RankSVM classifier in this informal narrative domain.", "cite_spans": [ { "start": 721, "end": 737, "text": "He et al. (2013)", "ref_id": "BIBREF15" }, { "start": 1057, "end": 1076, "text": "Muzny et al. (2017)", "ref_id": "BIBREF31" }, { "start": 1138, "end": 1154, "text": "He et al. (2013)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 467, "end": 474, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Quote Attribution Evaluation", "sec_num": "5.2" }, { "text": "We test our approach for resolving pronouns within quotes (Section 3.3) by measuring character coreference on the fanfiction evaluation set. We show results using gold quote attribution as an upper bound of the prospective improvement, and using quote attributions predicted by Muzny et al. (2017) 's approach adopted in the fanfiction pipeline. Coreference resolution using gold quote attribution annotation information (Gold QuA) substantially improves overall performance across both CoNLL and LEA F1 scores (by 1.6 and 3.5 percentage points respectively). Similarly, coreference resolution using information from a state-of-the-art quote attribution system (Muzny et al., 2017) also results in statistically significant, although smaller, improvements across both metrics (by 0.3 percentage points and 0.8 percentage points respectively) on the 10 fanfiction stories. These results suggest that our approach is able to leverage the quote attribution outputs (speaker information) to resolve the first- and second-person pronouns within quotations. It does so by assuming that the text within a quote is from the point of view of the speaker of the quote, as attributed by the quote attribution system. Table 7 shows the qualitative results on three consecutive quotes from one of the stories in our fanfiction dataset. For the first two quotations, FanfictionNLP incorrectly resolves your/you to the character Caitlin. However, FanfictionNLP + I + You correctly maps the mentions to Cisco. In the third example, we find that FanfictionNLP + I + You (Muzny QuA) does not perform correct resolution as the speaker output by the quote attribution module is incorrect. This shows the dependence of this algorithm on quality quote attribution predictions.", "cite_spans": [ { "start": 268, "end": 287, "text": "Muzny et al. (2017)", "ref_id": "BIBREF31" }, { "start": 641, "end": 661, "text": "(Muzny et al., 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 1185, "end": 1192, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Quote Pronoun Resolution Module Evaluation", "sec_num": "5.3" }, { "text": "Table 7: Coreference resolution of first- and second-person pronouns in three consecutive quotes from one of the fanfiction stories in our dataset. Results show the impact of the quote attribution predictions on the performance of the algorithm described in Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quote Pronoun Resolution Module Evaluation", "sec_num": "5.3" }, { "text": "There is no counterpart to the pipeline's assertion extraction in BookNLP or other systems. Qualitatively, the spans identified by TextTiling include text that relates to characterization beyond simply selecting sentences that mention characters, and with more precision than selecting whole paragraphs that mention characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assertion Extraction Qualitative Evaluation", "sec_num": "5.4" }, { "text": "For example, our approach captured sentences that described how characters were interpreting their environment. In one fanfiction story in our test set, a character \"could see stars and planets, constellations and black holes. Everything was distant, yet reachable.\" Such sentences do not contain character mentions, but certainly contribute to character development and contain useful associations made with characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assertion Extraction Qualitative Evaluation", "sec_num": "5.4" }, { "text": "These assertions also capture narration that mentions interactions between characters, but which may not mention any one character individually. In another fanfiction story in which two wizards are dueling, extracted assertions for each character include, \"Their wands out, pointed at each other, each shaking with rage.\" These associations are important to characterization, but fall outside sentences that contain individual character mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assertion Extraction Qualitative Evaluation", "sec_num": "5.4" }, { "text": "Though most online fanfiction is publicly available, researchers must consider how users themselves view the reach of their content (Fiesler and Proferes, 2018) . Anonymity and privacy are core values of fanfiction communities; this is especially important since many participants identify as LGBTQ+ (Fiesler et al., 2016; Dym et al., 2019) . We informed Archive of Our Own, with our contact information, when scraping fanfiction and modified fanfiction examples given in this paper for privacy.
We urge researchers who may use the fanfiction pipeline we present to consider how their work engages with fanfiction readers and writers, and to honor the creativity and privacy of the community and individuals behind this \"data\".", "cite_spans": [ { "start": 132, "end": 160, "text": "(Fiesler and Proferes, 2018)", "ref_id": "BIBREF13" }, { "start": 300, "end": 322, "text": "(Fiesler et al., 2016;", "ref_id": "BIBREF12" }, { "start": 323, "end": 340, "text": "Dym et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Ethics", "sec_num": "6" }, { "text": "We present a text processing pipeline for the domain of fanfiction, stories that are written by fans and inspired by original media. Large archives of fanfiction are available online and present opportunities for researchers interested in community writing practices, narrative structure, fan culture, and online communities. The presented text processing pipeline allows researchers to extract and cluster mentions of characters from fanfiction stories, along with what each character does (assertions) and says (quotes).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We assemble state-of-the-art NLP approaches for each module of this processing pipeline and evaluate them on an annotated test set, outperforming a pipeline developed for formal fiction on character coreference and quote attribution. We also present improvements in character coreference with a post-processing step that uses information from quote attribution to resolve first- and second-person pronouns within quotes. Our hope is that this pipeline will be a step toward enabling structured analysis of the text of fanfiction stories, which contain more variety than published, formal fiction. The pipeline could also be applied to other formal or informal narratives outside of fanfiction, though we have not evaluated it in other domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The pipeline is available at https://github.com/michaelmilleryoder/fanfiction-nlp.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://archiveofourown.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/michaelmilleryoder/fanfiction-nlp/annotation_guidelines.md", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by NSF grant DRL 1949110. We acknowledge Shefali Garg, Ethan Xuanyue Yang, and Luke Breitfeller for work on an earlier version of this pipeline, and Matthew Sims and David Bamman for their quote attribution re-implementation.
We also thank the fanfiction writers on Archive of Our Own whose creative work allowed the creation and evaluation of this pipeline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automatic Extraction of Social Networks from Literary Text: A Case Study on Alice in Wonderland", "authors": [ { "first": "Apoorv", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Anup", "middle": [], "last": "Kotalwar", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2013, "venue": "International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1202--1208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Apoorv Agarwal, Anup Kotalwar, and Owen Rambow. 2013. Automatic Extraction of Social Networks from Literary Text: A Case Study on Alice in Wonderland. In International Joint Conference on Natural Language Processing, October, pages 1202-1208.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A joint model for quotation attribution and coreference resolution", "authors": [ { "first": "Mariana", "middle": [ "S", "C" ], "last": "Almeida", "suffix": "" }, { "first": "Miguel", "middle": [ "B" ], "last": "Almeida", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F", "T" ], "last": "Martins", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mariana SC Almeida, Miguel B Almeida, and Andr\u00e9 FT Martins. 2014. A joint model for quotation attribution and coreference resolution. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 39-48.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An Annotated Dataset of Coreference in English Literature", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Lewke", "suffix": "" }, { "first": "Anya", "middle": [], "last": "Mansoor", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "44--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Bamman, Olivia Lewke, and Anya Mansoor. 2020. An Annotated Dataset of Coreference in English Literature. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 44-54.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Bayesian Mixed Effects Model of Literary Character", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Underwood", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014)", "volume": "", "issue": "", "pages": "370--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Bamman, Ted Underwood, and Noah A. Smith. 2014. A Bayesian Mixed Effects Model of Literary Character.
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), pages 370-379.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A social interaction model of reading", "authors": [ { "first": "Bertram", "middle": [], "last": "Bruce", "suffix": "" } ], "year": 1981, "venue": "Discourse Processes", "volume": "4", "issue": "4", "pages": "273--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bertram Bruce. 1981. A social interaction model of reading. Discourse Processes, 4(4):273-311.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "EntyFi: Entity typing in fictional texts", "authors": [ { "first": "Cuong", "middle": [ "Xuan" ], "last": "Chu", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Razniewski", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2020, "venue": "WSDM 2020 -Proceedings of the 13th International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "124--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cuong Xuan Chu, Simon Razniewski, and Gerhard Weikum. 2020. EntyFi: Entity typing in fictional texts. WSDM 2020 -Proceedings of the 13th International Conference on Web Search and Data Mining, pages 124-132.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Entity-centric coreference resolution with model stacking", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1405--1415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark and Christopher D Manning. 2015. Entity-centric coreference resolution with model stacking. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1405-1415.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 4171-4186.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "\"Coming Out Okay\": Community Narratives for LGBTQ Identity Recovery Work", "authors": [ { "first": "Brianna", "middle": [], "last": "Dym", "suffix": "" }, { "first": "Jed", "middle": [ "R" ], "last": "Brubaker", "suffix": "" }, { "first": "Casey", "middle": [], "last": "Fiesler", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Semaan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the ACM on Human-Computer Interaction", "volume": "3", "issue": "", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brianna Dym, Jed R. Brubaker, Casey Fiesler, and Bryan Semaan. 2019. \"Coming Out Okay\": Community Narratives for LGBTQ Identity Recovery Work. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW):1-28.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "More Than Peer Production: Fanfiction Communities as Sites of Distributed Mentoring", "authors": [ { "first": "Sarah", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Katie", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Abigail", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Julie", "middle": [ "Ann" ], "last": "Campbell", "suffix": "" }, { "first": "David", "middle": [ "P" ], "last": "Randall", "suffix": "" }, { "first": "Kodlee", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Cecilia", "middle": [], "last": "Aragon", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing", "volume": "", "issue": "", "pages": "259--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Evans, Katie Davis, Abigail Evans, Julie Ann Campbell, David P Randall, Kodlee Yin, and Cecilia Aragon. 2017. More Than Peer Production: Fanfiction Communities as Sites of Distributed Mentoring. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, pages 259-272.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Shirtless and dangerous: Quantifying linguistic signals of gender bias in an online fiction writing community", "authors": [ { "first": "Ethan", "middle": [], "last": "Fast", "suffix": "" }, { "first": "Tina", "middle": [], "last": "Vachovsky", "suffix": "" }, { "first": "Michael S", "middle": [], "last": "Bernstein", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Conference on Web and Social Media (ICWSM)", "volume": "", "issue": "", "pages": "112--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ethan Fast, Tina Vachovsky, and Michael S Bernstein. 2016. Shirtless and dangerous: Quantifying linguistic signals of gender bias in an online fiction writing community. In Proceedings of the 10th International Conference on Web and Social Media (ICWSM), pages 112-120.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Hooked: Art and Attachment", "authors": [ { "first": "Rita", "middle": [], "last": "Felski", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rita Felski. 2020. Hooked: Art and Attachment.
University of Chicago Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An Archive of Their Own", "authors": [ { "first": "Casey", "middle": [], "last": "Fiesler", "suffix": "" }, { "first": "Shannon", "middle": [], "last": "Morrison", "suffix": "" }, { "first": "Amy", "middle": [ "S" ], "last": "Bruckman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems -CHI '16", "volume": "", "issue": "", "pages": "2574--2585", "other_ids": {}, "num": null, "urls": [], "raw_text": "Casey Fiesler, Shannon Morrison, and Amy S. Bruckman. 2016. An Archive of Their Own. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems -CHI '16, pages 2574-2585.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "\"Participant\" Perceptions of Twitter Research Ethics", "authors": [ { "first": "Casey", "middle": [], "last": "Fiesler", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Proferes", "suffix": "" } ], "year": 2018, "venue": "Social Media and Society", "volume": "4", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.1177/2056305118763366" ] }, "num": null, "urls": [], "raw_text": "Casey Fiesler and Nicholas Proferes. 2018. \"Participant\" Perceptions of Twitter Research Ethics. Social Media and Society, 4(1).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Understanding media enjoyment: The role of transportation into narrative worlds", "authors": [ { "first": "Melanie", "middle": [ "C" ], "last": "Green", "suffix": "" }, { "first": "Timothy", "middle": [ "C" ], "last": "Brock", "suffix": "" }, { "first": "Geoff", "middle": [ "F" ], "last": "Kaufman", "suffix": "" } ], "year": 2004, "venue": "Communication Theory", "volume": "14", "issue": "4", "pages": "311--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melanie C. Green, Timothy C. Brock, and Geoff F. Kaufman. 2004. Understanding media enjoyment: The role of transportation into narrative worlds. Communication Theory, 14(4):311-327.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Identification of speakers in novels", "authors": [ { "first": "Hua", "middle": [], "last": "He", "suffix": "" }, { "first": "Denilson", "middle": [], "last": "Barbosa", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1312--1320", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hua He, Denilson Barbosa, and Grzegorz Kondrak. 2013. Identification of speakers in novels. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1312-1320.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "TextTiling: Segmenting text into multi-paragraph subtopic passages", "authors": [ { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "1", "pages": "33--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti A. Hearst. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages.
Computational Linguistics, 23(1):33-64.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Feuding families and former friends: Unsupervised learning for dynamic fictional relationships", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Anupam", "middle": [], "last": "Guha", "suffix": "" }, { "first": "Snigdha", "middle": [], "last": "Chaturvedi", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "III" } ], "year": 2016, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", "volume": "", "issue": "", "pages": "1534--1544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationships. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1534-1544.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Textual Poachers: Television Fans and Participatory Culture", "authors": [ { "first": "Henry", "middle": [], "last": "Jenkins", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henry Jenkins. 1992. Textual Poachers: Television Fans and Participatory Culture. Routledge.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "SpanBERT: Improving pre-training by representing and predicting spans", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "64--77", "other_ids": { "DOI": [ "10.1162/tacl_a_00300" ] }, "num": null, "urls": [], "raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "BERT for coreference resolution: Baselines and analysis", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weld", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5803--5808", "other_ids": { "DOI": [ "10.18653/v1/D19-1588" ] }, "num": null, "urls": [], "raw_text": "Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019.
BERT for coreference resolution: Baselines and analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5803-5808, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning to Listen: Critically Considering the Role of AI in Human Storytelling and Character Creation", "authors": [ { "first": "Anna", "middle": [], "last": "Kasunic", "suffix": "" }, { "first": "Geoff", "middle": [], "last": "Kaufman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Storytelling", "volume": "", "issue": "", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Kasunic and Geoff Kaufman. 2018. Learning to Listen: Critically Considering the Role of AI in Human Storytelling and Character Creation. In Proceedings of the First Workshop on Storytelling, pages 1-13.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Overview of the author identification task at PAN-2018: Cross-domain authorship attribution and style change detection", "authors": [ { "first": "Mike", "middle": [], "last": "Kestemont", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Tschuggnall", "suffix": "" }, { "first": "Efstathios", "middle": [], "last": "Stamatatos", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "G\u00fcnther", "middle": [], "last": "Specht", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" } ], "year": 2018, "venue": "CEUR Workshop Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Kestemont, Michael Tschuggnall, Efstathios Stamatatos, Walter Daelemans, G\u00fcnther Specht, Benno Stein, and Martin Potthast. 2018. Overview of the author identification task at PAN-2018: Cross-domain authorship attribution and style change detection. CEUR Workshop Proceedings, 2125.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Frowning Frodo, Wincing Leia, and a Seriously Great Friendship: Learning to Classify Emotional Relationships of Fictional Characters", "authors": [ { "first": "Evgeny", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", "volume": "", "issue": "", "pages": "647--653", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evgeny Kim and Roman Klinger. 2019. Frowning Frodo, Wincing Leia, and a Seriously Great Friendship: Learning to Classify Emotional Relationships of Fictional Characters.
In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 647-653.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 15th Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "28--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task. In Proceedings of the 15th Conference on Computational Natural Language Learning: Shared Task, pages 28-34.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "End-to-end neural coreference resolution", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "188--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Yearning void and infinite potential: Online slash fandom as queer female space", "authors": [ { "first": "Alexis", "middle": [], "last": "Lothian", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Busse", "suffix": "" }, { "first": "Robin", "middle": [ "Anne" ], "last": "Reid", "suffix": "" } ], "year": 2007, "venue": "English Language Notes", "volume": "45", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Lothian, Kristina Busse, and Robin Anne Reid. 2007. Yearning void and infinite potential: Online slash fandom as queer female space. English Language Notes, 45(2).", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Redefining genderswap fan fiction: A Sherlock case study", "authors": [ { "first": "Ann", "middle": [], "last": "McClellan", "suffix": "" } ], "year": 2014, "venue": "Transformative Works & Cultures", "volume": "17", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ann McClellan. 2014. Redefining genderswap fan fiction: A Sherlock case study.
Transformative Works & Cultures, 17.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "WordNet: a lexical database for English", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Beyond Canonical Texts: A Computational Analysis of Fanfiction", "authors": [ { "first": "Smitha", "middle": [], "last": "Milli", "suffix": "" }, { "first": "David", "middle": [], "last": "Bamman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP-16)", "volume": "", "issue": "", "pages": "2048--2053", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smitha Milli and David Bamman. 2016. Beyond Canonical Texts: A Computational Analysis of Fanfiction. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP-16), pages 2048-2053.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Which coreference evaluation metric do you trust? A proposal for a link-based entity aware metric", "authors": [ { "first": "Nafise", "middle": [ "Sadat" ], "last": "Moosavi", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "632--642", "other_ids": { "DOI": [ "10.18653/v1/P16-1060" ] }, "num": null, "urls": [], "raw_text": "Nafise Sadat Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? A proposal for a link-based entity aware metric. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 632-642, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A two-stage sieve approach for quote attribution", "authors": [ { "first": "Felix", "middle": [], "last": "Muzny", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Angel", "middle": [ "X" ], "last": "Chang", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "460--470", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Muzny, Michael Fang, Angel X. Chang, and Dan Jurafsky. 2017. A two-stage sieve approach for quote attribution.
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), volume 1, pages 460-470.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A sequence labelling approach to quote attribution", "authors": [ { "first": "Tim", "middle": [], "last": "O'Keefe", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Pareti", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" }, { "first": "Irena", "middle": [], "last": "Koprinska", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "790--799", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim O'Keefe, Silvia Pareti, James R. Curran, Irena Koprinska, and Matthew Honnibal. 2012. A sequence labelling approach to quote attribution. Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 790-799.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", "authors": [ { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes.
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1-40, USA.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A multi-pass sieve for coreference resolution", "authors": [ { "first": "Karthik", "middle": [], "last": "Raghunathan", "suffix": "" }, { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sudarshan", "middle": [], "last": "Rangarajan", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "492--501", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthik Raghunathan, Heeyoung Lee, Sudarshan Rangarajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher D. Manning. 2010. A multi-pass sieve for coreference resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 492-501.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "The life and death of discourse entities: Identifying singleton mentions", "authors": [ { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marta Recasens, Marie-Catherine de Marneffe, and Christopher Potts. 2013. The life and death of discourse entities: Identifying singleton mentions. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 627-633.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Practical text phylogeny for real-world settings", "authors": [ { "first": "Bingyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christopher", "middle": [ "W" ], "last": "Forstall", "suffix": "" }, { "first": "Anderson", "middle": [], "last": "De Rezende Rocha", "suffix": "" }, { "first": "Walter", "middle": [ "J" ], "last": "Scheirer", "suffix": "" } ], "year": 2018, "venue": "IEEE Access", "volume": "6", "issue": "", "pages": "41002--41012", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bingyu Shen, Christopher W. Forstall, Anderson De Rezende Rocha, and Walter J. Scheirer. 2018. Practical text phylogeny for real-world settings.
IEEE Access, 6:41002-41012.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Measuring information propagation in literary social networks", "authors": [ { "first": "Matthew", "middle": [], "last": "Sims", "suffix": "" }, { "first": "David", "middle": [], "last": "Bamman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "642--652", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.47" ] }, "num": null, "urls": [], "raw_text": "Matthew Sims and David Bamman. 2020. Measuring information propagation in literary social networks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 642-652, Online. Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Literary event detection", "authors": [ { "first": "Matthew", "middle": [], "last": "Sims", "suffix": "" }, { "first": "Jong", "middle": [ "Ho" ], "last": "Park", "suffix": "" }, { "first": "David", "middle": [], "last": "Bamman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3623--3634", "other_ids": { "DOI": [ "10.18653/v1/P19-1353" ] }, "num": null, "urls": [], "raw_text": "Matthew Sims, Jong Ho Park, and David Bamman. 2019. Literary event detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3623-3634, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Homosexuality at the Online Hogwarts: Harry Potter Slash Fanfiction", "authors": [ { "first": "Catherine", "middle": [], "last": "Tosenberger", "suffix": "" } ], "year": 2008, "venue": "Children's Literature", "volume": "36", "issue": "1", "pages": "185--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Catherine Tosenberger. 2008. Homosexuality at the Online Hogwarts: Harry Potter Slash Fanfiction. Children's Literature, 36(1):185-207.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Harry Potter and the action prediction challenge from natural language", "authors": [ { "first": "David", "middle": [], "last": "Vilares", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "G\u00f3mez-Rodr\u00edguez", "suffix": "" } ], "year": 2019, "venue": "NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference", "volume": "1", "issue": "", "pages": "2124--2130", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Vilares and Carlos G\u00f3mez-Rodr\u00edguez. 2019. Harry Potter and the action prediction challenge from natural language. NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference, 1:2124-2130.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Characters in Bakhtin's Theory", "authors": [ { "first": "Anthony", "middle": [], "last": "Wall", "suffix": "" } ], "year": 1984, "venue": "Studies in 20th Century Literature", "volume": "9", "issue": "1", "pages": "2334--4415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony Wall. 1984. Characters in Bakhtin's Theory.
Studies in 20th Century Literature, 9(1):2334-4415.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Where no one has gone before: A meta-dataset of the world's largest fanfiction repository", "authors": [ { "first": "Kodlee", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Cecilia", "middle": [], "last": "Aragon", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Katie", "middle": [], "last": "Davis", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "6106--6110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kodlee Yin, Cecilia Aragon, Sarah Evans, and Katie Davis. 2017. Where no one has gone before: A meta-dataset of the world's largest fanfiction repository. In Proceedings of the Conference on Human Factors in Computing Systems, pages 6106-6110.", "links": null } }, "ref_entries": { "TABREF2": { "num": null, "content": "", "text": "The 10 most popular fandoms on Archive of Our Own by number of works, as of September 2018. We annotate one story from each fandom to form our test set.", "type_str": "table", "html": null }, "TABREF4": { "num": null, "content": "
", "text": "Fanfiction evaluation dataset statistics", "type_str": "table", "html": null }, "TABREF7": { "num": null, "content": "
", "text": "Character coreference performance on CoNLL and LEA metrics. O: Model is trained on OntoNotes.", "type_str": "table", "html": null }, "TABREF8": { "num": null, "content": "
                                      With system coreference   With gold coreference   With gold quote extraction
                                      P     R     F1            P     R     F1          P     R     F1
BookNLP                               54.6  25.4  34.7          66.8  38.9  49.2        65.0  49.7  56.3
He et al. (2013)                      54.0  53.3  53.6          56.5  55.7  56.1        56.7  56.0  56.3
Muzny et al. (2017) (FanfictionNLP)   68.7  67.0  67.8          73.5  75.4  74.4        77.5  77.5  77.5
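As a reading aid (not part of the original table), F1 here is the harmonic mean of precision P and recall R, which can be verified against any row:

```latex
F_1 = \frac{2PR}{P + R}
% e.g., BookNLP with system coreference:
% (2 * 54.6 * 25.4) / (54.6 + 25.4) = 2773.68 / 80.0 \approx 34.7
```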
", "text": "With system coreference With gold coreference With gold quote extraction", "type_str": "table", "html": null }, "TABREF9": { "num": null, "content": "
, post-hoc resolution of first-person (I) and second-person (you) pronouns with perfect quote
", "text": "", "type_str": "table", "html": null }, "TABREF10": { "num": null, "content": "
Coreference resolution scores on the 10 fanfiction evaluation stories are reported. Improvements gained from changing the attribution of I and you within quotes are shown, both with the Muzny et al. (2017) quote attribution system used in the FanfictionNLP pipeline and with gold quote annotations, which give the upper bound of improvement.
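The post-hoc step evaluated here is straightforward to state: once a quote has been attributed, a first-person pronoun inside that quote must corefer with the attributed speaker, and a second-person pronoun with the addressee when one is available. Below is a minimal sketch of that reassignment. The Mention and Quote structures, the pronoun lists, and the function name are hypothetical simplifications for illustration, not the pipeline's actual interfaces.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Hypothetical, simplified data structures for illustration.
@dataclass
class Mention:
    start: int   # token offset where the mention begins
    end: int     # token offset where the mention ends (exclusive)
    text: str

@dataclass
class Quote:
    start: int                        # token span covered by the quotation
    end: int
    speaker: str                      # character assigned by quote attribution
    addressee: Optional[str] = None   # may be unknown

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
SECOND_PERSON = {"you", "your", "yours", "yourself"}

def resolve_quote_pronouns(clusters: Dict[str, List[Mention]],
                           quotes: List[Quote]) -> Dict[str, List[Mention]]:
    """Reassign I/you mentions that fall inside an attributed quote to the
    quote's speaker/addressee cluster, leaving other mentions untouched."""
    resolved: Dict[str, List[Mention]] = {name: [] for name in clusters}
    for name, mentions in clusters.items():
        for m in mentions:
            target = name
            # Find the quote (if any) that contains this mention.
            quote = next((q for q in quotes
                          if q.start <= m.start and m.end <= q.end), None)
            if quote is not None:
                word = m.text.lower()
                if word in FIRST_PERSON:
                    target = quote.speaker
                elif word in SECOND_PERSON and quote.addressee:
                    target = quote.addressee
            resolved.setdefault(target, []).append(m)
    return resolved

# Toy usage: the "I" at tokens 5-6 sits inside a quote attributed to Zuko,
# so it moves from the (mistaken) narrator cluster to Zuko's cluster.
clusters = {"Narrator": [Mention(5, 6, "I")], "Zuko": [Mention(0, 1, "Zuko")]}
quotes = [Quote(3, 10, speaker="Zuko")]
print(resolve_quote_pronouns(clusters, quotes))
```

With gold quote annotations, the containing span and speaker are always correct, which is why the table reports that setting as the upper bound: any remaining error comes from the coreference clusters themselves rather than from attribution mistakes.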
", "text": "Quote Pronoun Resolution evaluation scores.", "type_str": "table", "html": null } } } }