{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:07.363400Z"
},
"title": "Towards a Model-Theoretic View of Narratives",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Castricato",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Georgia",
"middle": [],
"last": "Tech",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Stella",
"middle": [],
"last": "Biderman",
"suffix": "",
"affiliation": {},
"email": "stella@eleuther.ai"
},
{
"first": "Rogelio",
"middle": [
"E"
],
"last": "Cardona-Rivera",
"suffix": "",
"affiliation": {},
"email": "rogelio@cs.utah.edu"
},
{
"first": "David",
"middle": [],
"last": "Thue",
"suffix": "",
"affiliation": {},
"email": "david.thue@carleton.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose the beginnings of a formal framework for modeling narrative qua narrative. Our framework affords the ability to discuss key qualities of stories and their communication, including the flow of information from a Narrator to a Reader, the evolution of a Reader's story model over time, and Reader uncertainty. We demonstrate its applicability to computational narratology by giving explicit algorithms for measuring the accuracy with which information was conveyed to the Reader, along with two novel measurements of story coherence.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose the beginnings of a formal framework for modeling narrative qua narrative. Our framework affords the ability to discuss key qualities of stories and their communication, including the flow of information from a Narrator to a Reader, the evolution of a Reader's story model over time, and Reader uncertainty. We demonstrate its applicability to computational narratology by giving explicit algorithms for measuring the accuracy with which information was conveyed to the Reader, along with two novel measurements of story coherence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Story understanding is both (1) the process through which a cognitive agent (human or artificial) mentally constructs a plot through the perception of a narrated discourse, and (2) the outcome of that process: i.e., the agent's mental representation of the plot. The best way to computationally model story understanding is contextual to the aims of a given research program, and today we enjoy a plethora of artificial intelligence (AI)-based capabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Data-driven approaches-including statistical, neural, and neuro-symbolic ones-look to narrative as a benchmark task for demonstrating human-level competency on inferencing, question-answering, and storytelling. That is, they draw associations between event (Chambers and Jurafsky, 2008) , causal (Li et al., 2012) , and purposive (Jiang and Riloff, 2018) information extracted from textual or visual narrative corpora to answer questions or generate meaningful stories that depend on information implied and not necessarily expressed by stories (e.g. Roemmele et al., 2011; Mostafazadeh et al., 2016; Martin et al., 2018; Kim et al., 2019) .",
"cite_spans": [
{
"start": 257,
"end": 286,
"text": "(Chambers and Jurafsky, 2008)",
"ref_id": "BIBREF8"
},
{
"start": 296,
"end": 313,
"text": "(Li et al., 2012)",
"ref_id": "BIBREF17"
},
{
"start": 330,
"end": 354,
"text": "(Jiang and Riloff, 2018)",
"ref_id": "BIBREF15"
},
{
"start": 551,
"end": 573,
"text": "Roemmele et al., 2011;",
"ref_id": "BIBREF24"
},
{
"start": 574,
"end": 600,
"text": "Mostafazadeh et al., 2016;",
"ref_id": "BIBREF21"
},
{
"start": 601,
"end": 621,
"text": "Martin et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 622,
"end": 639,
"text": "Kim et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Symbolic approaches seek to understand narrative, its communication, and its effect by using AI techniques as computational modeling tools, including logic, constraint satisfaction, and automated planning. These include efforts to model creative storytelling as a search process (Riedl and Young, 2006; Thue et al., 2016) , generating stories with predictable effects on their comprehension by audiences (Cardona-Rivera et al., 2016) , and modeling story understanding through humanconstrained techniques (Martens et al., 2020) .",
"cite_spans": [
{
"start": 279,
"end": 302,
"text": "(Riedl and Young, 2006;",
"ref_id": "BIBREF22"
},
{
"start": 303,
"end": 321,
"text": "Thue et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 404,
"end": 433,
"text": "(Cardona-Rivera et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 505,
"end": 527,
"text": "(Martens et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite recent advances, few works have offered a thorough conceptual account of narrative in a way that affords reconciling how different research programs might relate to each other. Without a foundation for shared progress, our community might strain to determine how individual results may build upon each other to make progress on story understanding AI that performs as robustly and flexibly as humans do (Cardona-Rivera and Young, 2019) . In this paper, we take steps toward such a foundation.",
"cite_spans": [
{
"start": 411,
"end": 443,
"text": "(Cardona-Rivera and Young, 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We posit that such a foundation must acknowledge the diverse factors that contribute to an artifact being treated as a narrative. Key among these factors is a narrative's communicative status: unlike more-general natural language generation (cf. Gatt and Krahmer, 2018) , an audience's belief dynamics-the trajectory of belief expansions, contractions, and revisions (Alchourr\u00f3n et al., 1985) -is core to what gives a narrative experience its quality (Herman, 2013) . Failure to engage with narratives on these grounds risks losing an essential aspect of what makes narrative storytelling a vibrant and unique form of literature.",
"cite_spans": [
{
"start": 246,
"end": 269,
"text": "Gatt and Krahmer, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 367,
"end": 392,
"text": "(Alchourr\u00f3n et al., 1985)",
"ref_id": "BIBREF0"
},
{
"start": 451,
"end": 465,
"text": "(Herman, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To that end, we define a preliminary theoretical framework of narrative centered on information entropy. Our framework is built atop model theory, the set-theoretic study of language interpretation. Model theory is a field of formal logic that has been used extensively by epistomologists, linguists, and other theorists as a framework for building logical semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions. In this paper, we propose the beginnings of a formal framework for modeling narrative qua narrative. Our framework affords discussing the flow of information from a Narrator to a Reader, the evolution of a Reader's story model over time, and Reader uncertainty. Our work is grounded in the long history of narratology, drawing on the rich linguistic and philosophical history of the field to justify our notions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use our framework to make experimentally verifiable conjectures about how story readers respond to under-specification of the story world and how to use entropy to identify plot points. We additionally demonstrate its applicability to computational narratology by giving explicit algorithms for measuring the accuracy with which information was conveyed to the Reader. We also propose two novel measurements of story coherence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Before we can begin defining narrative in a formal sense, we must examine the intuitive notions of what narrative is supposed to mean. While we cannot address all of the complexity of narratology in this work, we cover some key perspectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Rigorous Notions of Narrative",
"sec_num": "2"
},
{
"text": "We begin with the structuralist account within narratology; it frames a narrative (story) as a communicative, designed artifact-the product of a narration, itself a realization (e.g. book, film) of a discourse (H\u00fchn and Sommer, 2013) . The discourse is the story's information layer (Genette, 1980) : an author-structured, temporally-organized subset of the fabula; a discourse projects a fabula's information. The fabula is the story's world, which includes its characters, or intention-driven agents; locations, or spatial context; and events, the causally-, purposely-, and chronologically-related situation changes (Bal, 1997; Rimmon-Kenan, 2002) .",
"cite_spans": [
{
"start": 210,
"end": 233,
"text": "(H\u00fchn and Sommer, 2013)",
"ref_id": "BIBREF14"
},
{
"start": 283,
"end": 298,
"text": "(Genette, 1980)",
"ref_id": "BIBREF11"
},
{
"start": 619,
"end": 630,
"text": "(Bal, 1997;",
"ref_id": "BIBREF2"
},
{
"start": 631,
"end": 650,
"text": "Rimmon-Kenan, 2002)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Narratives as Physical Artifacts",
"sec_num": "2.1"
},
{
"text": "As a designed artifact, a narrative reflects authorial intent. Authors design the stories they tell to affect audiences in specific ways; their designs ultimately target effecting change in the minds of audiences (Bordwell, 1989) . This design stems from the authors' understanding of their fabula and of the narration that conveys its discourse. When audiences encounter the designed artifact, they perform story understanding: they attempt to mentally construct a fabula through their perception of the story's narration.",
"cite_spans": [
{
"start": 213,
"end": 229,
"text": "(Bordwell, 1989)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Narratives as Physical Artifacts",
"sec_num": "2.1"
},
{
"text": "Story psychologists frame the narration as instructions that guide story understanding (Gernsbacher et al., 1990) . The fabula in the audience's mind is termed the situation model-a mental representation of the virtual world and the events that have transpired within it, formed from information both explicitly-narrated and inferable-from a narration (Zwaan and Radvansky, 1998) . The situation model itself is the audience's understanding; it reflects a tacit belief about the fabula, and is manipulated via three (fabula-belief) update operations. These work across memory retrieval, inferencing, and question-answering cognition: (1) expansion, when the audience begins to believe something, (2) contraction, when the audience ceases to believe something, and (3) revision, when the audience expands their belief and contracts newly inconsistent beliefs.",
"cite_spans": [
{
"start": 87,
"end": 113,
"text": "(Gernsbacher et al., 1990)",
"ref_id": "BIBREF12"
},
{
"start": 352,
"end": 379,
"text": "(Zwaan and Radvansky, 1998)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Narratives as Mental Artifacts",
"sec_num": "2.2"
},
{
"text": "To the post-structuralist, the emphasis that the psychological account puts on the author is fundamentally misplaced (Barthes, 1967) . From this point of view, books are meant to be read, not written, and how they influence and are interpreted by their readers is as essential to their essence as the intention of the author. In \"Death of the Author\" Barthes (Barthes, 1967) reinforces this concept by persistently referring to the writer of a narrative not as its creator or its author, but as its sculptor -one who shapes and guides the work but does not dictate to their audience its meaning.",
"cite_spans": [
{
"start": 117,
"end": 132,
"text": "(Barthes, 1967)",
"ref_id": "BIBREF3"
},
{
"start": 359,
"end": 374,
"text": "(Barthes, 1967)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Narratives as Received Artifacts",
"sec_num": "2.3"
},
{
"text": "The core of our framework for modeling narrative come from a field of mathematical logic known as model theory. Model theory is a powerful yet flexible framework that has heavily influenced computer scientists, literary theorists, linguists, and philosophers (Sider, 2010) . Despite the centrality of model theory in our framework, a deep understanding of the topic is not necessary to work with it on an applied level. Our goal in this section is thus to give an intuitive picture of model theory that is sufficient to understand how we will use it to talk about narratives. We refer an interested reader to Sider (2010); Chang and Keisler (1990) for a more complete presentation of the subject.",
"cite_spans": [
{
"start": 259,
"end": 272,
"text": "(Sider, 2010)",
"ref_id": "BIBREF25"
},
{
"start": 633,
"end": 647,
"text": "Keisler (1990)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Model-Theoretic View of Narrative",
"sec_num": "3"
},
{
"text": "The central object of study in model theory is a \"model.\" Loosely speaking, a model is a world in which particular propositions are true. A model has two components: a domain, which is the set of objects the model makes claims about, and a theory, which is a set of consistent sentences that make claims about elements of the domain. Models in many ways resemble fabulas, in that they describe the relational properties of objects. Model theory, however, requires that the theory of a model be complete -every expressible proposition must be either true or false in a particular model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Outline of Model Theory",
"sec_num": "3.1"
},
{
"text": "Meanwhile, our notion of a fabula can be incomplete -it can leave the truth of some propositions undefined. This means that the descriptions we are interested in do not correspond to only one model, but rather that there is an infinite set of models that are consistent with the description. This may seem like a limitation, but we will show in Section 6 that it is actually amenable to analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Outline of Model Theory",
"sec_num": "3.1"
},
{
"text": "As an example, consider a simple world in which people can play cards with one another and wear clothes of various colours. The description \"Jay wears blue. Ali plays cards with Jay.\" is incomplete because it does not say what colours Ali wears nor what other colours Jay wears. This description is consistent with a world in which there are characters other than Jay and Ali or colours other than blue (varying the domain), as well as one where additional propositions such as \"Ali wears blue.\" hold (varying the theory).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Outline of Model Theory",
"sec_num": "3.1"
},
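{
"text": "The following Python sketch (an illustration with hypothetical names, not code from the paper) encodes the incomplete description above as a set of propositions and checks whether candidate complete worlds are consistent with it.\n\n# An incomplete description, encoded as propositions that must hold.\ndescription = {('wears', 'Jay', 'blue'), ('plays_cards', 'Ali', 'Jay')}\n\n# Two candidate complete worlds that extend the description differently.\nworld_a = description | {('wears', 'Ali', 'blue')}\nworld_b = description | {('wears', 'Ali', 'red'), ('wears', 'Jay', 'green')}\n\ndef consistent_with(description, world):\n    # A world is consistent with an incomplete description if it makes\n    # every proposition in the description true.\n    return description <= world\n\nprint(consistent_with(description, world_a))  # True\nprint(consistent_with(description, world_b))  # True: the description underdetermines the world",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Outline of Model Theory",
"sec_num": "3.1"
},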
{
"text": "Although we learn more about the domain and the theory of the narrator's model as the story goes on, we will never learn every single detail. Some of these details may not even be known to the narrator! For this reason, our framework puts a strong emphasis on consistency between models, and on the set of all models that are consistent with a particular set of statements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Outline of Model Theory",
"sec_num": "3.1"
},
{
"text": "Another very important aspect of model theory is that it is highly modular. Much of model theory is independent of the underlying logical semantics, which allows us to paint a very general picture. If a particular application requires augmenting the storytelling semantics with additional logical operators or relations, that is entirely non-problematic. For example, it is common for fabulas to contain Cause(X, Y) := \"X causes Y\" and Aft(X, Y) := \"Y occurs after X.\" Although we don't specifically define either of these relations, they can be included in a particular application by simply adding them to the underlying logic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Outline of Model Theory",
"sec_num": "3.1"
},
{
"text": "As detailed in section 2, the fabula and story-world (i.e., situation) model are two central components of how people talk about storytelling. In this section, we introduce formal definitions of these concepts as well as some of their properties. Definition 3.1. A language, L, is a set of rules for forming syntactically valid propositions. In this work we will make very light assumptions about L and leave its design largely up to the application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story-World Models and the Fabula",
"sec_num": "3.2"
},
{
"text": "A language describes syntactic validity, but it doesn't contain a notion of truth. For that, we need a model. Definition 3.2. A story world model, S, over a language L is comprised of two parts: a domain, which is the set of things that exist in the story, and an interpretation function, which takes logical formulae and maps them to corresponding objects in the domain. In other words, the interpretation function is what connects the logical expression \"A causes B\" to the signified fact in the world that the thing we refer to as A causes the thing we refer to as B. Definition 3.3. The theory of a story world model, S, is the set of all propositions that are true in S. It is denotedS. When we say \"P is true in the model S\" we mean that P \u2208S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story-World Models and the Fabula",
"sec_num": "3.2"
},
{
"text": "Formalizing the concept of a fabula is a bit trickier. Traditionally, fabulas are represented diagrammatically as directed graphs, but this representation gives little insight into their core attributes. We posit that, at their core, fabulas are relational objects. Specifically, they are a collection of elements of the domain of the story-world model together with claims about the relationships between those objects. Additionally, there is a sense in which the fabula is a \"scratch pad\" for the story-world model. While a reader may not even be able to hold an entire infinite story-world model in their head, they can more easily grasp the distillation of that story-world model into a fabula. Definition 3.4. A reasoner's fabula for a story world model S, denoted F , is a set of propositions that makes claims about S. A proposition P is a member of F if it is an explicit belief of the reasoner about the narrative that the reasoner deems important to constructing an accurate story-world model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story-World Models and the Fabula",
"sec_num": "3.2"
},
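{
"text": "The following Python sketch (an illustration with hypothetical names, not code from the paper) shows one way Definitions 3.2-3.4 could be realized as data structures; for brevity the interpretation function is collapsed into a set of ground atoms taken to hold in the model.\n\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass StoryWorldModel:\n    domain: set                               # things that exist in the story\n    facts: set = field(default_factory=set)   # ground atoms taken to hold\n\n    def theory(self):\n        # The 'theory' of the model: here simply the set of facts that hold.\n        # A richer logic would also include everything derivable from them.\n        return set(self.facts)\n\n@dataclass\nclass Fabula:\n    propositions: set   # explicit beliefs the reasoner keeps about the model\n\ns = StoryWorldModel(domain={'Jay', 'Ali', 'blue'},\n                    facts={('wears', 'Jay', 'blue'), ('plays_cards', 'Ali', 'Jay')})\nf = Fabula(propositions={('plays_cards', 'Ali', 'Jay')})\nprint(f.propositions <= s.theory())  # the fabula's claims hold in the model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story-World Models and the Fabula",
"sec_num": "3.2"
},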
{
"text": "An important aspect of stories is that they are a way to convey information. In this section, we will discuss how to formalize this process and what we can learn about it. Although stories can be constructed and conveyed in many different ways, we will speak of a Narrator who tells the story and a Reader who receives it for simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conveying Story Information",
"sec_num": "4"
},
{
"text": "The core of how we model storytelling as an act of communication can be seen in Figure 1 . This diagram represents the transmission of information from the Narrator's story-world (S N ) to the Reader's (S R ), with each arrow representing the transmission from one representation to another. In an idealized world, stories would be conveyed by d : straight from the story world of the narrator (S N ) to the story world of the reader (S R ). In actuality, narrators must convey their ideas through media 1 . To do this, the narrator compresses their mental story world (via \u03c6) into a fabula (F N ) which is then conveyed to the reader via speech, writing, etc. The conveyance of the fabula as understood by the Narrator (F N ) to the fabula as understood by the Reader (F R ) is denoted in our diagram by d. d is in many ways the real-world replacement for the function d the Narrator is unable to carry out. Once the discourse has been consumed by the Reader, the Reader then takes their reconstructed fabula (F R ) and uses the received information to update their story world model (S R , via \u03c8).",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Conveying Story Information",
"sec_num": "4"
},
{
"text": "S N S R F N F R \u03c6 d d \u03c8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conveying Story Information",
"sec_num": "4"
},
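{
"text": "The following Python sketch (an illustration with hypothetical names, not code from the paper) renders the maps described above as composed functions over sets of propositions: \u03c6 compresses the narrator's story world into a fabula, d\u0302 conveys it through a medium, and \u03c8 folds the received fabula into the reader's story world.\n\ndef phi(narrator_world):\n    # Compress the narrator's story world into a fabula: keep only the\n    # propositions the narrator chooses to narrate.\n    return {p for p in narrator_world if p[0] != 'private'}\n\ndef d_hat(narrator_fabula):\n    # Conveyance through a medium; a lossless channel in this toy example.\n    return set(narrator_fabula)\n\ndef psi(reader_world, reader_fabula):\n    # Update the reader's story-world model with the received fabula.\n    return reader_world | reader_fabula\n\nS_N = {('wears', 'Jay', 'blue'), ('private', 'unstated backstory')}\nS_R = set()\nS_R = psi(S_R, d_hat(phi(S_N)))\nprint(S_R)  # the reader's model after one pass through the diagram",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conveying Story Information",
"sec_num": "4"
},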
{
"text": "Often times, information conveyed from the Narrator to the Reader is \"conveyed correctly.\" By this, we mean that the essential character of the story was conveyed from the Narrator to the Reader in such a way that the Reader forms accurate beliefs about the story-world. While accuracy is not always a primary consideration -some stories feature unreliable narrators or deliberately mislead the Reader to induce experiences such as suspense, fear, and anticipation -the ability to discuss the accuracy and consistency of the telling of the story is an essential part of analyzing a narrative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accurately Conveying Information",
"sec_num": "4.1"
},
{
"text": "The d arrow in our diagram suggests a reasonable criteria for accurate conveyance: a story is accurately conveyed if the path S N \u2192 F N \u2192 F R \u2192 S R and the path S N S R compute the same (or, in practice, similar) functions. In mathematics, this property of path-independence is known as commutativity and the diagram is called a \"commutative diagram\" when it holds. For the purposes of narrative work, the essential aspect is that the arrows \"map corresponding objects correspondingly.\" That is, if a story is accurately conveyed from N to R then for each proposition P \u2208 S N there should be a corresponding P \u2208 S R such that the interpretations of P and P (with respect to their respective models) have the same truth value and (\u03c6 \u2022 d \u2022 \u03c8)(P ) = P . In other words, P and P make the same claims about the same things.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accurately Conveying Information",
"sec_num": "4.1"
},
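{
"text": "As a sketch of the accuracy criterion just described (an illustration with hypothetical names, not code from the paper), one can probe a sample of propositions and check whether the Narrator's and Reader's models assign them the same truth value; the correspondence between P and P\u2032 is taken to be identity here, which will not hold in general.\n\ndef conveyance_accuracy(narrator_facts, reader_facts, probe_propositions):\n    # Fraction of probed propositions on which the two models agree.\n    agree = sum(1 for p in probe_propositions\n                if (p in narrator_facts) == (p in reader_facts))\n    return agree / len(probe_propositions)\n\nS_N = {('wears', 'Jay', 'blue'), ('plays_cards', 'Ali', 'Jay')}\nS_R = {('wears', 'Jay', 'blue')}\nprobes = [('wears', 'Jay', 'blue'), ('plays_cards', 'Ali', 'Jay'), ('wears', 'Ali', 'red')]\nprint(conveyance_accuracy(S_N, S_R, probes))  # 2 of 3 probes agree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accurately Conveying Information",
"sec_num": "4.1"
},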
{
"text": "The transference of information depicted in fig. 1 gives rise to a straightforward way to understand how the Reader gains knowledge during the course of the story and incorporates new information into their existing story-world model. One pass through the diagram from S N to S R represents \"one time step\" of the evolution of the Reader's world model 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 50,
"text": "fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Time-Evolution of Story-World Models",
"sec_num": "4.2"
},
{
"text": "Iterating this process over the the entire work gives a time series of story-world models, S R (t), with S R (i) representing the Reader's story-world model at time t = i. We are also typically interested in how the story-world model changes over time, as the Reader revises their understanding of the story-world through consuming the discourse. This will be the subject of the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Evolution of Story-World Models",
"sec_num": "4.2"
},
{
"text": "A commonly accepted notion in narratology is that at any given moment, a reader contains a potentially infinite set of possible worlds. Determining which of these worlds agree with each other is a required attribute for consuming discourse. How do we discuss the notion of collapsing possible worlds upon acquiring new knowledge?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Look at Temporal Evolution, with Applications to Plot",
"sec_num": "5"
},
{
"text": "Assume that we have a narrator, N , and reader R with fabulas F N and F R respectively. Given our definition of a story world model, S, we define S(t) as the set of all world models that satisfy F R (t). Let \u03c1 t+1 refer to the set of formulae that are contained in F R (t + 1)\\F R (t). Let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Look at Temporal Evolution, with Applications to Plot",
"sec_num": "5"
},
{
"text": "S R (t + 1) = S R (t + 1) \u2229 S R (t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Look at Temporal Evolution, with Applications to Plot",
"sec_num": "5"
},
{
"text": "and similarl\u1ef9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Look at Temporal Evolution, with Applications to Plot",
"sec_num": "5"
},
{
"text": "S R (t + 1) =S R (t + 1) \u2229S R (t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Look at Temporal Evolution, with Applications to Plot",
"sec_num": "5"
},
{
"text": "refer to the shared world models between the two adjacent time steps. Note that it must follow \u2200\u03c1 \u2208 P t+1 , \u2200s \u2208S R (t + 1), \u03c1 \u2208 s. That is to say, the story worlds that remain between the two time steps are the ones that agree on the propositions added by consuming F N (t + 1). Since this can be repeated inductively, we can assume that for any such t we have that all such models agree on all such provided propositions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Look at Temporal Evolution, with Applications to Plot",
"sec_num": "5"
},
{
"text": "Something to note that for \u03c1 \u2208 P t+1 , \u03c1 will always be either true or false inS R (t)regardless if it is expressed in the fabula or not sinceS R (t) is the logical closure of S R (t).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Look at Temporal Evolution, with Applications to Plot",
"sec_num": "5"
},
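{
"text": "A minimal Python sketch of the collapse described above (an illustration with hypothetical names, not code from the paper): the candidate story worlds at time t are represented as sets of propositions, and consuming the newly conveyed propositions keeps only the worlds that satisfy all of them.\n\ndef collapse(candidate_worlds, new_propositions):\n    # Keep the story worlds that agree on every newly conveyed proposition.\n    return [w for w in candidate_worlds if new_propositions <= w]\n\nworlds_t = [\n    {('wears', 'Jay', 'blue'), ('wears', 'Ali', 'blue')},\n    {('wears', 'Jay', 'blue'), ('wears', 'Ali', 'red')},\n    {('wears', 'Jay', 'green'), ('wears', 'Ali', 'red')},\n]\nnew_props = {('wears', 'Jay', 'blue')}\nworlds_t1 = collapse(worlds_t, new_props)\nprint(len(worlds_t), '->', len(worlds_t1))  # 3 -> 2 candidate worlds remain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Look at Temporal Evolution, with Applications to Plot",
"sec_num": "5"
},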
{
"text": "Note that a set of story worldsS R (t) does not provide us a transition function to discuss how the world evolves over time. Furthermore, there is no reasonable way to inferS R (t) \u2192S R (t + 1), as S R (t) provides no information about the actions that could inhibit or allow for this transition-it simply provides us information about whether a proposition is true within our story world. To rectify this, we need to expand our commutative diagram to act across time. The full diagram can be found in the appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collapse of Worlds over Time",
"sec_num": "5.1"
},
{
"text": "Let \u03b6 N denote the transition function from F N (t) to F N (t + 1). Define \u03b6 R likewise. See Figure 2 on page 10. Note that there is no inherent general form of \u03b6 N or \u03b6 R as they are significantly context dependent. One can think of them as performing graph edits on F N and F R respectively, to add the new information expressed in S N (t + 1) for",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 101,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Collapse of Worlds over Time",
"sec_num": "5.1"
},
{
"text": "\u03b6 N and (d \u2022 \u03c6)(S N (t + 1)) for \u03b6 R .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collapse of Worlds over Time",
"sec_num": "5.1"
},
{
"text": "The objective of \u03b6 R in turn is to guide the fabula to reach goals. This imposes a duality of \u03c8 and \u03b6 R . \u03c8 attempts to generate the best candidate story worlds for the reader's current understanding, where as \u03b6 R eliminates them by the direction the author wants to go. This in turn brings us to the notion of compression and expansion. If \u03c8 is left unchecked, it will continuously expand the fabula. In turn \u03b6 R is given the goal of compressing the story worlds that \u03c8 produces by looking at the resulting transition functions that best match the author's intent. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collapse of Worlds over Time",
"sec_num": "5.1"
},
{
"text": "Stories contain many different threads and facts, and it would be nice to be able to identify the ones that are relevant to the plot. We begin with the idea of the relevance of one question to another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Plot Relevance",
"sec_num": "5.2"
},
{
"text": "Definition 5.1. Consider a question q about a story, where q has the form \"if A then B\" and possible values for A = {T, F } and possible values for B = {T, F }. We say that the relevance of B to A given some prior \u03b3 is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Plot Relevance",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(A = a i |\u03b3) \u2212 H(B = b j |A = a i , \u03b3)",
"eq_num": "(1)"
}
],
"section": "Plot Relevance",
"sec_num": "5.2"
},
{
"text": "where a i and b j are the true answers to A and B and H refers to binary entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Plot Relevance",
"sec_num": "5.2"
},
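{
"text": "One plausible way to estimate the relevance in Definition 5.1 (an illustration, not the paper's code) is to treat the prior \u03b3 as a finite sample of story worlds and compute the two binary entropies empirically; answer_A and answer_B are assumed to return the answer each question receives in a given world.\n\nimport math\n\ndef binary_entropy(p):\n    if p in (0.0, 1.0):\n        return 0.0\n    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)\n\ndef relevance(worlds, answer_A, answer_B, a_true, b_true):\n    # H(A = a_i | gamma) - H(B = b_j | A = a_i, gamma), with gamma the sample.\n    p_a = sum(answer_A(w) == a_true for w in worlds) / len(worlds)\n    matching = [w for w in worlds if answer_A(w) == a_true]\n    p_b_given_a = (sum(answer_B(w) == b_true for w in matching) / len(matching)\n                   if matching else 0.0)\n    return binary_entropy(p_a) - binary_entropy(p_b_given_a)\n\nworlds = [{'A': True, 'B': True}, {'A': True, 'B': False},\n          {'A': False, 'B': False}, {'A': True, 'B': True}]\nprint(relevance(worlds, lambda w: w['A'], lambda w: w['B'], True, True))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Plot Relevance",
"sec_num": "5.2"
},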
{
"text": "Note that the relevance of B to A depends on the true answers. This is perhaps surprising, but after some consideration it should be clear that this has to be true. After all, the causal relationship between A and B could depend on the true answers! Consider the case where A is \"is Harry Potter the prophesied Heir of Slytherin?\" and B is \"can Harry Potter speak Parseltongue because he is a descendent of Slytherin?\" If Harry is a blood descendant of Slytherin and that's why he can speak Parseltongue, then B is highly relevant to A. However, the actual truth of the matter is that Harry's abilities are completely independent of his heritage and arose due to a childhood experience. Therefore B does not in fact have relevance to A even though it could have had relevance to A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Plot Relevance",
"sec_num": "5.2"
},
{
"text": "Having defined a notion of the relevance of Question A to Question B, our next step is to connect our work to existing narratological analysis. Consider Barthes' notion of kernels and satellites. (Barthes and Duisit, 1975) Definition 5.2. A kernel is a narrative event such that after its completion, the beliefs a reader holds as they pertain to the story have drastically changed. 4 Definition 5.3. A satellite is a narrative event that supports a kernel. They are the minor plot points that lead up to major plot points. They do not result in massive shift in beliefs.",
"cite_spans": [
{
"start": 196,
"end": 222,
"text": "(Barthes and Duisit, 1975)",
"ref_id": "BIBREF4"
},
{
"start": 383,
"end": 384,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Plot Relevance",
"sec_num": "5.2"
},
{
"text": "Of importance to note is that satellites imply the existence of kernels, e.g. small plot points will explain and lead up to a large plot point, but kernels do not imply the existence of satellites-kernels do not require satellites to exist. One can think of this as when satellites exist kernels must always exist on their boundary whether they are referred to in the text or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Plot Relevance",
"sec_num": "5.2"
},
{
"text": "A set of satellites, s = {s 1 , . . . , s n }, is said to be relevant to a kernel, k, if after the kernel's competition, the reader believes that the set of questions posed by k are relevant to their understanding of the story world given prior s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Plot Relevance",
"sec_num": "5.2"
},
{
"text": "Note the definition of relevance. Simply put, A denotes the questions that define some notion of story world level coherency while B denotes the set of questions that define some notion of transitional coherency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Plot Relevance",
"sec_num": "5.2"
},
{
"text": "So far we have spoken about the Reader's storyworld model as if there is only one, but in light of the discussion in section 3 it is unclear it truly makes sense to do so. In actuality, the Reader never learns to \"true story-world model\" (insofar as one can even be said to exist). Rather, the Reader has an evolving set of \"plausible story-world models\" that are extrapolated based on the incomplete information conveyed in the story. The purpose of this section is to detail how these \"plausibilities\" interact with each other and with plausibilities at other time steps. It likely seems natural to model the Reader's uncertainty with a probabilistic model. Unfortunately, the topological structure of first-order logic makes that impossible as there is no way to define a probability distribution over the set of models that are consistent with a set of sentences. Instead, we are forced to appeal to filters, a weaker notion of size that captures the difference between \"large\" and \"small\" sets. Again we develop the theory of ultrafil-ters only to the extent that we require, and refer an interested reader to a graduate text in mathematical logic for a thorough discussion. Definition 6.1. Let Q be a set of sentences that make claims about a narrative. A non-empty collection F w \u2286 P(Q) is a weak filter iff 1. \u2200X, Y \u2208 P(Q), X \u2208 F w and X \u2286 Y \u2286 P(Q) implies Y \u2208 F w 2. \u2200X \u2208 P(Q), X \u2208 F w or P(Q)\\X \u2208 F w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possible Worlds and Reader Uncertainty",
"sec_num": "6"
},
{
"text": "We say that F w is a weak ultrafilter and denote it UF w if the second requirement is replaced by \u2200X \u2208 P(Q), X \u2208 F w \u21d0\u21d2 P(Q)\\X \u2208 F w (Askounis et al., 2016) .",
"cite_spans": [
{
"start": 133,
"end": 156,
"text": "(Askounis et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Possible Worlds and Reader Uncertainty",
"sec_num": "6"
},
{
"text": "A reader's beliefs at time t defines a weak filter over the set of possible story-world models {S i R }. Call this filter F w , dropping the t when it is clear from context. Each element U \u2208 F w is a set of story world models that define a plausibility. This plausibility describes a set of propositions about the story that the reader thinks paints a coherent and plausible picture. Formally, a plausibility identified with the largest set of sentences that is true for every model in U , or \u2229 S\u2208U T (S) where T (S) denotes the set of true statements in S. That is, the set of plausible facts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possible Worlds and Reader Uncertainty",
"sec_num": "6"
},
{
"text": "The intuition for the formal definition of a weak filter is that 1. means that adding worlds to an element of the filter (which decreases the number of elements in \u2229 S\u2208U T (S)) doesn't stop it from describing a plausibility since it is specifying fewer facts; and that 2. means that it is not the case that both P and \u00acP are plausible. It's important to remember that membership in F w is a binary property, and so a statement is either plausible or is not plausible. We do not have shades of plausibility due to the aforementioned lack of a probability distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possible Worlds and Reader Uncertainty",
"sec_num": "6"
},
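{
"text": "The weak-filter conditions can be sanity-checked on a finite example (an illustration, not code from the paper), reading condition 2 as in the intuition below: a set and its complement are never both members.\n\nfrom itertools import chain, combinations\n\ndef powerset(items):\n    s = list(items)\n    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]\n\ndef is_weak_filter(F, Q):\n    universe = frozenset(Q)\n    # (1) Upward closure: supersets of members are members.\n    upward = all(Y in F for X in F for Y in powerset(universe) if X <= Y)\n    # (2) A set and its complement are never both members.\n    no_clash = all(not (X in F and (universe - X) in F) for X in powerset(universe))\n    return upward and no_clash\n\nQ = {'p', 'q'}\nF = {frozenset({'p'}), frozenset({'p', 'q'})}\nprint(is_weak_filter(F, Q))  # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possible Worlds and Reader Uncertainty",
"sec_num": "6"
},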
{
"text": "As a framework for modeling the Reader's uncertainty, weak filters underspecify the space of plausible story world as a whole in favor of capturing what the reader \"has actively in mind\" when reading. This is precisely because the ultrafilter axiom is not required, and so for some propositions neither P nor \u00acP are judged to be plausible. When asked to stop and consider the truth of a specific proposition, the reader is confronted with the fact that there are many ways that they can precisify their world models. How a Reader responds to this confrontation is an experimental question that we leave to future work, but we conjecture that with sufficient time and motivation a Reader will build a weak ultrafilter UF w that extends F w and takes a position on the plausibility of all statements in the logical closure of their knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possible Worlds and Reader Uncertainty",
"sec_num": "6"
},
{
"text": "Once the Reader has fleshed out the space of plausibilities, we can use UF w to build the ultraproduct of the Reader's story-world models. An ultraproduct (Chang and Keisler, 1990 ) is a way of using an ultrafilter to engage in reconciliation and build a single consistent story world-model out of a space of plausibilities. Intuitively, an ultraproduct can be thought of as a vote between the various models about the truth of individual propositions. A proposition is considered to be true in the ultraproduct if and only if the set of models in which it is true is an element of the ultrafilter. We conjecture that real-world rational agents with uncertain beliefs find the ultraproduct of their world models to be a reasonable reconciliation of their beliefs and that idealized perfectly rational agents will provably gravitate towards the ultraproduct as the correct reconciliation.",
"cite_spans": [
{
"start": 166,
"end": 179,
"text": "Keisler, 1990",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Possible Worlds and Reader Uncertainty",
"sec_num": "6"
},
{
"text": "Finally, we demonstrate that our highly abstract framework is of practical use by using it to derive explicit computational tools that can benefit computational narratology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applications to Computational Narratology",
"sec_num": "7"
},
{
"text": "It is important to acknowledge that a reader can never reason over an infinite set of worlds. Therefore, it is often best to consider a finite sample of worlds. Given the (non-finite) set of story worlds, S(t), there must exist a set s \u2282 UF w (t) such that every element in s is one of the \"more likely\" interpretations of the story world. This notion of more likely is out of scope of this paper; however, in practice, \"more likely\" simply denotes probability conditioned fromS(t \u2212 1). It is equally important to note that every element of s , by definition, can be represented in the reader's mind by the same fabula, say F (t). Let Q be some set of implications that we would like to determine the truth assignment of. Let P s (q) refer to the proportion of story worlds in s such that q is true. 5 Clearly, P s (q) is conditioned on s . We can 5 An equivalent form of P (q) exists for when we do not have a form of measure. Particularly, define P (q) = 1 when q is true in the majority of story worlds, as defined by our express the entropy of this as H(P s (q)) = H(q|s ) = H(A = T |s ) \u2212 H(B = b j |A = T, s ) Therefore averaging over H(P s (q)) for all q \u2208 Q is equivalent to determining the relevance of our implication to our hypothesis. This now brings us to EWC, or entropy of world coherence. These implications are of the form \"Given something in the ground truth that all story worlds believe, then X\" where X is a proposition held by the majority of story worlds but not all. We define EWC as",
"cite_spans": [
{
"start": 848,
"end": 849,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of World Coherence",
"sec_num": "7.1"
},
{
"text": "EWC(s , Q) = 1 |Q| q\u2208Q P s (q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of World Coherence",
"sec_num": "7.1"
},
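{
"text": "A direct reading of the EWC formula (an illustration with hypothetical names, not code from the paper): each question q is evaluated on a finite sample of story worlds, and the per-question proportions P s\u2032 (q) are averaged over Q.\n\ndef proportion_true(worlds, q):\n    # P_{s'}(q): proportion of sampled story worlds in which q holds.\n    return sum(1 for w in worlds if q(w)) / len(worlds)\n\ndef ewc(worlds, questions):\n    # Average the per-question agreement over all q in Q.\n    return sum(proportion_true(worlds, q) for q in questions) / len(questions)\n\nworlds = [{'guilty': True, 'armed': True},\n          {'guilty': True, 'armed': False},\n          {'guilty': True, 'armed': True}]\nquestions = [lambda w: w['guilty'], lambda w: w['armed']]\nprint(ewc(worlds, questions))  # (3/3 + 2/3) / 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of World Coherence",
"sec_num": "7.1"
},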
{
"text": "Note our definition of plot relevance. It is particularly of value to not only measure the coherency of the rules that govern our story world but also to measure the coherency of the transitions that govern it over time. We can define a similar notion to EWC, called Entropy of Transitional Coherence, which aims to measure the agreement of how beliefs change over time. In doing so, we can accurately measure the reader's understanding of the laws that govern the dynamics of the story world rather than just the relationships that exist in a static frame.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},
{
"text": "To understand ETC we must first delve into the dynamics of modal logic. Note that for a proposition to be \"necessary\" in one frame of a narrative, it must have been plausible in a prior frame. (Sider, 2010) Things that are necessary, the reader knows; hence, the set of necessary propositions is a subset of a prior frame's possible propositions.",
"cite_spans": [
{
"start": 193,
"end": 206,
"text": "(Sider, 2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},
{
"text": "We must define a boolean lattice to continue Definition 7.1. A boolean lattice of a set of propositions, Q, is a graph whose vertices are elements of Q and for any two a, b \u2208 Q if a =\u21d2 b then there exists an edge (a, b) unless a = b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},
{
"text": "Note that a boolean lattice is a directed acyclic graph (DAG) and as such has source vertices with no parents. In the case of boolean lattices, a source vertex refers to an axiom, as sources are not provable by other sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},
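{
"text": "The following Python sketch (an illustration with hypothetical names, not code from the paper) builds the implication graph of Definition 7.1 over a small set of propositions and lists its source vertices, which play the role of axioms.\n\ndef build_lattice(propositions, implies):\n    # Edge (a, b) whenever a implies b and a differs from b.\n    return {(a, b) for a in propositions for b in propositions\n            if a != b and implies(a, b)}\n\ndef sources(propositions, edges):\n    # Vertices with no incoming edge: not provable from any other vertex.\n    targets = {b for (_, b) in edges}\n    return [p for p in propositions if p not in targets]\n\nprops = ['p', 'p_or_q', 'p_or_q_or_r']\nimplied = {('p', 'p_or_q'), ('p', 'p_or_q_or_r'), ('p_or_q', 'p_or_q_or_r')}\nedges = build_lattice(props, lambda a, b: (a, b) in implied)\nprint(sources(props, edges))  # ['p']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},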
{
"text": "We define one reader at two times, denoted UF w (t) and UF w (t ) where t < t. We define ultrafilter. Similarly, let P (q) = 0 otherwise. For those with prior model theory experience, P (q) = 1 if q holds in an ultraproduct of story world models. a filtration of possible worlds s (t ) similar to how we did in the previous section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},
{
"text": "Given W (t) \u2208 UF w (t), a ground truth at time t, we restrict our view of W (t) to the maximal PW of time t . This can be done by looking at",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},
{
"text": "W = argmax W (t)\u2229s i |B(W (t)) \u2229 (\u2229 s\u2208s i B(s))|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},
{
"text": "Reason being is that it does not make sense to query about propositions that are undefined in prior frames. This effectively can be viewed as a pullback through the commutative diagram outlined previously. See Figure 2 on page 10. Something to note however is that this pullback is not necessary for ETC in the theoretical setting, as all world models would agree on any proposition not contained in their respective Boolean lattices-this is not the case when testing on human subjects. Human subjects would be more likely to guess if they are presented with a query that has no relevance to their current understanding. (Trabasso et al., 1982; Mandler and Johnson, 1977) We can however similarly define ETC by utilizing W as our ground truth with EWC. Since W is not the minimal ground truth for a particular frame, it encodes information about the ground truth where the narrative will be going by frame t. Therefore, define Q similarly over time t relative to W . We can also use this to define P s (t ) (q) \u2200q \u2208 Q. We denote ETC as ETC(s (t ), Q) = 1 |Q| q\u2208Q P s (t ) (q)",
"cite_spans": [
{
"start": 621,
"end": 644,
"text": "(Trabasso et al., 1982;",
"ref_id": "BIBREF27"
},
{
"start": 645,
"end": 671,
"text": "Mandler and Johnson, 1977)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 210,
"end": 218,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},
{
"text": "ETC differs from EWC in the form of implications that reside in Q. Particularly since ETC wants to measure the coherency of a reader's internal transition model, \u2200q \u2208 Q where q := A =\u21d2 B we have that A is the belief a reader holds before a kernel and that B is a belief the reader holds after a kernel. Since the kernel is defined as a plot point which changes the majority of a reader's beliefs, we are in turn measuring some notion of faithfulness of \u03b6 R .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},
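{
"text": "In the same style as the EWC sketch above, ETC can be estimated over a finite sample (an illustration of one possible reading, not the paper's code): each question in Q pairs a pre-kernel belief A with a post-kernel belief B, and we average how often the sampled worlds at time t\u2032 respect that transition.\n\ndef etc(worlds_t_prime, transition_questions):\n    # transition_questions: callables w -> bool encoding 'A before the kernel\n    # implies B after the kernel' for a given world.\n    scores = [sum(1 for w in worlds_t_prime if q(w)) / len(worlds_t_prime)\n              for q in transition_questions]\n    return sum(scores) / len(scores)\n\nworlds = [{'suspect_trusted_before': True, 'suspect_guilty_after': True},\n          {'suspect_trusted_before': True, 'suspect_guilty_after': False}]\nquestions = [lambda w: (not w['suspect_trusted_before']) or w['suspect_guilty_after']]\nprint(etc(worlds, questions))  # 0.5: the sampled worlds disagree on the transition",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of Transitional Coherence",
"sec_num": "7.2"
},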
{
"text": "In this paper, we defined a preliminary theoretical framework of narrative that affords new precision to common narratological concepts, including fabulas, story worlds, the conveyance of information from Narrator to Reader, and the way that the Reader's active beliefs about the story can update as they receive that information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": null
},
{
"text": "Thanks to this precision, we were able to define a rigorous and measurable notion of plot relevance, which we used to formalize Barthes' notions of kernels and satellites. We also give a novel formulation and analysis of Reader uncertainty, and form experimentally verifiable conjectures on the basis of our theories. We further demonstrated the value of our framework by formalizing two new narrativefocused measures: Entropy of World Coherence and Entropy of Transitional Coherence, which measure the agreement of story world models frames and faithfulness of \u03b6 R respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": null
},
{
"text": "Our framework also opens up new avenues for future research in narratology and related fields. While we were unable to explore their consequences within the scope of this paper, the formulation of narratives via model theory opens the door to leveraging the extensive theoretical work that has been done on models and applying it to narratology. The analysis of the temporal evolution of models in section 5 suggests connections with reinforcement learning for natural language understanding. In section 6 we make testable conjectures about the behavior of Reader agents and in section 7 we describe how to convert our theoretical musings into practical metrics for measuring the consistency and coherency of stories. Figure 2 : Commutative diagram expressing \u03b6 R and \u03b6 N . Some edge labels were removed for clarity. Refer to figure 1 on page 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 718,
"end": 726,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": null
},
{
"text": "F N (t + 1) S N (t + 1) F N (t) S N (t) F R (t + 1) S R (t + 1) F R (t) S R (t) d \u03c6 d \u03b6 N \u03c8 \u03b6 R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": null
},
{
"text": "Nevertheless, having a conception of d is very important on a formal level as we will see later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For simplicity we will speak of this as a discrete time series, though for some media such as film it may make sense to model it as a continuous phenomenon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "There is no single best way to define an author's intent. For instance, we could have easily said that \u03c8 denotes author intent while \u03b6R determines which intents are best grounded in reality. The choice, however, needs to be made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The notion of \"drastic\" is equivalent to \"majority.\" To rigoriously define Barthes' Kernel, and hence Barthes' Cardinal, we would require ultraproducts-which is outside of the scope of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "An equivalent form of P (q) exists for when we do not have a form of measure. Particularly, define P (q) = 1 when q is true in the majority of story worlds, as defined by our ultrafilter. Similarly, let P (q) = 0 otherwise. For those with prior model theory experience, P (q) = 1 if q holds in an ultraproduct of story world models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On the Logic of Theory Change: Partial Meet Contraction and Revision Functions",
"authors": [
{
"first": "C",
"middle": [
"E"
],
"last": "Alchourr\u00f3n",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "G\u00e4rdenfors",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Makinson",
"suffix": ""
}
],
"year": 1985,
"venue": "Journal of Symbolic Logic",
"volume": "",
"issue": "",
"pages": "510--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. E. Alchourr\u00f3n, P. G\u00e4rdenfors, and D. Makinson. 1985. On the Logic of Theory Change: Partial Meet Contraction and Revision Functions. Journal of Symbolic Logic, pages 510-530.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Knowledge means all, belief means most",
"authors": [
{
"first": "Dimitris",
"middle": [],
"last": "Askounis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Costas",
"suffix": ""
},
{
"first": "Yorgos",
"middle": [],
"last": "Koutras",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zikos",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Applied Non-Classical Logics",
"volume": "26",
"issue": "3",
"pages": "173--192",
"other_ids": {
"DOI": [
"10.1080/11663081.2016.1214804"
]
},
"num": null,
"urls": [],
"raw_text": "Dimitris Askounis, Costas D. Koutras, and Yorgos Zikos. 2016. Knowledge means all, belief means most. Journal of Applied Non-Classical Logics, 26(3):173-192.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Narratology: Introduction to the theory of narrative",
"authors": [
{
"first": "Mieke",
"middle": [],
"last": "Bal",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mieke Bal. 1997. Narratology: Introduction to the the- ory of narrative. University of Toronto Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The death of the author",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Barthes",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Barthes. 1967. The death of the author. Fontana.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An introduction to the structural analysis of narrative",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Barthes",
"suffix": ""
},
{
"first": "Lionel",
"middle": [],
"last": "Duisit",
"suffix": ""
}
],
"year": 1975,
"venue": "New literary history",
"volume": "6",
"issue": "",
"pages": "237--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Barthes and Lionel Duisit. 1975. An introduc- tion to the structural analysis of narrative. New liter- ary history, 6(2):237-272.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Making Meaning: Inference and Rhetoric in the Interpretation of Cinema",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bordwell",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bordwell. 1989. Making Meaning: Inference and Rhetoric in the Interpretation of Cinema. Cam- bridge: Harvard University Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Question Answering in the Context of Stories Generated by Computers",
"authors": [
{
"first": "Rogelio",
"middle": [
"E"
],
"last": "Cardona-Rivera",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"W"
],
"last": "Price",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Winer",
"suffix": ""
},
{
"first": "R. Michael",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Cognitive Systems",
"volume": "4",
"issue": "",
"pages": "227--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rogelio E. Cardona-Rivera, Thomas W. Price, David R. Winer, and R. Michael Young. 2016. Question An- swering in the Context of Stories Generated by Com- puters. Advances in Cognitive Systems, 4:227-246.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Desiderata for a Computational Model of Human Online Narrative Sensemaking",
"authors": [
{
"first": "Rogelio",
"middle": [
"E"
],
"last": "Cardona-Rivera",
"suffix": ""
},
{
"first": "R. Michael",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI Spring Symposium on Story-enabled Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rogelio E. Cardona-Rivera and R. Michael Young. 2019. Desiderata for a Computational Model of Hu- man Online Narrative Sensemaking. In AAAI Spring Symposium on Story-enabled Intelligence.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised learning of narrative event chains",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "789--797",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Unsu- pervised learning of narrative event chains. In Pro- ceedings of Annual Meeting of the Association for Computational Linguistics, pages 789-797.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Model theory",
"authors": [
{
"first": "Chen",
"middle": [
"Chung"
],
"last": "",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "H Jerome",
"middle": [],
"last": "Keisler",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Chung Chang and H Jerome Keisler. 1990. Model theory. Elsevier.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "61",
"issue": "",
"pages": "65--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Gatt and E. Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial In- telligence Research, 61:65-170.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Narrative Discourse: An Essay in Method",
"authors": [
{
"first": "G\u00e9rard",
"middle": [],
"last": "Genette",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00e9rard Genette. 1980. Narrative Discourse: An Essay in Method. Cornell University Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Investigating differences in general comprehension skill",
"authors": [
{
"first": "Morton",
"middle": [
"A"
],
"last": "Gernsbacher",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Verner",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"E"
],
"last": "Faust",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition",
"volume": "16",
"issue": "3",
"pages": "430--445",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morton A. Gernsbacher, Kathleen R. Verner, and Mark E. Faust. 1990. Investigating differences in general comprehension skill. Journal of Experimen- tal Psychology: Learning, Memory, and Cognition, 16(3):430-445.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Storytelling and the Sciences of Mind",
"authors": [
{
"first": "David",
"middle": [],
"last": "Herman",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Herman. 2013. Storytelling and the Sciences of Mind. MIT Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Narration in poetry and drama",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "H\u00fchn",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Sommer",
"suffix": ""
}
],
"year": 2013,
"venue": "the living handbook of narratology. Hamburg U",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter H\u00fchn and Roy Sommer. 2013. Narration in po- etry and drama. In Peter H\u00fchn, John Pier, Wolf Schmid, and J\u00f6rg Sch\u00f6nert, editors, the living hand- book of narratology. Hamburg U., Hamburg, Ger- many.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning prototypical goal activities for locations",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1297--1307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyu Jiang and Ellen Riloff. 2018. Learning proto- typical goal activities for locations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1297-1307.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Progressive attention memory network for movie story question answering",
"authors": [
{
"first": "Junyeong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Minuk",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Kyungsu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chang",
"middle": [
"D"
],
"last": "Yoo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "8337--8346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyeong Kim, Minuk Ma, Kyungsu Kim, Sungjin Kim, and Chang D. Yoo. 2019. Progressive atten- tion memory network for movie story question an- swering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8337-8346.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Crowdsourcing narrative intelligence",
"authors": [
{
"first": "Boyang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Lee-Urban",
"suffix": ""
},
{
"first": "Darren",
"middle": [
"Scott"
],
"last": "Appling",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"O"
],
"last": "Riedl",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Cognitive Systems",
"volume": "1",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boyang Li, Stephen Lee-Urban, Darren Scott Appling, and Mark O. Riedl. 2012. Crowdsourcing narrative intelligence. Advances in Cognitive Systems, 1:1- 18.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Remembrance of things parsed: Story structure and recall",
"authors": [
{
"first": "Jean",
"middle": [
"M"
],
"last": "Mandler",
"suffix": ""
},
{
"first": "Nancy",
"middle": [
"S"
],
"last": "Johnson",
"suffix": ""
}
],
"year": 1977,
"venue": "Cognitive psychology",
"volume": "9",
"issue": "1",
"pages": "111--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean M Mandler and Nancy S Johnson. 1977. Remem- brance of things parsed: Story structure and recall. Cognitive psychology, 9(1):111-151.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The visual narrative engine: A computational model of the visual narrative parallel architecture",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Martens",
"suffix": ""
},
{
"first": "Rogelio",
"middle": [
"E"
],
"last": "Cardona-Rivera",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 8th Annual Conference on Advances in Cognitive Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Martens, Rogelio E. Cardona-Rivera, and Neil Cohn. 2020. The visual narrative engine: A compu- tational model of the visual narrative parallel archi- tecture. In Proceedings of the 8th Annual Confer- ence on Advances in Cognitive Systems.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Event representations for automated story generation with deep neural nets",
"authors": [
{
"first": "Lara",
"middle": [
"J"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Prithviraj",
"middle": [],
"last": "Ammanabrolu",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"O"
],
"last": "Riedl",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 32nd AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lara J. Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, and Mark O. Riedl. 2018. Event representations for au- tomated story generation with deep neural nets. In Proceedings of the 32nd AAAI Conference on Artifi- cial Intelligence.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A corpus and cloze evaluation for deeper understanding of commonsense stories",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "839--849",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A cor- pus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Story planning as exploratory creativity: Techniques for expanding the narrative search space",
"authors": [
{
"first": "Mark",
"middle": [
"O"
],
"last": "Riedl",
"suffix": ""
},
{
"first": "R.",
"middle": [
"Michael"
],
"last": "Young",
"suffix": ""
}
],
"year": 2006,
"venue": "New Generation Computing",
"volume": "24",
"issue": "3",
"pages": "303--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark O. Riedl and R. Michael Young. 2006. Story planning as exploratory creativity: Techniques for expanding the narrative search space. New Genera- tion Computing, 24(3):303-323.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Narrative Fiction: Contemporary Poetics",
"authors": [
{
"first": "Shlomith",
"middle": [],
"last": "Rimmon-Kenan",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shlomith Rimmon-Kenan. 2002. Narrative Fiction: Contemporary Poetics. Routledge.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Choice of plausible alternatives: An evaluation of commonsense causal reasoning",
"authors": [
{
"first": "Melissa",
"middle": [],
"last": "Roemmele",
"suffix": ""
},
{
"first": "Cosmin Adrian",
"middle": [],
"last": "Bejan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"S"
],
"last": "Gordon",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 AAAI Spring Symposium Series",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melissa Roemmele, Cosmin Adrian Bejan, and An- drew S. Gordon. 2011. Choice of plausible alterna- tives: An evaluation of commonsense causal reason- ing. In 2011 AAAI Spring Symposium Series.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Logic for philosophy",
"authors": [
{
"first": "Theodore",
"middle": [],
"last": "Sider",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theodore Sider. 2010. Logic for philosophy. Oxford University Press, USA.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Delayed roles with authorable continuity in plan-based interactive storytelling",
"authors": [
{
"first": "David",
"middle": [],
"last": "Thue",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Schiffel",
"suffix": ""
},
{
"first": "Ragnar Adolf",
"middle": [],
"last": "\u00c1rnason",
"suffix": ""
},
{
"first": "Ingibergur Sindri",
"middle": [],
"last": "Stefnisson",
"suffix": ""
},
{
"first": "Birgir",
"middle": [],
"last": "Steinarsson",
"suffix": ""
}
],
"year": 2016,
"venue": "Interactive Storytelling",
"volume": "",
"issue": "",
"pages": "258--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Thue, Stephan Schiffel, Ragnar Adolf \u00c1rnason, Ingibergur Sindri Stefnisson, and Birgir Steinarsson. 2016. Delayed roles with authorable continuity in plan-based interactive storytelling. In Interactive Storytelling, pages 258-269, Cham. Springer Inter- national Publishing.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Causal cohesion and story coherence",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Trabasso",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Trabasso et al. 1982. Causal cohesion and story coherence. ERIC.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Situation models in language comprehension and memory",
"authors": [
{
"first": "Rolf",
"middle": [
"A"
],
"last": "Zwaan",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [
"A"
],
"last": "Radvansky",
"suffix": ""
}
],
"year": 1998,
"venue": "Psychological Bulletin",
"volume": "123",
"issue": "2",
"pages": "162--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rolf A. Zwaan and Gabriel A. Radvansky. 1998. Situ- ation models in language comprehension and mem- ory. Psychological Bulletin, 123(2):162-85.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "A commutative diagram outlining storytelling.",
"num": null
}
}
}
}