{ "paper_id": "P13-1035", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:33:26.917212Z" }, "title": "Learning Latent Personas of Film Characters", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "dbamman@cs.cmu.edu" }, { "first": "Brendan", "middle": [], "last": "O'connor", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "nasmith@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present two latent variable models for learning character types, or personas, in film, in which a persona is defined as a set of mixtures over latent lexical classes. These lexical classes capture the stereotypical actions of which a character is the agent and patient, as well as attributes by which they are described. As the first attempt to solve this problem explicitly, we also present a new dataset for the text-driven analysis of film, along with a benchmark testbed to help drive future work in this area.", "pdf_parse": { "paper_id": "P13-1035", "_pdf_hash": "", "abstract": [ { "text": "We present two latent variable models for learning character types, or personas, in film, in which a persona is defined as a set of mixtures over latent lexical classes. These lexical classes capture the stereotypical actions of which a character is the agent and patient, as well as attributes by which they are described. 
As the first attempt to solve this problem explicitly, we also present a new dataset for the text-driven analysis of film, along with a benchmark testbed to help drive future work in this area.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Philosophers and dramatists have long argued whether the most important element of narrative is plot or character. Under a classical Aristotelian perspective, plot is supreme; 1 modern theoretical dramatists and screenwriters disagree. 2 Without addressing this debate directly, much computational work on narrative has focused on learning the sequence of events by which a story is defined; in this tradition we might situate seminal work on learning procedural scripts (Schank and Abelson, 1977; Regneri et al., 2010) , narrative chains (Chambers and Jurafsky, 2008) , and plot structure (Finlayson, 2011; Elsner, 2012; McIntyre and Lapata, 2010; Goyal et al., 2010) .", "cite_spans": [ { "start": 236, "end": 237, "text": "2", "ref_id": null }, { "start": 471, "end": 497, "text": "(Schank and Abelson, 1977;", "ref_id": "BIBREF25" }, { "start": 498, "end": 519, "text": "Regneri et al., 2010)", "ref_id": "BIBREF23" }, { "start": 539, "end": 568, "text": "(Chambers and Jurafsky, 2008)", "ref_id": "BIBREF3" }, { "start": 590, "end": 607, "text": "(Finlayson, 2011;", "ref_id": "BIBREF10" }, { "start": 608, "end": 621, "text": "Elsner, 2012;", "ref_id": "BIBREF9" }, { "start": 622, "end": 648, "text": "McIntyre and Lapata, 2010;", "ref_id": "BIBREF17" }, { "start": 649, "end": 668, "text": "Goyal et al., 2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present a complementary perspective that addresses the importance of character in defining 1 \"Dramatic action . . . is not with a view to the representation of character: character comes in as subsidiary to the actions . . . 
The Plot, then, is the first principle, and, as it were, the soul of a tragedy: Character holds the second place.\" Poetics I.VI (Aristotle, 335 BCE).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 \"Aristotle was mistaken in his time, and our scholars are mistaken today when they accept his rulings concerning character. Character was a great factor in Aristotle's time, and no fine play ever was or ever will be written without it\" (Egri, 1946, p. 94) ; \"What the reader wants is fascinating, complex characters\" (McKee, 1997, 100 ). a story. Our testbed is film. Under this perspective, a character's latent internal nature drives the action we observe. Articulating narrative in this way leads to a natural generative story: we first decide that we're going to make a particular kind of movie (e.g., a romantic comedy), then decide on a set of character types, or personas, we want to see involved (the PROTAGONIST, the LOVE INTEREST, the BEST FRIEND). After picking this set, we fill out each of these roles with specific attributes (female, 28 years old, klutzy); with this cast of characters, we then sketch out the set of events by which they interact with the world and with each other (runs but just misses the train, spills coffee on her boss) - through which they reveal to the viewer those inherent qualities about themselves. This work is inspired by past approaches that infer typed semantic arguments along with narrative schemas (Chambers and Jurafsky, 2009; Regneri et al., 2011) , but seeks a more holistic view of character, one that learns from stereotypical attributes in addition to plot events. 
This work also naturally draws on earlier work on the unsupervised learning of verbal arguments and semantic roles (Pereira et al., 1993; Grenager and Manning, 2006; Titov and Klementiev, 2012) and unsupervised relation discovery (Yao et al., 2011) .", "cite_spans": [ { "start": 238, "end": 257, "text": "(Egri, 1946, p. 94)", "ref_id": null }, { "start": 319, "end": 336, "text": "(McKee, 1997, 100", "ref_id": null }, { "start": 1250, "end": 1279, "text": "(Chambers and Jurafsky, 2009;", "ref_id": "BIBREF4" }, { "start": 1280, "end": 1301, "text": "Regneri et al., 2011)", "ref_id": "BIBREF24" }, { "start": 1538, "end": 1560, "text": "(Pereira et al., 1993;", "ref_id": "BIBREF21" }, { "start": 1561, "end": 1588, "text": "Grenager and Manning, 2006;", "ref_id": "BIBREF14" }, { "start": 1589, "end": 1616, "text": "Titov and Klementiev, 2012)", "ref_id": "BIBREF26" }, { "start": 1653, "end": 1671, "text": "(Yao et al., 2011)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This character-centric perspective leads to two natural questions. First, can we learn what those standard personas are by how individual characters (who instantiate those types) are portrayed? Second, can we learn the set of attributes and actions by which we recognize those common types? How do we, as viewers, recognize a VILLAIN?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At its most extreme, this perspective reduces to learning the grand archetypes of Joseph Campbell (1949) or Carl Jung (1981) , such as the HERO or TRICKSTER. We seek, however, a more fine-grained set that includes not only archetypes, but stereotypes as well - characters defined by a fixed set of actions widely known to be representative of a class. This work offers a data-driven method for answering these questions, presenting two probabilistic generative models for inferring latent character types. 
This is the first work that attempts to learn explicit character personas in detail; as such, we present a new dataset for character type induction in film and a benchmark testbed for evaluating future work. 3", "cite_spans": [ { "start": 89, "end": 104, "text": "Campbell (1949)", "ref_id": "BIBREF2" }, { "start": 113, "end": 124, "text": "Jung (1981)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our primary source of data comes from 42,306 movie plot summaries extracted from the November 2, 2012 dump of English-language Wikipedia. 4 These summaries, which have a median length of approximately 176 words, 5 contain a concise synopsis of the movie's events, along with implicit descriptions of the characters (e.g., \"rebel leader Princess Leia,\" \"evil lord Darth Vader\"). To extract structure from this data, we use the Stanford CoreNLP library 6 to tag and syntactically parse the text, extract entities, and resolve coreference within the document. With this structured representation, we extract linguistic features for each character, looking at immediate verb governors and attribute syntactic dependencies to all of the entity's mention headwords, extracted from the typed dependency tuples produced by the parser; we refer to \"CCprocessed\" syntactic relations described in de Marneffe and Manning (2008) :", "cite_spans": [ { "start": 138, "end": 139, "text": "4", "ref_id": null }, { "start": 889, "end": 916, "text": "Marneffe and Manning (2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Text", "sec_num": "2.1" }, { "text": "\u2022 Agent verbs. Verbs for which the entity is an agent argument (nsubj or agent). \u2022 Patient verbs. Verbs for which the entity is the patient, theme or other argument (dobj, nsubjpass, iobj, or any prepositional argument prep *). \u2022 Attributes. 
Adjectives and common noun words that relate to the mention as adjectival modifiers, noun-noun compounds, appositives, or copulas (nsubj or appos governors, or nsubj, appos, amod, nn dependents of an entity mention).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text", "sec_num": "2.1" }, { "text": "3 All datasets and software for replication can be found at http://www.ark.cs.cmu.edu/personas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text", "sec_num": "2.1" }, { "text": "4 http://dumps.wikimedia.org/enwiki/ 5 More popular movies naturally attract more attention on Wikipedia and hence more detail: the top 1,000 movies by box office revenue have a median length of 715 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text", "sec_num": "2.1" }, { "text": "6 http://nlp.stanford.edu/software/ corenlp.shtml These three roles capture three different ways in which character personas are revealed: the actions they take on others, the actions done to them, and the attributes by which they are described. For every character we thus extract a bag of (r, w) tuples, where w is the word lemma and r is one of {agent verb, patient verb, attribute} as identified by the above rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text", "sec_num": "2.1" }, { "text": "Our second source of information consists of character and movie metadata drawn from the November 4, 2012 dump of Freebase. 7 At the movie level, this includes data on the language, country, release date and detailed genre (365 non-mutually exclusive categories, including \"Epic Western,\" \"Revenge,\" and \"Hip Hop Movies\"). 
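The (r, w) tuple extraction rules in \u00a72.1 can be sketched as a small rule-based filter over typed dependencies. This is an illustrative reconstruction, not the authors' code: the triple format, the POS checks, and the example dependencies below are all assumptions.

```python
# Sketch: map Stanford "CCprocessed" dependency triples to (role, lemma) tuples
# for one entity mention, following the rules of Sec. 2.1 (assumed formats).

AGENT = {"nsubj", "agent"}
PATIENT = {"dobj", "nsubjpass", "iobj"}          # plus any prep_* argument
ATTR_DEPS = {"nsubj", "appos", "amod", "nn"}     # mention governs the attribute word

def tuples_for_mention(deps, mention):
    """deps: list of (rel, gov_idx, gov_lemma, gov_pos, dep_idx, dep_lemma, dep_pos);
    mention: token index of the entity mention headword."""
    out = []
    for rel, gi, gl, gp, di, dl, dp in deps:
        if di == mention:                        # mention is the dependent
            if gp.startswith("VB") and rel in AGENT:
                out.append(("agent verb", gl))
            elif gp.startswith("VB") and (rel in PATIENT or rel.startswith("prep_")):
                out.append(("patient verb", gl))
            elif rel in {"nsubj", "appos"}:      # copular / appositive governor
                out.append(("attribute", gl))
        elif gi == mention and rel in ATTR_DEPS:  # mention governs an attribute word
            out.append(("attribute", dl))
    return out
```

For a hypothetical parse of "evil lord Vader strangles ...", the mention "Vader" would yield an agent verb (strangle) and two attributes (evil, lord).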
Many of the characters in movies are also associated with the actors who play them; since many actors also have detailed biographical information, we can ground the characters in what we know of those real people - including their gender and estimated age at the time of the movie's release (the difference between the release date of the movie and the actor's date of birth).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metadata", "sec_num": "2.2" }, { "text": "Across all 42,306 movies, entities average 3.4 agent events, 2.0 patient events, and 2.1 attributes. For all experiments described below, we restrict our dataset to only those events that are among the 1,000 most frequent overall, and only characters with at least 3 events. 120,345 characters meet this criterion; of these, 33,559 can be matched to Freebase actors with a specified gender, and 29,802 can be matched to actors with a given date of birth. Of all actors in the Freebase data whose age is given, the average age at the time of a movie's release is 37.9 (standard deviation 14.1); of all actors whose gender is known, 66.7% are male. 8 The age distribution is strongly bimodal when conditioning on gender: the average age of a female actress at the time of a movie's release is 33.0 (s.d. 13.4), while that of a male actor is 40.5 (s.d. 13.7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metadata", "sec_num": "2.2" }, { "text": "One way we recognize a character's latent type is by observing the stereotypical actions they perform (e.g., VILLAINS strangle), the actions done to them (e.g., VILLAINS are foiled and arrested) and the words by which they are described (VILLAINS are evil). To capture this intuition, we define a persona as a set of three typed distributions: one for the words for which the character is the agent, one for which it is the patient, and one for words by which the character is attributively modified. 
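The age feature described above (release date minus date of birth) can be computed as follows; a minimal sketch, with the date handling and example dates assumed rather than taken from the paper.

```python
# Sketch: estimated actor age at a movie's release, from Freebase-style dates.
from datetime import date

def age_at_release(born: date, released: date) -> int:
    years = released.year - born.year
    # subtract one if the birthday has not yet occurred in the release year
    if (released.month, released.day) < (born.month, born.day):
        years -= 1
    return years
```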
Each distribution ranges over a fixed set of latent word classes, or topics. Figure 1 illustrates this definition for a toy example: a ZOMBIE persona may be characterized as being the agent of primarily eating and killing actions, the patient of killing actions, and the object of dead attributes. The topic labeled eat may include words like eat, drink, and devour. Figure 1: A persona is a set of three distributions over latent topics. In this toy example, the ZOMBIE persona is primarily characterized by being the agent of words from the eat and kill topics, the patient of kill words, and the object of words from the dead topic.", "cite_spans": [], "ref_spans": [ { "start": 578, "end": 586, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Personas", "sec_num": "3" }, { "text": "Both models that we present here simultaneously learn three things: 1.) a soft clustering over words to topics (e.g., the verb \"strangle\" is mostly a type of Assault word); 2.) a soft clustering over topics to personas (e.g., VILLAINS perform a lot of Assault actions); and 3.) a hard clustering over characters to personas (e.g., Darth Vader is a VILLAIN.) They each use different evidence: since our data includes not only textual features (in the form of actions and attributes of the characters) but also non-textual information (such as movie genre, age and gender), we design a model that exploits this additional source of information in discriminating between character types; since this extralinguistic information may not always be available, we also design a model that learns only from the text itself. We present the text-only model first for simplicity. 
Throughout, V is the word vocabulary size, P is the number of personas, and K is the number of topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4" }, { "text": "In the most basic model, we only use information from the structured text, which comes as a bag of (r, w) tuples for each character in a movie, where w is the word lemma and r is the relation of the word with respect to the character (one of agent verb, patient verb or attribute, as outlined in \u00a72.1 above). The generative story runs as follows. First, let there be K latent word topics; as in LDA (Blei et al., 2003) , these are words that will be soft-clustered together by virtue of appearing in similar contexts. Each latent word cluster \u03c6 k \u223c Dir(\u03b3) is a multinomial over the V words in the vocabulary, drawn from a Dirichlet parameterized by \u03b3. Next, let a persona p be defined as a set of three multinomials \u03c8 p over these K topics, one for each typed role r, each drawn from a Dirichlet with a role-specific hyperparameter (\u03bd r ).", "cite_spans": [ { "start": 399, "end": 418, "text": "(Blei et al., 2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dirichlet Persona Model", "sec_num": "4.1" }, { "text": "Every document (a movie plot summary) contains a set of characters, each of which is associated with a single latent persona p; for every observed (r, w) tuple associated with the character, we sample a latent topic k from the role-specific \u03c8 p,r . Conditioned on this topic assignment, the observed word is drawn from \u03c6 k . The distribution of these personas for a given document is determined by a document-specific multinomial \u03b8, drawn from a Dirichlet parameterized by \u03b1. Figure 2 (above left) illustrates the form of the model. 
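The generative story above can be summarized in a few lines of sampling code. This is an illustrative sketch only: the hyperparameter values and sizes are placeholders, not the quantities learned in the paper.

```python
# Sketch of the Dirichlet persona model's generative story (Sec. 4.1):
# topics phi_k, role-specific persona-topic distributions psi_{p,r}, and a
# document-level persona distribution theta (all placeholder hyperparameters).
import numpy as np

rng = np.random.default_rng(0)
V, K, P = 1000, 50, 100                  # vocabulary size, topics, personas
ROLES = ("agent verb", "patient verb", "attribute")

phi = rng.dirichlet(np.full(V, 0.1), size=K)                       # phi_k ~ Dir(gamma)
psi = {r: rng.dirichlet(np.full(K, 0.1), size=P) for r in ROLES}   # psi_{p,r} ~ Dir(nu_r)
theta = rng.dirichlet(np.full(P, 1.0))                             # theta ~ Dir(alpha)

def generate_character(roles):
    """Draw one persona for the character, then a (role, word-id) pair per tuple."""
    p = rng.choice(P, p=theta)           # a single latent persona per character
    words = []
    for r in roles:
        z = rng.choice(K, p=psi[r][p])   # topic from the role-specific psi_{p,r}
        w = rng.choice(V, p=phi[z])      # word from topic phi_z
        words.append((r, w))
    return p, words
```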
To simplify inference, we collapse out the persona-topic distributions \u03c8, the topic-word distributions \u03c6 and the persona distribution \u03b8 for each document. Inference on the remaining latent variables -the persona p for each character type and the topic z for each word associated with that character -is conducted via collapsed Gibbs sampling (Griffiths and Steyvers, 2004) ; at each iteration, for each character e, we sample their persona p e :", "cite_spans": [ { "start": 875, "end": 905, "text": "(Griffiths and Steyvers, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 476, "end": 484, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Dirichlet Persona Model", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (p e = k | p \u2212e , z, \u03b1, \u03bd) \u221d c \u2212e d,k + \u03b1 k \u00d7 j (c \u2212e r j ,k,z j +\u03bdr j ) (c \u2212e r j ,k, +K\u03bdr j )", "eq_num": "(1)" } ], "section": "Dirichlet Persona Model", "sec_num": "4.1" }, { "text": "Here, c \u2212e d,k is the count of all characters in document d whose current persona sample is also k (not counting the current character e under consideration); 9 j ranges over all (r j , w j ) tuples associated with character e. Each c \u2212e r j ,k,z j is the count of all tuples with role r j and current topic z j used with persona k. c \u2212e r j ,k, is the same count, summing over all topics z. 
In other words, the probability that character e embodies persona k is proportional to the number of other characters in the plot summary who also embody that persona (plus the Dirichlet hyperparameter \u03b1 k ) times the contribution of each observed word w j for that character, given its current topic assignment z j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dirichlet Persona Model", "sec_num": "4.1" }, { "text": "Once all personas have been sampled, we sample the latent topics for each tuple as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dirichlet Persona Model", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (z j = k | p, z \u2212j , w, r, \u03bd, \u03b3) \u221d (c \u2212j r j ,p,k +\u03bdr j ) (c \u2212j r j ,p, +K\u03bdr j ) \u00d7 (c \u2212j k,w j +\u03b3) (c \u2212j k, +V \u03b3)", "eq_num": "(2)" } ], "section": "Dirichlet Persona Model", "sec_num": "4.1" }, { "text": "Here, conditioned on the current sample p for the character's persona, the probability that tuple j originates in topic k is proportional to the number of other tuples with that same role r j drawn from the same topic for that persona (c \u2212j r j ,p,k ), normalized by the number of other r j tuples associated with that persona overall (c \u2212j r j ,p, ), multiplied by the number of times word w j is associated with that topic (c \u2212j k,w j ) normalized by the total number of other words associated with that topic overall (c \u2212j k, ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dirichlet Persona Model", "sec_num": "4.1" }, { "text": "We optimize the values of the Dirichlet hyperparameters \u03b1, \u03bd and \u03b3 using slice sampling with a uniform prior every 20 iterations for the first 500 iterations, and every 100 iterations thereafter. 
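The persona update in equation (1) can be sketched schematically as below. This is an assumption-laden illustration, not the authors' implementation: it takes the count tables as given and approximates the collapsed update by holding them fixed across a character's tuples.

```python
# Sketch of the collapsed Gibbs update for a character's persona (equation 1).
import random

def sample_persona(doc_counts, role_topic_counts, tuples, alpha, nu, K, P):
    """doc_counts[k]: other characters in this document currently assigned persona k;
    role_topic_counts[(r, k, z)]: tuples with role r and topic z under persona k,
    excluding this character; tuples: this character's (role, topic) pairs."""
    weights = []
    for k in range(P):
        w = doc_counts[k] + alpha[k]
        for r, z in tuples:
            num = role_topic_counts.get((r, k, z), 0) + nu[r]
            den = sum(role_topic_counts.get((r, k, zz), 0) for zz in range(K)) + K * nu[r]
            w *= num / den
        weights.append(w)
    # draw a persona index proportional to its weight
    x = random.random() * sum(weights)
    for k, w in enumerate(weights):
        x -= w
        if x <= 0:
            return k
    return P - 1
```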
After a burn-in phase of 10,000 iterations, we collect samples every 10 iterations (to lessen autocorrelation) until a total of 100 have been collected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dirichlet Persona Model", "sec_num": "4.1" }, { "text": "To incorporate observed metadata in the form of movie genre, character age and character gender, we adopt an \"upstream\" modeling approach (Mimno and McCallum, 2008) , letting those observed features influence the conditional probability with which a given character is expected to assume a particular persona, prior to observing any of their actions. This captures the increased likelihood, for example, that a 25-year-old male actor in an action movie will play an ACTION HERO rather than a VALLEY GIRL.", "cite_spans": [ { "start": 138, "end": 164, "text": "(Mimno and McCallum, 2008)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Persona Regression", "sec_num": "4.2" }, { "text": "To capture these effects, each character's latent persona is no longer drawn from a document-specific Dirichlet; instead, the P -dimensional simplex is the output of a multiclass logistic regression, where the document genre metadata m d and the character age and gender metadata m e together form a feature vector that combines with persona-specific feature weights to form the following log-linear distribution over personas, with the probability for persona k being:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Persona Regression", "sec_num": "4.2" }, { "text": "P (p = k | m d , m e , \u03b2) = exp([m d ; m e ] \u03b2 k ) / (1 + \u2211 j=1..P\u22121 exp([m d ; m e ] \u03b2 j ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Persona Regression", "sec_num": "4.2" }, { "text": "(3) The persona-specific \u03b2 coefficients are learned through Monte Carlo Expectation Maximization (Wei and Tanner, 1990) , in which we alternate between the following:", "cite_spans": [ { "start": 
97, "end": 119, "text": "(Wei and Tanner, 1990)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Persona Regression", "sec_num": "4.2" }, { "text": "1. Given current values for \u03b2, for all characters e in all plot summaries, sample values of p e and z j for all associated tuples. 2. Given input metadata features m and the associated sampled values of p, find the values of \u03b2 that maximize the standard multiclass logistic regression log likelihood, subject to \u2113 2 regularization. Figure 2 (above right) illustrates this model. As with the Dirichlet persona model, inference on p for step 1 is conducted with collapsed Gibbs sampling; the only difference in the sampling probability from equation 1 is the effect of the prior, which here is deterministically fixed as the output of the regression.", "cite_spans": [], "ref_spans": [ { "start": 330, "end": 338, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Persona Regression", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (p e = k | p \u2212e , z, \u03bd, m d , m e , \u03b2) \u221d exp([m d ; m e ] \u03b2 k ) \u00d7 j (c \u2212e r j ,k,z j +\u03bdr j ) (c \u2212e r j ,k, +K\u03bdr j )", "eq_num": "(4)" } ], "section": "Persona Regression", "sec_num": "4.2" }, { "text": "The sampling equation for the topic assignments z is identical to that in equation 2. In practice we optimize \u03b2 every 1,000 iterations, until a burn-in phase of 10,000 iterations has been reached; at this point we follow the same sampling regime as for the Dirichlet persona model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Persona Regression", "sec_num": "4.2" }, { "text": "We evaluate our methods in two quantitative ways by measuring the degree to which we recover two different sets of gold-standard clusterings. 
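The log-linear prior of equation (3) can be sketched compactly as below; a minimal illustration, not the paper's implementation, in which persona P acts as the reference class whose weight vector is fixed at zero (hence the 1 in the denominator), and the feature values in the usage example are invented.

```python
# Sketch of the multiclass logistic-regression prior over personas (equation 3).
import math

def persona_prior(features, betas):
    """features: the concatenated [m_d; m_e] feature vector;
    betas: P-1 weight vectors (the P-th persona is the zero-weight reference)."""
    scores = [math.exp(sum(f * b for f, b in zip(features, bk))) for bk in betas]
    z = 1.0 + sum(scores)
    return [s / z for s in scores] + [1.0 / z]   # length-P probability vector
```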
This evaluation also helps offer guidance for model selection (in choosing the number of latent topics and personas) by measuring performance on an objective task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "First, we consider all character names that occur in at least two separate movies, generally as a consequence of remakes or sequels; this includes proper names such as \"Rocky Balboa,\" \"Oliver Twist,\" and \"Indiana Jones,\" as well as generic type names such as \"Gang Member\" and \"The Thief\"; to minimize ambiguity, we only consider character names consisting of at least two tokens. Each of these names is used by at least two different characters; for example, a character named \"Jason Bourne\" is portrayed in The Bourne Identity, The Bourne Supremacy, and The Bourne Ultimatum. While these characters are certainly free to assume different roles in different movies, we believe that, in the aggregate, they should tend to embody the same character type and thus prove to be a natural clustering to recover. 970 character names occur at least twice in our data, and 2,666 individual characters use one of those names. Let those 970 character names define 970 unique gold clusters whose members include the individual characters who use that name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Names", "sec_num": "5.1" }, { "text": "As a second external measure of validation, we consider a manually created clustering presented at the website TV Tropes, 10 a wiki that collects user-submitted examples of common tropes (narrative, character and plot devices) found in television, film, and fiction, among other media. 
While TV Tropes contains a wide range of such conventions, we manually identified a set of 72 tropes that could reasonably be labeled character types, including THE CORRUPT CORPORATE EXECUTIVE, THE HARDBOILED DETECTIVE, THE JERK JOCK, THE KLUTZ and THE SURFER DUDE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TV Tropes", "sec_num": "5.2" }, { "text": "We manually aligned user-submitted examples of characters embodying these 72 character types with the canonical references in Freebase to create a test set of 501 individual characters. While the 72 character tropes represented here are a more subjective measure, we expect to be able to at least partially recover this clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TV Tropes", "sec_num": "5.2" }, { "text": "To measure the similarity between the two clusterings of movie characters, gold clusters G and induced latent persona clusters C, we calculate the variation of information (Meil\u0103, 2007) :", "cite_spans": [ { "start": 172, "end": 185, "text": "(Meil\u0103, 2007)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Variation of Information", "sec_num": "5.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V I(G, C) = H(G) + H(C) \u2212 2I(G, C)", "eq_num": "(5)" } ], "section": "Variation of Information", "sec_num": "5.3" }, { "text": "= H(G|C) + H(C|G)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variation of Information", "sec_num": "5.3" }, { "text": "VI measures the information-theoretic distance between the two clusterings: a lower value means greater similarity, and VI = 0 if they are identical. Low VI indicates that (induced) clusters and (gold) clusters tend to overlap; i.e., knowing a character's (induced) cluster usually tells us their (gold) cluster, and vice versa. 
Variation of information is a metric (symmetric and obeys triangle inequality), and has a number of other desirable properties. Table 1 presents the VI between the learned persona clusters and gold clusters, for varying numbers of personas (P = {25, 50, 100}) and topics (K = {25, 50, 100}). To determine significance with respect to a random baseline, we conduct a permutation test (Fisher, 1935; Pitman, 1937 ) in which we randomly shuffle the labels of the learned persona clusters and count the number of times in 1,000 such trials that the VI of the observed persona labels is lower than the VI of the permuted labels; this defines a nonparametric p-value. All results presented are significant at p < 0.001 (i.e., the observed VI is always lower than the VI of the permuted labels).", "cite_spans": [ { "start": 712, "end": 726, "text": "(Fisher, 1935;", "ref_id": "BIBREF11" }, { "start": 727, "end": 739, "text": "Pitman, 1937", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 457, "end": 464, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Variation of Information", "sec_num": "5.3" }, { "text": "Over all tests in comparison to both gold clusterings, we see VI improve as both P and, to a lesser extent, K increase. While this may be expected as the number of personas increases to match the number of distinct types in the gold clusters (970 and 72, respectively), the fact that VI improves as the number of latent topics increases suggests that more fine-grained topics are helpful for capturing nuanced character types. 
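Equation (5) can be computed directly from two parallel label lists via the empirical joint distribution. A minimal sketch (the toy labels in the usage example are invented):

```python
# Sketch: variation of information, VI(G, C) = H(G) + H(C) - 2 I(G, C).
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def variation_of_information(gold, induced):
    """Lower VI means more similar clusterings; VI = 0 iff they are identical."""
    n = len(gold)
    pg, pc = Counter(gold), Counter(induced)
    joint = Counter(zip(gold, induced))
    # mutual information from the empirical joint and marginal counts
    mi = sum((c / n) * math.log(c * n / (pg[g] * pc[i]))
             for (g, i), c in joint.items())
    return entropy(gold) + entropy(induced) - 2 * mi
```

Identical clusterings give VI of 0; two independent binary clusterings of four items give VI of 2 ln 2.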
11 The difference between the persona regression model and the Dirichlet persona model here is not significant; while VI allows us to compare models with different numbers of latent clusters, its requirement that clusterings be mutually informative places a high overhead on models that are fundamentally unidirectional (in Table 1 , for example, the room for improvement between two models of the same P and K is naturally smaller than the bigger difference between different P or K). While we would naturally prefer a text-only model to be as expressive as a model that requires potentially hard to acquire metadata, we tease apart whether a distinction actually does exist by evaluating the purity of the gold clusters with respect to the labels assigned them.", "cite_spans": [ { "start": 426, "end": 428, "text": "11", "ref_id": null } ], "ref_spans": [ { "start": 750, "end": 757, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Variation of Information", "sec_num": "5.3" }, { "text": "For gold clusters G = {g 1 . . . g k } and inferred clusters C = {c 1 . . . c j } we calculate purity as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Purity", "sec_num": "5.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Purity = 1 N k max j |g k \u2229 c j |", "eq_num": "(7)" } ], "section": "Purity", "sec_num": "5.4" }, { "text": "While purity cannot be used to compare models of different persona size P , it can help us distinguish between models of the same size. 
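Equation (7) reads: for each gold cluster, take its largest overlap with any induced cluster, sum these, and divide by N. A minimal sketch over invented toy labelings:

```python
# Sketch of the purity measure (equation 7) over two parallel label lists.
from collections import defaultdict

def purity(gold, induced):
    """gold, induced: parallel lists of cluster labels, one per character."""
    overlap = defaultdict(int)
    for g, c in zip(gold, induced):
        overlap[(g, c)] += 1
    # credit each gold cluster with its best-matching induced cluster
    best = {g: max(v for (gg, _), v in overlap.items() if gg == g)
            for g in set(gold)}
    return sum(best.values()) / len(gold)
```

Note that collapsing all characters into one induced cluster yields purity 1.0 under this definition, which is exactly the degenerate case the size-controlled baseline in the text guards against.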
A model can attain perfect purity, however, by placing all characters into a single cluster; to control for this, we present a controlled baseline in which each character is assigned a latent character type label proportional to the size of the latent clusters we have learned (so that, for example, if one latent persona cluster contains 3.2% of the total characters, the probability of selecting that persona at random is 3.2%). Figure 3 : Dramatis personae of The Dark Knight (2008) , illustrating 3 of the 100 character types learned by the persona regression model, along with links from other characters in those latent classes to other movies. Each character type is listed with the top three latent topics with which it is associated.", "cite_spans": [ { "start": 546, "end": 559, "text": "Knight (2008)", "ref_id": null } ], "ref_spans": [ { "start": 505, "end": 513, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Purity", "sec_num": "5.4" }, { "text": "Table 2 presents each model's absolute purity score paired with its improvement over its controlled permutation (e.g., \u219141%).", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 69, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Purity", "sec_num": "5.4" }, { "text": "Within each fixed-size partition, the use of metadata yields a substantial improvement over the Dirichlet model, both in terms of absolute purity and in its relative improvement over its size-controlled baseline. In practice, we find that while the Dirichlet model distinguishes between character personas in different movies, the persona regression model helps distinguish between different personas within the same movie.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Purity", "sec_num": "5.4" }, { "text": "As with other generative approaches, latent persona models enable exploratory data analysis. 
To illustrate this, we present results from the persona regression model learned above, with 50 latent lexical classes and 100 latent personas. Figure 3 visualizes this data by focusing on a single movie, The Dark Knight (2008); the movie's protagonist, Batman, belongs to the same latent persona as Detective Jim Gordon, as well as other action movie protagonists Jason Bourne and Tony Stark (Iron Man). The movie's antagonist, The Joker, belongs to the same latent persona as Dracula from Van Helsing and Colin Sullivan from The Departed, illustrating the ability of personas to be informed by, but still cut across, different genres. Table 3 presents an exhaustive list of all 50 topics, along with an assigned label that consists of the single word with the highest PMI for that class. Of note are topics relating to romance (unite, marry, woo, elope, court), commercial transactions (purchase, sign, sell, owe, buy), and the classic criminal schema from Chambers (2011) (sentence, arrest, assign, convict, promote). Table 4 presents the 14 most frequent personas in our dataset, illustrated with characters from the 500 highest-grossing movies. Each persona learned is a set of three separate mixtures over the 50 latent topics (one for agent relations, one for patient relations, and one for attributes), as illustrated in Figure 1 above. Rather than presenting a 3 \u00d7 50 histogram for each persona, we illustrate each by listing the most characteristic topics, movie characters, and metadata features associated with it. Characteristic actions and features are defined as those having the highest smoothed pointwise mutual information with that class; exemplary characters are those with the highest posterior probability of being drawn from that class. 
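The smoothed-PMI criterion for picking characteristic words can be sketched as follows; the add-alpha smoothing scheme and all names here are illustrative assumptions, not the paper's exact estimator:

```python
import math
from collections import Counter

def smoothed_pmi(pairs, alpha=1.0):
    """PMI(word, class) with additive smoothing on the joint counts.
    `pairs` is an iterable of observed (word, class) tuples."""
    joint, words, classes = Counter(), Counter(), Counter()
    for w, c in pairs:
        joint[(w, c)] += 1
        words[w] += 1
        classes[c] += 1
    n = sum(joint.values())
    cells = len(words) * len(classes)  # number of possible (word, class) cells
    return {
        (w, c): math.log(((joint[(w, c)] + alpha) / (n + alpha * cells))
                         / ((words[w] / n) * (classes[c] / n)))
        for (w, c) in joint
    }

def top_words(pmi, cls, k=3):
    """Most characteristic words for a class, ranked by smoothed PMI."""
    cands = [w for (w, c) in pmi if c == cls]
    return sorted(cands, key=lambda w: pmi[(w, cls)], reverse=True)[:k]

# toy observations (hypothetical topic labels)
pairs = [("flirt", "romcom"), ("flirt", "romcom"), ("dance", "romcom"),
         ("fight", "action"), ("fight", "action"), ("shoot", "action")]
print(top_words(smoothed_pmi(pairs), "romcom"))  # -> ['dance', 'flirt']
```

As usual with PMI, rarer words can outrank more frequent ones; smoothing the joint counts dampens, but does not remove, that preference.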
Among the personas learned are canonical male action heroes (exemplified by the protagonists of The Bourne Supremacy, Speed, and Taken), superheroes (Hulk, Batman and Robin, Hector of Troy), and several romantic comedy types, largely characterized by words drawn from the FLIRT topic, including flirt, reconcile, date, dance and forgive. Table 3: Latent topics learned for K = 50 and P = 100. The words shown for each class are those with the highest smoothed PMI, with the label being the single word with the highest PMI.", "cite_spans": [ { "start": 307, "end": 320, "text": "Knight (2008)", "ref_id": null } ], "ref_spans": [ { "start": 237, "end": 245, "text": "Figure 3", "ref_id": null }, { "start": 731, "end": 738, "text": "Table 3", "ref_id": null }, { "start": 1116, "end": 1123, "text": "Table 4", "ref_id": null }, { "start": 2187, "end": 2194, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Exploratory Data Analysis", "sec_num": "6" }, { "text": "We present a method for automatically inferring latent character personas from text (and metadata, when available). While our testbed has been textual synopses of film, this approach is easily extended to other genres (such as novelistic fiction) and to non-fictional domains as well, where the choice of portraying a real-life person as embodying a particular kind of persona may, for instance, give insight into questions of media framing and bias in newswire; self-presentation of individual personas likewise has a long history in communication theory (Goffman, 1959) and may be useful for inferring user types for personalization systems (El-Arini et al., 2012). While the goal of this work has been to induce a set of latent character classes and partition all characters among them, one interesting question that remains is how a specific character's actions may informatively be at odds with their inferred persona, given the choice of that persona as the single best fit to explain the actions we observe. 
By examining how any individual character deviates from the behavior indicative of their type, we might be able to paint a more nuanced picture of how a character can embody a specific persona while resisting it at the same time.", "cite_spans": [ { "start": 556, "end": 571, "text": "(Goffman, 1959)", "ref_id": "BIBREF12" }, { "start": 643, "end": 666, "text": "(El-Arini et al., 2012)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "http://download.freebase.com/datadumps/. Whether this extreme 2:1 male/female ratio reflects an inherent bias in film or a bias in attention on Freebase (or Wikipedia, on which it draws) is an interesting research question in itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The \u2212e superscript denotes counts taken without considering the current sample for character e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://tvtropes.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This trend is robust to the choice of cluster metric: here VI and F-score have a correlation of \u22120.87; as more latent topics and personas are added, clustering improves (causing the F-score to go up and the VI distance to go down).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Megan Morrison at the CMU School of Drama for early conversations guiding our work, as well as the anonymous reviewers for helpful comments. The research reported in this article was supported by U.S. National Science Foundation grant IIS-0915187 and by an ARCS scholarship to D.B. 
This work was made possible through the use of computing resources made available by the Pittsburgh Supercomputing Center.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF2": { "ref_id": "b2", "title": "The Hero with a Thousand Faces", "authors": [ { "first": "Joseph", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 1949, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Campbell. 1949. The Hero with a Thousand Faces. Pantheon Books.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised learning of narrative event chains", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Unsu- pervised learning of narrative event chains. In Pro- ceedings of ACL-08: HLT.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unsupervised learning of narrative schemas and their participants", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 47th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathanael Chambers and Dan Jurafsky. 2009. Unsu- pervised learning of narrative schemas and their par- ticipants. 
In Proceedings of the 47th Annual Meeting of the ACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Inducing Event Schemas and their Participants from Unlabeled Text", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathanael Chambers. 2011. Inducing Event Schemas and their Participants from Unlabeled Text. Ph.D. thesis, Stanford University.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Stanford typed dependencies manual", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe and Christopher D. Manning. 2008. Stanford typed dependencies manual. Technical report, Stanford University.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Art of Dramatic Writing", "authors": [ { "first": "Lajos", "middle": [], "last": "Egri", "suffix": "" } ], "year": 1946, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lajos Egri. 1946. The Art of Dramatic Writing. 
Simon and Schuster, New York.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Transparent user models for personalization", "authors": [ { "first": "Khalid", "middle": [], "last": "El-Arini", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Paquet", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Herbrich", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 18th ACM SIGKDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khalid El-Arini, Ulrich Paquet, Ralf Herbrich, Jurgen Van Gael, and Blaise Ag\u00fcera y Arcas. 2012. Trans- parent user models for personalization. In Proceed- ings of the 18th ACM SIGKDD.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Character-based kernels for novelistic plot structure", "authors": [ { "first": "Micha", "middle": [ "Elsner" ], "last": "", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Conference of the EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micha Elsner. 2012. Character-based kernels for nov- elistic plot structure. In Proceedings of the 13th Conference of the EACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning Narrative Structure from Annotated Folktales", "authors": [ { "first": "", "middle": [], "last": "Mark Alan Finlayson", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Alan Finlayson. 2011. Learning Narrative Structure from Annotated Folktales. Ph.D. thesis, MIT.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Design of Experiments", "authors": [ { "first": "R", "middle": [ "A" ], "last": "Fisher", "suffix": "" } ], "year": 1935, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. A. Fisher. 1935. The Design of Experiments. 
Oliver and Boyd, Edinburgh and London.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Presentation of the Self in Everyday Life", "authors": [ { "first": "Erving", "middle": [], "last": "Goffman", "suffix": "" } ], "year": 1959, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erving Goffman. 1959. The Presentation of the Self in Everyday Life. Anchor.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatically producing plot unit representations for narrative text", "authors": [ { "first": "Amit", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Goyal, Ellen Riloff, and Hal Daum\u00e9, III. 2010. Automatically producing plot unit representations for narrative text. In Proceedings of the 2010 Conference on EMNLP.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised discovery of a statistical verb lexicon", "authors": [ { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Conference on EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trond Grenager and Christopher D. Manning. 2006. Unsupervised discovery of a statistical verb lexicon. 
In Proceedings of the 2006 Conference on EMNLP.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Finding scientific topics", "authors": [ { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steyvers", "suffix": "" } ], "year": 2004, "venue": "", "volume": "101", "issue": "", "pages": "5228--5235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. PNAS, 101(suppl. 1):5228-5235.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Archetypes and The Collective Unconscious", "authors": [ { "first": "Carl", "middle": [], "last": "Jung", "suffix": "" } ], "year": 1981, "venue": "Collected Works. Bollingen", "volume": "9", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl Jung. 1981. The Archetypes and The Collective Unconscious, volume 9 of Collected Works. Bollingen, Princeton, NJ, 2nd edition.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Plot induction and evolutionary search for story generation", "authors": [ { "first": "Neil", "middle": [], "last": "Mcintyre", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neil McIntyre and Mirella Lapata. 2010. Plot induction and evolutionary search for story generation. In Proceedings of the 48th Annual Meeting of the ACL. 
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Story: Substance, Structure, Style and the Principles of Screenwriting", "authors": [ { "first": "Robert", "middle": [], "last": "Mckee", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert McKee. 1997. Story: Substance, Structure, Style and the Principles of Screenwriting. Harper- Colllins.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Comparing clusterings-an information based distance", "authors": [ { "first": "Marina", "middle": [], "last": "Meil\u0203", "suffix": "" } ], "year": 2007, "venue": "Journal of Multivariate Analysis", "volume": "98", "issue": "5", "pages": "873--895", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Meil\u0203. 2007. Comparing clusterings-an in- formation based distance. Journal of Multivariate Analysis, 98(5):873-895.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Topic models conditioned on arbitrary features with dirichlet-multinomial regression", "authors": [ { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2008, "venue": "Proceedings of UAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Mimno and Andrew McCallum. 2008. Topic models conditioned on arbitrary features with dirichlet-multinomial regression. 
In Proceedings of UAI.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Distributional clustering of English words", "authors": [ { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Naftali", "middle": [], "last": "Tishby", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the ACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Significance tests which may be applied to samples from any population", "authors": [ { "first": "E", "middle": [ "J G" ], "last": "Pitman", "suffix": "" } ], "year": 1937, "venue": "Supplement to the Journal of the Royal Statistical Society", "volume": "4", "issue": "1", "pages": "119--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. J. G. Pitman. 1937. Significance tests which may be applied to samples from any population. Supplement to the Journal of the Royal Statistical Society, 4(1):119-130.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning script knowledge with web experiments", "authors": [ { "first": "Michaela", "middle": [], "last": "Regneri", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michaela Regneri, Alexander Koller, and Manfred Pinkal. 2010. Learning script knowledge with web experiments. 
In Proceedings of the 48th Annual Meeting of the ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Learning script participants from unlabeled data", "authors": [ { "first": "Michaela", "middle": [], "last": "Regneri", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michaela Regneri, Alexander Koller, Josef Ruppenhofer, and Manfred Pinkal. 2011. Learning script participants from unlabeled data. In Proceedings of the Conference on Recent Advances in Natural Language Processing.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Scripts, plans, goals, and understanding: An inquiry into human knowledge structures", "authors": [ { "first": "Roger", "middle": [ "C" ], "last": "Schank", "suffix": "" }, { "first": "Robert", "middle": [ "P" ], "last": "Abelson", "suffix": "" } ], "year": 1977, "venue": "Lawrence Erlbaum", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger C. Schank and Robert P. Abelson. 1977. Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Lawrence Erlbaum, Hillsdale, NJ.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A Bayesian approach to unsupervised semantic role induction", "authors": [ { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Klementiev", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Conference of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Titov and Alexandre Klementiev. 2012. A Bayesian approach to unsupervised semantic role induction. 
In Proceedings of the 13th Conference of EACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms", "authors": [ { "first": "Greg", "middle": [ "C", "G" ], "last": "Wei", "suffix": "" }, { "first": "Martin", "middle": [ "A" ], "last": "Tanner", "suffix": "" } ], "year": 1990, "venue": "Journal of the American Statistical Association", "volume": "85", "issue": "", "pages": "699--704", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg C. G. Wei and Martin A. Tanner. 1990. A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. Journal of the American Statistical Association, 85:699-704.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Structured relation discovery using generative models", "authors": [ { "first": "Limin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of the Conference on EMNLP.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "text": "P: Number of personas (hyperparameter); K: Number of word topics (hyperparameter); D: Number of movie plot summaries; E: Number of characters in movie d; W: Number of (role, word) tuples used by character e; \u03c6_k: Topic k's distribution over V words; r: Tuple role (agent verb, patient verb, or attribute); \u03c8_{p,r}: Distribution over topics for persona p in role r; \u03b8_d: Movie d's distribution over personas; p_e: Character e's persona (integer, p \u2208 {1..P}); j: A specific (r, w) tuple in the data; z_j: Word topic for tuple j; w_j: Word for tuple j; \u03b1: Concentration parameter for the Dirichlet model; \u03b2: Feature weights for the regression model; \u00b5, \u03c3^2: Gaussian mean and variance (for regularizing \u03b2); m_d: Movie features (from movie metadata); m_e: Entity features (from movie actor metadata); \u03bd_r, \u03b3: Dirichlet concentration parameters", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Above: Dirichlet persona model (left) and persona regression model (right). Bottom: Definition of variables.", "uris": null, "num": null }, "TABREF1": { "num": null, "content": "
K | Model | Character Names \u00a75.1 (P=25 / P=50 / P=100) | TV Tropes \u00a75.2 (P=25 / P=50 / P=100)
25 | Persona regression | 62.8 (\u219141%) / 59.5 (\u219140%) / 53.7 (\u219133%) | 42.3 (\u219131%) / 38.5 (\u219124%) / 33.1 (\u219125%)
25 | Dirichlet persona | 54.7 (\u219127%) / 50.5 (\u219126%) / 45.4 (\u219117%) | 39.5 (\u219120%) / 31.7 (\u219128%) / 25.1 (\u219121%)
50 | Persona regression | 63.1 (\u219142%) / 59.8 (\u219142%) / 53.6 (\u219134%) | 42.9 (\u219130%) / 39.1 (\u219133%) / 31.3 (\u219120%)
50 | Dirichlet persona | 57.2 (\u219134%) / 49.0 (\u219123%) / 44.7 (\u219116%) | 39.7 (\u219130%) / 31.5 (\u219132%) / 24.6 (\u219122%)
100 | Persona regression | 63.1 (\u219142%) / 57.7 (\u219139%) / 53.0 (\u219134%) | 43.5 (\u219133%) / 32.1 (\u219128%) / 26.5 (\u219122%)
100 | Dirichlet persona | 55.3 (\u219130%) / 49.5 (\u219124%) / 45.2 (\u219118%) | 39.7 (\u219134%) / 29.9 (\u219124%) / 23.6 (\u219119%)
", "html": null, "type_str": "table", "text": "Variation of information between learned personas and gold clusters for different numbers of topics K and personas P. Lower values are better. All values are reported in bits." }, "TABREF2": { "num": null, "content": "", "html": null, "type_str": "table", "text": "Purity scores of recovering gold clusters. Higher values are better. Each absolute purity score is paired with its improvement over a controlled baseline of permuting the learned labels while keeping the cluster proportions the same." } } } }