{ "paper_id": "P18-1043", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:40:46.616114Z" }, "title": "Event2Mind: Commonsense Inference on Events, Intents, and Reactions", "authors": [ { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "hrashkin@cs.washington.edu" }, { "first": "\u2020\u21e4", "middle": [], "last": "Maarten", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "" }, { "first": "Sap", "middle": [], "last": "\u2020\u21e4", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "" }, { "first": "Emily", "middle": [], "last": "Allaway", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "eallaway@cs.washington.edu" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "nasmith@cs.washington.edu" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "" }, { "first": "Paul", "middle": [ "G" ], "last": "Allen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We investigate a new commonsense inference task: given an event described in a short free-form text (\"X drinks coffee in the morning\"), a system reasons about the likely intents (\"X wants to stay awake\") and reactions (\"X feels alert\") of the event's participants. 
To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people's intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.", "pdf_parse": { "paper_id": "P18-1043", "_pdf_hash": "", "abstract": [ { "text": "We investigate a new commonsense inference task: given an event described in a short free-form text (\"X drinks coffee in the morning\"), a system reasons about the likely intents (\"X wants to stay awake\") and reactions (\"X feels alert\") of the event's participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people's intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Understanding a narrative requires commonsense reasoning about the mental states of people in relation to events. For example, if \"Alex is dragging his feet at work\", pragmatic implications about Alex's intent are that \"Alex wants to avoid doing things\" (Figure 1 ). We can also infer that Alex's emotional reaction might be feeling \"lazy\" or \"bored\". 
Furthermore, while not explicitly mentioned, we can infer that people other than Alex are affected by the situation, and these people are likely to feel \"frustrated\" or \"impatient\".", "cite_spans": [], "ref_spans": [ { "start": 254, "end": 263, "text": "(Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This type of pragmatic inference can potentially be useful for a wide range of NLP applications that require accurate anticipation of people's intents and emotional reactions, even when they are not explicitly mentioned. (\u21e4 These two authors contributed equally.) Figure 1: Examples of commonsense inference on mental states of event participants. In the third example event, common sense tells us that Y is likely to feel betrayed as a result of X reading their diary. For example, an ideal dialogue system should react in empathetic ways by reasoning about the human user's mental state based on the events the user has experienced, without the user explicitly stating how they are feeling. Similarly, advertisement systems on social media should be able to reason about the emotional reactions of people after events such as mass shootings and remove ads for guns which might increase social distress (Goel and Isaac, 2016) . Also, pragmatic inference is a necessary step toward automatic narrative understanding and generation (Tomai and Forbus, 2010; Ding and Riloff, 2016; Ding et al., 2017) . 
However, this type of social commonsense reasoning goes far beyond the widely studied entailment tasks (Bowman et al., 2015; Dagan et al., 2006) and thus falls outside the scope of existing benchmarks.", "cite_spans": [ { "start": 903, "end": 925, "text": "(Goel and Isaac, 2016)", "ref_id": "BIBREF19" }, { "start": 1030, "end": 1054, "text": "(Tomai and Forbus, 2010;", "ref_id": "BIBREF47" }, { "start": 1055, "end": 1077, "text": "Ding and Riloff, 2016;", "ref_id": "BIBREF14" }, { "start": 1078, "end": 1096, "text": "Ding et al., 2017)", "ref_id": "BIBREF13" }, { "start": 1202, "end": 1223, "text": "(Bowman et al., 2015;", "ref_id": "BIBREF6" }, { "start": 1224, "end": 1243, "text": "Dagan et al., 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 137, "end": 145, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Table 1: Example annotations of intent and reactions for 6 event phrases. Each annotator could fill in up to three free-responses for each mental state.", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 55, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "PersonX drags", "sec_num": null }, { "text": "In this paper, we introduce a new task, corpus, and model, supporting commonsense inference on events with a specific focus on modeling stereotypical intents and reactions of people, described in short free-form text. Our study is in a similar spirit to recent efforts of Ding and Riloff (2016) and Zhang et al. (2017) , in that we aim to model aspects of commonsense inference via natural language descriptions. Our new contributions are:", "cite_spans": [ { "start": 224, "end": 246, "text": "Ding and Riloff (2016)", "ref_id": "BIBREF14" }, { "start": 251, "end": 270, "text": "Zhang et al. 
(2017)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "PersonX drags", "sec_num": null }, { "text": "(1) a new corpus that supports commonsense inference about people's intents and reactions over a diverse range of everyday events and situations,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PersonX drags", "sec_num": null }, { "text": "inference about even those people who are not directly mentioned by the event phrase, and (3) a task formulation that aims to generate the textual descriptions of intents and reactions, instead of classifying their polarities or classifying the inference relations between two given textual descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PersonX drags", "sec_num": null }, { "text": "Our work establishes baseline performance on this new task, demonstrating that, given the phrase-level inference dataset, neural encoderdecoder models can successfully compose phrasal embeddings for previously unseen events and reason about the mental states of their participants. Furthermore, in order to showcase the practical implications of commonsense inference on events and people's mental states, we apply our model to modern movie scripts, which provide a new insight into the gender bias in modern films beyond what previous studies have offered (England et al., 2011; Agarwal et al., 2015; Ramakrishna et al., 2017; Sap et al., 2017) . The resulting corpus includes around 25,000 event phrases, which combine automatically extracted phrases from stories and blogs with all idiomatic verb phrases listed in the Wiktionary. Our corpus is publicly available. 
1", "cite_spans": [ { "start": 557, "end": 579, "text": "(England et al., 2011;", "ref_id": "BIBREF17" }, { "start": 580, "end": 601, "text": "Agarwal et al., 2015;", "ref_id": "BIBREF1" }, { "start": 602, "end": 627, "text": "Ramakrishna et al., 2017;", "ref_id": "BIBREF38" }, { "start": 628, "end": 645, "text": "Sap et al., 2017)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "PersonX drags", "sec_num": null }, { "text": "One goal of our investigation is to probe whether it is feasible to build computational models that can perform limited, but well-scoped commonsense inference on short free-form text, which we refer to as event phrases. While there has been much prior research on phrase-level paraphrases (Pavlick et al., 2015) and phrase-level entailment (Dagan et al., 2006) , relatively little prior work focused on phrase-level inference that requires prag-matic or commonsense interpretation. We scope our study to two distinct types of inference: given a phrase that describes an event, we want to reason about the likely intents and emotional reactions of people who caused or affected by the event. 
This complements prior work on more general commonsense inference (Speer and Havasi, 2012; Li et al., 2016; Zhang et al., 2017) , by focusing on the causal relations between events and people's mental states, which are not well covered by most existing resources.", "cite_spans": [ { "start": 289, "end": 311, "text": "(Pavlick et al., 2015)", "ref_id": "BIBREF36" }, { "start": 340, "end": 360, "text": "(Dagan et al., 2006)", "ref_id": "BIBREF11" }, { "start": 757, "end": 781, "text": "(Speer and Havasi, 2012;", "ref_id": "BIBREF45" }, { "start": 782, "end": 798, "text": "Li et al., 2016;", "ref_id": "BIBREF31" }, { "start": 799, "end": 818, "text": "Zhang et al., 2017)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "We collect a wide range of phrasal event descriptions from stories, blogs, and Wiktionary idioms. Compared to prior work on phrasal embeddings (Wieting et al., 2015; Pavlick et al., 2015) , our work generalizes the phrases by introducing (typed) variables. In particular, we replace words that correspond to entity mentions or pronouns with typed variables such as PersonX or PersonY, as shown in examples in Table 1 . More formally, the phrases we extract are a combination of a verb predicate with partially instantiated arguments. We keep specific arguments together with the predicate, if they appear frequently enough (e.g., PersonX eats pasta for dinner). Otherwise, the arguments are replaced with an untyped blank (e.g., PersonX eats for dinner). 
In our work, only person mentions are replaced with typed variables, leaving other types to future research.", "cite_spans": [ { "start": 143, "end": 165, "text": "(Wieting et al., 2015;", "ref_id": "BIBREF50" }, { "start": 166, "end": 187, "text": "Pavlick et al., 2015)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 409, "end": 416, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "Inference types The first type of pragmatic inference is about intent. We define intent as an explanation of why the agent causes a volitional event to occur (or \"none\" if the event phrase was unintentional). The intent can be considered a mental pre-condition of an action or an event. For example, if the event phrase is PersonX takes a stab at , the annotated intent might be that \"PersonX wants to solve a problem\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "The second type of pragmatic inference is about emotional reaction. We define reaction as an explanation of how the mental states of the agent and other people involved in the event would change as a result. The reaction can be considered a mental post-condition of an action or an event. For example, if the event phrase is PersonX gives PersonY as a gift, PersonX might \"feel good about themselves\" as a result, and PersonY might \"feel grateful\" or \"feel thankful\". ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "We extract phrasal events from three different corpora for broad coverage: the ROC Story training set (Mostafazadeh et al., 2016) , the Google Syntactic N-grams (Goldberg and Orwant, 2013) , and the Spinn3r corpus (Gordon and Swanson, 2008) . We derive events from the set of verb phrases in our corpora, based on syntactic parses (Klein and Manning, 2003) . 
We then replace the predicate subject and other entities with the typed variables (e.g., PersonX, PersonY), and selectively substitute verb arguments with blanks ( ). We use frequency thresholds to select events to annotate (for details, see Appendix A.1). Additionally, we supplement the list of events with all 2,000 verb idioms found in Wiktionary, in order to cover events that are less compositional. 2 Our final annotation corpus contains nearly 25,000 event phrases, spanning over 1,300 unique verb predicates (Table 2 ).", "cite_spans": [ { "start": 102, "end": 129, "text": "(Mostafazadeh et al., 2016)", "ref_id": "BIBREF35" }, { "start": 161, "end": 188, "text": "(Goldberg and Orwant, 2013)", "ref_id": "BIBREF20" }, { "start": 214, "end": 240, "text": "(Gordon and Swanson, 2008)", "ref_id": "BIBREF21" }, { "start": 331, "end": 356, "text": "(Klein and Manning, 2003)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 876, "end": 884, "text": "(Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Event Extraction", "sec_num": "2.1" }, { "text": "We design an Amazon Mechanical Turk task to annotate the mental pre-and post-conditions of event phrases. A snippet of our MTurk HIT design is shown in Figure 2 . For each phrase, we ask three annotators whether the agent of the event, PersonX, intentionally causes the event, and if so, to provide up to three possible textual descriptions of their intents. We then ask annotators to provide up to three possible reactions that PersonX might experience as a result. We also ask annotators to provide up to three possible reactions of other people, when applicable. 
These other people can be either explicitly mentioned (e.g., \"PersonY\" in PersonX punches", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 160, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Crowdsourcing", "sec_num": "2.2" }, { "text": "PersonY's lights out), or only implied. Figure 2: Intent portion of our annotation task. We allow annotators to label events as invalid if the phrase is unintelligible. The full annotation setup is shown in Figure 8 in the appendix.", "cite_spans": [], "ref_spans": [ { "start": 39, "end": 47, "text": "Figure 2", "ref_id": null }, { "start": 207, "end": 215, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Crowdsourcing", "sec_num": "2.2" }, { "text": "For example, given the event description PersonX yells at the classroom, we can infer that other people, such as \"students\" in the classroom, may be affected by the act of PersonX. For quality control, we periodically removed workers with high disagreement rates, at our discretion. To prune the set of events that will be annotated for intent and reaction, we ran a preliminary annotation to filter out candidate events that have implausible coreferences. In this preliminary task, annotators were shown a combinatorial list of coreferences for an event (e.g., PersonX punches PersonX's lights out, PersonX punches PersonY's lights out) and were asked to select only the plausible ones (e.g., PersonX punches PersonY's lights out). Each set of coreferences was annotated by 3 workers, yielding an overall agreement of \u03ba = 0.4. This annotation excluded 8,406 events with implausible coreference from our set (out of 17,806 events).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Crowdsourcing", "sec_num": "2.2" }, { "text": "Our dataset contains nearly 25,000 event phrases, with annotators rating 91% of our extracted events as \"valid\" (i.e., the event makes sense). 
Of those events, annotations for the multiple choice portions of the task (whether or not there exists intent/reaction) agree moderately, with an average Cohen's \u03ba = 0.45 (Table 2) . The individual \u03ba scores generally indicate that Turkers disagree half as often as if they were randomly selecting answers.", "cite_spans": [], "ref_spans": [ { "start": 316, "end": 324, "text": "Table 2)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Mental State Descriptions", "sec_num": "2.3" }, { "text": "Importantly, this level of agreement is acceptable in our task formulation for two reasons. First, unlike linguistic annotations on syntax or semantics, where experts in the corresponding theory would generally agree on a single correct label, pragmatic interpretations may better be defined as distributions over multiple correct labels (e.g., after PersonX takes a test, PersonX might feel relieved and/or stressed; de Marneffe et al., 2012). Second, because we formulate our task as a conditional language modeling problem, where a distribution over the textual descriptions of intents and reactions is conditioned on the event description, this variation in the labels is to be expected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mental State Descriptions", "sec_num": "2.3" }, { "text": "A majority of our events are annotated as willingly caused by the agent (86%, Cohen's \u03ba = 0.48), and 26% involve other people (\u03ba = 0.41). Most event patterns in our data are fully instantiated, with only 22% containing blanks ( ). In our corpus, the intent annotations are slightly longer (3.4 words on average) than the reaction annotations (1.5 words).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mental State Descriptions", "sec_num": "2.3" }, { "text": "Given an event phrase, our models aim to generate three entity-specific pragmatic inferences: PersonX's intent, PersonX's reaction, and others' reactions. 
The general outline of our model architecture is illustrated in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 220, "end": 228, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "The input to our model is an event pattern described through free-form text with typed variables such as PersonX gives PersonY as a gift. For notation purposes, we describe each event pattern E as a sequence of word embeddings \u27e8e_1, e_2, . . . , e_n\u27e9 \u2208 R^{n\u00d7D}. This input is encoded as a vector h_E \u2208 R^H that will be used for predicting output. The output of the model is its hypotheses about PersonX's intent, PersonX's reaction, and others' reactions (v_i, v_x, and v_o, respectively). Figure 3: Overview of the model architecture.", "cite_spans": [], "ref_spans": [ { "start": 530, "end": 538, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "From an encoded event, our model predicts intents and reactions in a multitask setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "We experiment with representing the output in two decoding set-ups: three vectors interpretable as discrete distributions over words and phrases (n-gram re-ranking) or three sequences of words (sequence decoding).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "Encoding events The input event phrase E is compressed into an H-dimensional embedding h_E via an encoding function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "f : R^{n\u00d7D} \u2192 R^H, h_E = f(e_1, . . . , e_n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "We experiment with several ways for defining f, inspired by standard techniques in sentence and phrase classification (Kim, 2014) . 
First, we experiment with max-pooling and mean-pooling over the word vectors {e_i}_{i=1}^{n}. We also consider a convolutional neural network (ConvNet; LeCun et al., 1998) taking the last layer of the network as the encoded version of the event. Lastly, we encode the event phrase with a bi-directional RNN (specifically, a GRU; Cho et al., 2014) , concatenating the final hidden states of the forward and backward cells as the encoding:", "cite_spans": [ { "start": 119, "end": 130, "text": "(Kim, 2014)", "ref_id": "BIBREF28" }, { "start": 273, "end": 282, "text": "(ConvNet;", "ref_id": null }, { "start": 283, "end": 302, "text": "LeCun et al., 1998)", "ref_id": "BIBREF30" }, { "start": 460, "end": 477, "text": "Cho et al., 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "h_E = [\u2192h_n ; \u2190h_1].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "For hyperparameters and other details, we refer the reader to Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "Though the event sequences are typically rather short (4.6 tokens on average), our model still benefits from the ConvNet and BiRNN's ability to compose words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "We use three decoding modules that take the event phrase embedding h_E and output distributions of possible PersonX's intent (v_i), PersonX's reactions (v_x), and others' reactions (v_o). We experiment with two different decoder set-ups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic inference decoding", "sec_num": null }, { "text": "First, we experiment with n-gram re-ranking, considering the |V| most frequent {1, 2, 3}-grams in our annotations. 
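As a concrete illustration of the encoders above, the toy sketch below shows mean-pooling, max-pooling, and the BiRNN-style concatenation of forward and backward final states. This is not the authors' code: all shapes and weights are invented stand-ins, and a plain tanh cell stands in for the GRU.

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal((4, 300))  # event phrase as word vectors, shape (n, D)

# pooling encoders: h_E keeps the word-embedding dimensionality D
h_mean = E.mean(axis=0)
h_max = E.max(axis=0)

# BiRNN-style encoder: run a recurrence forward and backward over the words,
# then concatenate the two final hidden states, h_E = [fwd h_n ; bwd h_1]
H = 100
W_in = rng.standard_normal((300, H)) * 0.01
W_h = rng.standard_normal((H, H)) * 0.01

def run_rnn(seq):
    h = np.zeros(H)
    for e in seq:
        h = np.tanh(e @ W_in + h @ W_h)  # toy tanh cell in place of a GRU
    return h

h_E = np.concatenate([run_rnn(E), run_rnn(E[::-1])])  # shape (2H,)
```

In the paper's setup D = 300 (skip-gram embeddings); the ConvNet variant would replace the recurrence with convolutions over the word window.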
Each decoder projects the event phrase embedding h_E into a |V|-dimensional vector, which is then passed through a softmax function. For instance, the distribution over descriptions of PersonX's intent is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic inference decoding", "sec_num": null }, { "text": "v_i = softmax(W_i h_E + b_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic inference decoding", "sec_num": null }, { "text": "Second, we experiment with sequence generation, using RNN decoders to generate the textual description. The event phrase embedding h_E is set as the initial state h_dec of three decoder RNNs (using GRU cells), which then output the intent/reactions one word at a time (using beam-search at test time). For example, an event's intent sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic inference decoding", "sec_num": null }, { "text": "(v_i = v_i^(0) v_i^(1) . . .) is computed as follows: v_i^(t+1) = softmax(W_i RNN(v_i^(t), h_i,dec^(t)) + b_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic inference decoding", "sec_num": null }, { "text": "Training objective We minimize the cross-entropy between the predicted distribution over words and phrases and the one actually observed in our dataset. Further, we employ multitask learning, simultaneously minimizing the loss for all three decoders at each iteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic inference decoding", "sec_num": null }, { "text": "Training details We fix our input embeddings, using 300-dimensional skip-gram word embeddings trained on Google News (Mikolov et al., 2013) . For decoding, we consider a vocabulary of size |V| = 14,034 in the n-gram re-ranking setup. 
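To make the n-gram re-ranking head concrete: it is a single learned projection of h_E followed by a softmax over the |V| candidate phrases, v_i = softmax(W_i h_E + b_i). A minimal sketch with hypothetical sizes and random weights standing in for learned ones:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)
h_E = rng.standard_normal(100)             # event embedding, toy H = 100
W_i = rng.standard_normal((5, 100)) * 0.1  # intent head, toy |V| = 5
b_i = np.zeros(5)

# a proper probability distribution over candidate intent phrases
v_i = softmax(W_i @ h_E + b_i)
```

The reaction heads v_x and v_o are the same construction with their own weights; the sequence-decoding variant instead feeds h_E in as the initial GRU state and applies this softmax at every step.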
For the sequence decoding setup, we only consider the unigrams in V, yielding an output space of 7,110 at each time step.", "cite_spans": [ { "start": 117, "end": 139, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Pragmatic inference decoding", "sec_num": null }, { "text": "We randomly divided our set of 24,716 unique events (57,094 annotations) into a training/dev./test set using an 80/10/10% split. Some annotations have multiple responses (i.e., a crowdworker gave multiple possible intents and reactions), in which case we take each combination of their responses as a separate training example. Table 3 summarizes the performance of different encoding models on the dev and test set in terms of cross-entropy and recall at 10 predicted intents and reactions. As expected, we see a moderate improvement in recall and cross-entropy when using the more compositional encoder models (ConvNet and BiRNN; both n-gram and sequence decoding setups). Table 3: Average cross-entropy (lower is better) and recall @10 (percentage of times the gold falls within the top 10 decoded; higher is better) on development and test sets for different modeling variations. We show recall values for PersonX's intent, PersonX's reaction, and others' reaction (denoted as \"Intent\", \"XReact\", and \"OReact\"). Note that because of the two different decoding setups, cross-entropy values for n-gram and sequence decoding are not directly comparable.", "cite_spans": [], "ref_spans": [ { "start": 336, "end": 343, "text": "Table 3", "ref_id": null }, { "start": 670, "end": 677, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Pragmatic inference decoding", "sec_num": null }, { "text": "Additionally, BiRNN models outperform ConvNets on cross-entropy in both decoding setups. Looking at the recall split across intent vs.
reaction labels (\"Intent\", \"XReact\" and \"OReact\" columns), we see that much of the improvement in using these two models is within the prediction of PersonX's intents. Note that recall for \"OReact\" is much higher, since a majority of events do not involve other people.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Results", "sec_num": "4" }, { "text": "Human evaluation To further assess the quality of our models, we randomly select 100 events from our test set and ask crowd-workers to rate generated intents and reactions. We present 5 workers with an event's top 10 most likely intents and reactions according to our model and ask them to select all those that make sense to them. We evaluate each model's precision @10 by computing the average number of generated responses that make sense to annotators. Figure 4 summarizes the results of this evaluation. In most cases, the performance is higher for the sequential decoder than the corresponding n-gram decoder. The biggest gain from using sequence decoders is in intent prediction, possibly because intent explanations are more likely to be longer. The BiRNN and ConvNet encoders consistently have higher precision than mean-pooling, with the BiRNN-seq setup slightly outperforming the other models. Unless otherwise specified, this is the model we employ in further sections. Figure 4: Average precision @10 of each model's top ten responses in the human evaluation. We show results for various encoder functions (mean-pool, ConvNet, BiRNN-100d) combined with two decoding setups (n-gram re-ranking, sequence generation).", "cite_spans": [], "ref_spans": [ { "start": 457, "end": 465, "text": "Figure 4", "ref_id": null }, { "start": 981, "end": 989, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Empirical Results", "sec_num": "4" }, { "text": "We test whether commonsense inference is easier to predict for certain types of events. 
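The two evaluation numbers used in this section can be sketched as follows; this is a hypothetical illustration of the metric definitions, not the authors' evaluation code. Recall @10 asks whether the gold annotation appears among the top 10 decoded responses, and precision @10 averages how many of the top 10 responses annotators marked as sensible.

```python
def recall_at_10(golds, ranked_lists):
    # fraction of events whose gold annotation appears in the top 10 outputs
    hits = sum(1 for g, ranked in zip(golds, ranked_lists) if g in ranked[:10])
    return hits / len(golds)

def precision_at_10(judgements):
    # judgements: per event, booleans marking which of the 10 outputs made sense
    return sum(sum(js[:10]) for js in judgements) / (10.0 * len(judgements))

# invented mini-example: the gold intent is recovered for one of two events
golds = ['to relax', 'to have fun']
ranked = [['to relax', 'to rest'], ['to sleep', 'to nap']]
score = recall_at_10(golds, ranked)  # 1 hit out of 2 events -> 0.5
```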
Figure 5: Sample predictions from homotopic embeddings (gradual interpolation between Event1 and Event2), selected from the top 10 beam elements decoded in the sequence generation setup. Examples highlight differences captured when ideas are similar (going to and coming from school), when only a single word differs (washes versus cuts), and when two events are unrelated. In Figure 6, we compare performance on different subsets of the development set. While recall is similar for all three sets of events, it is 10% behind intent prediction on the full development set. Additionally, predicting other people's reactions is more difficult for the model when other people are explicitly mentioned. Unsurprisingly, idioms are particularly difficult for commonsense inference, perhaps due to the difficulty in composing meaning over nonliteral or noncompositional event descriptions.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 101, "text": "Figure 6,", "ref_id": "FIGREF0" }, { "start": 102, "end": 110, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Error analyses", "sec_num": null }, { "text": "To further evaluate the geometry of the embedding space, we analyze interpolations between pairs of event phrases (from outside the train set), similar to the homotopic analysis of Bowman et al. (2016) . For a handful of event pairs, we decode intents, reactions for PersonX, and reactions for other people from points sampled at equal intervals on the interpolated line between two event phrases. We show examples in Figure 5 . The embedding space distinguishes changes from generally positive to generally negative words and is also able to capture small differences between event phrases (such as \"washes\" versus \"cuts\").", "cite_spans": [ { "start": 181, "end": 201, "text": "Bowman et al. 
(2016)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 419, "end": 427, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Error analyses", "sec_num": null }, { "text": "Through Event2Mind inference, we can attempt to bring to the surface what is implied about people's behavior and mental states. We employ this inference to analyze implicit bias in modern films. As shown in Figure 7 , our model is able to analyze character portrayal beyond what is explicit in text, by performing pragmatic inference on character actions to explain aspects of a character's mental state. In this section, we use our model's inference to shed light on gender differences in intents behind and reactions to characters' actions.", "cite_spans": [], "ref_spans": [ { "start": 207, "end": 215, "text": "Figure 7", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Analyzing Bias via Event2Mind Inference", "sec_num": "5" }, { "text": "For our portrayal analyses, we use scene descriptions from 772 movie scripts released by Gorinski and Lapata (2015), assigned to over 21,000 characters as done by Sap et al. (2017) . We extract events from the scene descriptions, and generate their 10 most probable intent and reaction sequences using our BiRNN sequence model (as in Figure 7 ). We then categorize generated intents and reactions into groups based on LIWC category scores of the generated output (Tausczik and Pennebaker, 2016 (2007, top) and Pretty Woman (1990, bottom) , augmented with Event2mind inferences on the characters' intents and reactions. E.g., our model infers that the event PersonX sits on PersonX's bed, lost in thought implies that the agent, Vivian, is sad or worried. aggregated for each character, and standardized (zero-mean and unit variance).", "cite_spans": [ { "start": 163, "end": 180, "text": "Sap et al. 
(2017)", "ref_id": "BIBREF43" }, { "start": 463, "end": 493, "text": "(Tausczik and Pennebaker, 2016", "ref_id": "BIBREF46" } ], "ref_spans": [ { "start": 334, "end": 342, "text": "Figure 7", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Processing of Movie Scripts", "sec_num": "5.1" }, { "text": "We compute correlations with gender for each category of intent or reaction using a logistic regression model, testing significance while using Holm's correction for multiple comparisons (Holm, 1979). 4 To account for the gender skew in scene presence (29.4% of scenes have women), we statistically control for the total number of words in a character's scene descriptions. Note that the original event phrases are all gender agnostic, as their participants have been replaced by variables (e.g., PersonX). We also find that the types of gender biases uncovered remain similar when we run these analyses on the human annotations or the generated words and phrases from the BiRNN with n-gram re-ranking decoding setup.", "cite_spans": [ { "start": 187, "end": 199, "text": "(Holm, 1979)", "ref_id": "BIBREF26" }, { "start": 202, "end": 203, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Processing of Movie Scripts", "sec_num": "5.1" }, { "text": "and Needs', 'Personal Concerns', 'Biological Processes', 'Cognitive Processes', 'Social Words', 'Affect Words', 'Perceptual Processes'. 
We refer the reader to Tausczik and Pennebaker (2016) or http://liwc.wpengine.com/compare-dictionaries/ for a complete list of category descriptions.", "cite_spans": [ { "start": 4, "end": 134, "text": "Needs', 'Personal Concerns', 'Biological Processes', 'Cognitive Processes', 'Social Words', 'Affect Words', 'Perceptual Processes'", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Processing of Movie Scripts", "sec_num": "5.1" }, { "text": "4 Given the data limitation, we represent gender as a binary, but acknowledge that gender is a more complex social construct. Our Event2Mind inferences automate portrayal analyses that previously required manual annotations (Behm-Morawitz and Mastro, 2008; Prentice and Carranza, 2002; England et al., 2011). As shown in Table 4, our results indicate a gender bias in the behavior ascribed to characters, consistent with the psychology and gender studies literature (Collins, 2011). Specifically, events with female semantic agents are intended to be helpful to other people (intents involving FRIEND, FAMILY, and AFFILIATION), particularly relating to eating and making food for themselves and others (INGEST, BODY). Events with male agents, on the other hand, are motivated by, and result in, achievements (ACHIEVE, MONEY, REWARDS, POWER).", "cite_spans": [ { "start": 224, "end": 256, "text": "(Behm-Morawitz and Mastro, 2008;", "ref_id": "BIBREF3" }, { "start": 257, "end": 285, "text": "Prentice and Carranza, 2002;", "ref_id": "BIBREF37" }, { "start": 286, "end": 307, "text": "England et al., 2011)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 319, "end": 326, "text": "Table 4", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Processing of Movie Scripts", "sec_num": "5.1" }, { "text": "Women's looks and sexuality are also emphasized, as their actions' intents and reactions are sexual, seen, or felt (SEXUAL, SEE, PERCEPT). 
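The per-category significance testing described above (one logistic regression per LIWC category, with Holm's correction applied across all categories) can be sketched as follows. This is a minimal sketch: the category names and raw p-values are hypothetical placeholders, not the paper's results, and the regressions producing them are elided.

```python
def holm_adjust(pvals):
    """Holm's step-down correction: the i-th smallest raw p-value is
    scaled by (m - i + 1); a running maximum enforces monotonicity and
    results are capped at 1. Returns adjusted p-values in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

# Hypothetical raw p-values, one per LIWC category, each from a logistic
# regression predicting character gender from that category's standardized
# score (controlling for total scene-description word count).
raw = {"ACHIEVE": 0.004, "INGEST": 0.012, "SEXUAL": 0.030, "RISK": 0.200}
adj = dict(zip(raw, holm_adjust(list(raw.values()))))
significant = [c for c, p in adj.items() if p < 0.05]
```

The step-down here matches the standard Holm procedure (as implemented, e.g., by statsmodels' `multipletests` with `method='holm'`); under the hypothetical inputs above, only ACHIEVE and INGEST survive correction at the 0.05 level.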
Men's actions, on the other hand, are motivated by violence or fighting (DEATH, ANGER, RISK), with strong negative reactions (SAD, ANGER, NEGATIVE EMOTION).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Processing of Movie Scripts", "sec_num": "5.1" }, { "text": "Our approach decodes nuanced implications into more explicit statements, helping to identify and explain the gender bias that is prevalent in modern literature and media. Specifically, our results indicate that modern movies are biased toward portraying female characters as having pro-social attitudes, whereas male characters are portrayed as being competitive or pro-achievement. This is consistent with gender stereotypes that have been studied in movies in both the NLP and psychology literature (Agarwal et al., 2015; Madaan et al., 2017; Prentice and Carranza, 2002; England et al., 2011).", "cite_spans": [ { "start": 489, "end": 511, "text": "(Agarwal et al., 2015;", "ref_id": "BIBREF1" }, { "start": 512, "end": 532, "text": "Madaan et al., 2017;", "ref_id": "BIBREF32" }, { "start": 533, "end": 561, "text": "Prentice and Carranza, 2002;", "ref_id": "BIBREF37" }, { "start": 562, "end": 583, "text": "England et al., 2011)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Processing of Movie Scripts", "sec_num": "5.1" }, { "text": "Prior work has sought formal frameworks for inferring roles and other attributes in relation to events (Baker et al., 1998; Das et al., 2014; Schuler et al., 2009; Hartshorne et al., 2013, inter alia), implicitly connoted by events (Reisinger et al., 2015; White et al., 2016; Greene, 2007; Rashkin et al., 2016), or sentiment polarities of events (Ding and Riloff, 2016; Choi and Wiebe, 2014; Russo et al., 2015; Ding and Riloff, 2018). 
In addition, recent work has studied the patterns which evoke certain polarities (Reed et al., 2017) , the desires which make events affective (Ding et al., 2017) , the emotions caused by events (Vu et al., 2014) , or, conversely, identifying events or reasoning behind particular emotions (Gui et al., 2017) . Compared to this prior literature, our work uniquely learns to model intents and reactions over a diverse set of events, includes inference over event participants not explicitly mentioned in text, and formulates the task as predicting the textual descriptions of the implied commonsense instead of classifying various event attributes.", "cite_spans": [ { "start": 103, "end": 123, "text": "(Baker et al., 1998;", "ref_id": "BIBREF2" }, { "start": 124, "end": 141, "text": "Das et al., 2014;", "ref_id": "BIBREF12" }, { "start": 142, "end": 163, "text": "Schuler et al., 2009;", "ref_id": "BIBREF44" }, { "start": 164, "end": 200, "text": "Hartshorne et al., 2013, inter alia)", "ref_id": null }, { "start": 233, "end": 257, "text": "(Reisinger et al., 2015;", "ref_id": "BIBREF41" }, { "start": 258, "end": 277, "text": "White et al., 2016;", "ref_id": "BIBREF49" }, { "start": 278, "end": 291, "text": "Greene, 2007;", "ref_id": "BIBREF23" }, { "start": 292, "end": 313, "text": "Rashkin et al., 2016)", "ref_id": "BIBREF39" }, { "start": 350, "end": 373, "text": "(Ding and Riloff, 2016;", "ref_id": "BIBREF14" }, { "start": 374, "end": 395, "text": "Choi and Wiebe, 2014;", "ref_id": "BIBREF9" }, { "start": 396, "end": 415, "text": "Russo et al., 2015;", "ref_id": "BIBREF42" }, { "start": 416, "end": 438, "text": "Ding and Riloff, 2018)", "ref_id": "BIBREF15" }, { "start": 522, "end": 541, "text": "(Reed et al., 2017)", "ref_id": "BIBREF40" }, { "start": 584, "end": 603, "text": "(Ding et al., 2017)", "ref_id": "BIBREF13" }, { "start": 636, "end": 653, "text": "(Vu et al., 2014)", "ref_id": "BIBREF48" }, { "start": 731, "end": 749, "text": "(Gui et al., 2017)", "ref_id": 
"BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Previous work in natural language inference has focused on linguistic entailment (Bowman et al., 2015; Bos and Markert, 2005), while ours focuses on commonsense-based inference. There has also been inference or entailment work that is more generation-focused: generating, e.g., entailed statements (Zhang et al., 2017; Blouw and Eliasmith, 2018), explanations of causality (Kang et al., 2017), or paraphrases (Dong et al., 2017). Our work also aims at generating inferences from sentences; however, our models infer implicit information about mental states and causality, which has not been studied by most previous systems.", "cite_spans": [ { "start": 81, "end": 102, "text": "(Bowman et al., 2015;", "ref_id": "BIBREF6" }, { "start": 103, "end": 125, "text": "Bos and Markert, 2005)", "ref_id": "BIBREF5" }, { "start": 297, "end": 317, "text": "(Zhang et al., 2017;", "ref_id": "BIBREF51" }, { "start": 318, "end": 344, "text": "Blouw and Eliasmith, 2018)", "ref_id": "BIBREF4" }, { "start": 373, "end": 392, "text": "(Kang et al., 2017)", "ref_id": "BIBREF27" }, { "start": 410, "end": 429, "text": "(Dong et al., 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Also related are commonsense knowledge bases (Espinosa and Lieberman, 2005; Speer and Havasi, 2012). Our work complements these existing resources by providing commonsense relations that are relatively less populated in previous work. For instance, ConceptNet contains only 25% of our events, and only 12% have relations that resemble intent and reaction. 
We present a more detailed comparison with ConceptNet in Appendix C.", "cite_spans": [ { "start": 45, "end": 75, "text": "(Espinosa and Lieberman, 2005;", "ref_id": "BIBREF18" }, { "start": 76, "end": 99, "text": "Speer and Havasi, 2012)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We introduced a new corpus, task, and model for performing commonsense inference on textually described everyday events, focusing on stereotypical intents and reactions of people involved in the events. Our corpus supports learning representations over a diverse range of events and reasoning about the likely intents and reactions of previously unseen events. We also demonstrate that such inference can help reveal implicit gender bias in movie scripts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://tinyurl.com/event2mind", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We compiled the list of idiomatic verb phrases by cross-referencing between Wiktionary's English idioms category and the Wiktionary English verbs categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We only consider content word categories: 'Core Drives", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their insightful comments. We also thank xlab members at the University of Washington, Martha Palmer, Tim O'Gorman, Susan Windisch Brown, Ghazaleh Kazeminejad as well as other members at the University of Colorado at Boulder for many helpful comments for our development of the annotation pipeline. 
This work was supported in part by National Science Foundation Graduate Research Fellowship Program under grant DGE-1256082, NSF grant IIS-1714566, and the DARPA CwC program through ARO (W911NF-15-1-0543).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Key female characters in film have more to talk about besides men: Automating the bechdel test", "authors": [ { "first": "Apoorv", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Jiehan", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Kamath", "suffix": "" }, { "first": "Sriramkumar", "middle": [], "last": "Balasubramanian", "suffix": "" }, { "first": "Shirin Ann", "middle": [], "last": "Dey", "suffix": "" } ], "year": 2015, "venue": "NAACL", "volume": "", "issue": "", "pages": "830--840", "other_ids": {}, "num": null, "urls": [], "raw_text": "Apoorv Agarwal, Jiehan Zheng, Shruti Kamath, Sri- ramkumar Balasubramanian, and Shirin Ann Dey. 2015. Key female characters in film have more to talk about besides men: Automating the bechdel test. In NAACL, pages 830-840, Denver, Colorado. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The berkeley framenet project", "authors": [ { "first": "Collin", "middle": [ "F" ], "last": "Baker", "suffix": "" }, { "first": "Charles", "middle": [ "J" ], "last": "Fillmore", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Lowe", "suffix": "" } ], "year": 1998, "venue": "COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In COLING- ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Mean girls? 
The influence of gender portrayals in teen movies on emerging adults' gender-based attitudes and beliefs", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Behm", "suffix": "" }, { "first": "-", "middle": [], "last": "Morawitz", "suffix": "" }, { "first": "Dana", "middle": [ "E" ], "last": "Mastro", "suffix": "" } ], "year": 2008, "venue": "Journalism & Mass Communication Quarterly", "volume": "85", "issue": "1", "pages": "131--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elizabeth Behm-Morawitz and Dana E Mastro. 2008. Mean girls? The influence of gender portrayals in teen movies on emerging adults' gender-based atti- tudes and beliefs. Journalism & Mass Communica- tion Quarterly, 85(1):131-146.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Using neural networks to generate inferential roles for natural language", "authors": [ { "first": "Peter", "middle": [], "last": "Blouw", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Eliasmith", "suffix": "" } ], "year": 2018, "venue": "Frontiers in Psychology", "volume": "8", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3389/fpsyg.2017.02335" ] }, "num": null, "urls": [], "raw_text": "Peter Blouw and Chris Eliasmith. 2018. Using neu- ral networks to generate inferential roles for natural language. Frontiers in Psychology, 8:2335.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Recognising textual entailment with robust logical inference", "authors": [ { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" }, { "first": "Katja", "middle": [], "last": "Markert", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Bos and Katja Markert. 2005. Recognising tex- tual entailment with robust logical inference. 
In MLCW.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In EMNLP.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Generating sentences from a continuous space", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "", "middle": [], "last": "Vinyals", "suffix": "" } ], "year": 2016, "venue": "CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, An- drew M. Dai, Rafal J\u00f3zefowicz, and Samy Ben- gio. 2016. Generating sentences from a continuous space. 
In CoNLL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "On the properties of neural machine translation: Encoder-decoder approaches", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "SSST@EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In SSST@EMNLP.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "+/-effectwordnet: Sense-level lexicon acquisition for opinion inference", "authors": [ { "first": "Yoonjung", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoonjung Choi and Janyce Wiebe. 2014. +/- effectwordnet: Sense-level lexicon acquisition for opinion inference. In EMNLP.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Content analysis of gender roles in media: Where are we now and where should we go?", "authors": [ { "first": "L", "middle": [], "last": "Rebecca", "suffix": "" }, { "first": "", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2011, "venue": "Sex Roles", "volume": "64", "issue": "3-4", "pages": "290--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca L Collins. 2011. Content analysis of gender roles in media: Where are we now and where should we go? 
Sex Roles, 64(3-4):290-298.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The PASCAL recognising textual entailment challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2006, "venue": "Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment", "volume": "", "issue": "", "pages": "177--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges. Eval- uating Predictive Uncertainty, Visual Object Classi- fication, and Recognising Textual Entailment, pages 177-190. Springer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Frame-semantic parsing", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Desai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Schneider", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2014, "venue": "Computational Linguistics", "volume": "40", "issue": "1", "pages": "9--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dipanjan Das, Desai Chen, Andr\u00e9 F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguis- tics, 40(1):9-56.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Why is an event affective? 
Classifying affective events based on human needs", "authors": [ { "first": "Haibo", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2017, "venue": "AAAI Workshop on Affective Content Analysis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haibo Ding, Tianyu Jiang, and Ellen Riloff. 2017. Why is an event affective? Classifying affective events based on human needs. In AAAI Workshop on Affective Content Analysis.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Acquiring knowledge of affective events from blogs using label propagation", "authors": [ { "first": "Haibo", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2016, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haibo Ding and Ellen Riloff. 2016. Acquiring knowl- edge of affective events from blogs using label prop- agation. In AAAI.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Weakly supervised induction of affective events by optimizing semantic consistency", "authors": [ { "first": "Haibo", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haibo Ding and Ellen Riloff. 2018. Weakly supervised induction of affective events by optimizing semantic consistency. 
In AAAI.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning to paraphrase for question answering", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In EMNLP.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Gender role portrayal and the Disney princesses", "authors": [ { "first": "Dawn", "middle": [ "Elizabeth" ], "last": "England", "suffix": "" }, { "first": "Lara", "middle": [], "last": "Descartes", "suffix": "" }, { "first": "Melissa", "middle": [ "A" ], "last": "Collier-Meek", "suffix": "" } ], "year": 2011, "venue": "Sex roles", "volume": "64", "issue": "7-8", "pages": "555--567", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dawn Elizabeth England, Lara Descartes, and Melissa A Collier-Meek. 2011. Gender role por- trayal and the Disney princesses. Sex roles, 64(7- 8):555-567.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Eventnet: Inferring temporal relations between commonsense events", "authors": [ { "first": "H", "middle": [], "last": "Jos\u00e9", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Espinosa", "suffix": "" }, { "first": "", "middle": [], "last": "Lieberman", "suffix": "" } ], "year": 2005, "venue": "MICAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jos\u00e9 H. Espinosa and Henry Lieberman. 2005. Event- net: Inferring temporal relations between common- sense events. 
In MICAI.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Facebook Moves to Ban Private Gun Sales on its Site and Instagram", "authors": [ { "first": "Vindu", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Isaac", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vindu Goel and Mike Isaac. 2016. Face- book Moves to Ban Private Gun Sales on its Site and Instagram. https://www. nytimes.com/2016/01/30/technology/ facebook-gun-sales-ban.html. Ac- cessed: 2018-02-19.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A dataset of syntactic-ngrams over time from a very large corpus of english books", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Orwant", "suffix": "" } ], "year": 2013, "venue": "SEM2013", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of english books. In SEM2013.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sto-ryUpgrade: finding stories in internet weblogs", "authors": [ { "first": "S", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Reid", "middle": [], "last": "Gordon", "suffix": "" }, { "first": "", "middle": [], "last": "Swanson", "suffix": "" } ], "year": 2008, "venue": "ICWSM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew S Gordon and Reid Swanson. 2008. Sto- ryUpgrade: finding stories in internet weblogs. 
In ICWSM.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Movie script summarization as graph-based scene extraction", "authors": [ { "first": "Philip", "middle": [], "last": "John Gorinski", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2015, "venue": "NAACL", "volume": "", "issue": "", "pages": "1066--1076", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip John Gorinski and Mirella Lapata. 2015. Movie script summarization as graph-based scene extrac- tion. In NAACL, pages 1066-1076.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Spin: Lexical semantics, transitivity, and the identification of implicit sentiment", "authors": [ { "first": "Stephan", "middle": [ "Charles" ], "last": "Greene", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Charles Greene. 2007. Spin: Lexical seman- tics, transitivity, and the identification of implicit sentiment.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A question answering approach for emotion cause extraction", "authors": [ { "first": "Lin", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Jiannan", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "" }, { "first": "Ruifeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Jiachen", "middle": [], "last": "Du", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A question answering ap- proach for emotion cause extraction. 
In EMNLP.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The verbcorner project: Toward an empirically-based semantic decomposition of verbs", "authors": [ { "first": "Joshua", "middle": [ "K" ], "last": "Hartshorne", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Bonial", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshua K. Hartshorne, Claire Bonial, and Martha Palmer. 2013. The verbcorner project: Toward an empirically-based semantic decomposition of verbs. In EMNLP.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A simple sequentially rejective multiple test procedure", "authors": [ { "first": "Sture", "middle": [], "last": "Holm", "suffix": "" } ], "year": 1979, "venue": "Scandinavian journal of statistics", "volume": "", "issue": "", "pages": "65--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sture Holm. 1979. A simple sequentially rejective multiple test procedure. Scandinavian journal of statistics, pages 65-70.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Detecting and explaining causes from text for a time series event", "authors": [ { "first": "Dongyeop", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Gangal", "suffix": "" }, { "first": "Ang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dongyeop Kang, Varun Gangal, Ang Lu, Zheng Chen, and Eduard H. Hovy. 2017. Detecting and explain- ing causes from text for a time series event. 
In EMNLP.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D Manning. 2003. Accu- rate unlexicalized parsing. In ACL.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Gradient-based learning applied to document recognition", "authors": [ { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Haffner", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the IEEE", "volume": "86", "issue": "11", "pages": "2278--2324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. 
Proceedings of the IEEE, 86(11):2278-2324.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Commonsense knowledge base completion", "authors": [ { "first": "Xiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Aynaz", "middle": [], "last": "Taheri", "suffix": "" }, { "first": "Lifu", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In ACL.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Analyzing gender stereotyping in bollywood movies", "authors": [ { "first": "Nishtha", "middle": [], "last": "Madaan", "suffix": "" }, { "first": "Sameep", "middle": [], "last": "Mehta", "suffix": "" }, { "first": "Taneea", "middle": [ "S" ], "last": "Agrawaal", "suffix": "" }, { "first": "Vrinda", "middle": [], "last": "Malhotra", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nishtha Madaan, Sameep Mehta, Taneea S. Agrawaal, Vrinda Malhotra, Aditi Aggarwal, and Mayank Sax- ena. 2017. Analyzing gender stereotyping in bolly- wood movies. CoRR, abs/1710.04117.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Did it happen? the pragmatic complexity of veridicality assessment. Computational Linguistics", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2012, "venue": "", "volume": "38", "issue": "", "pages": "301--333", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe, Christopher D. Man- ning, and Christopher Potts. 2012. 
Did it happen? the pragmatic complexity of veridicality assessment. Computational Linguistics, 38:301-333.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Gregory", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A corpus and cloze evaluation for deeper understanding of commonsense stories", "authors": [ { "first": "Nasrin", "middle": [], "last": "Mostafazadeh", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "Pushmeet", "middle": [], "last": "Kohli", "suffix": "" }, { "first": "James", "middle": [], "last": "Allen", "suffix": "" } ], "year": 2016, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A cor- pus and cloze evaluation for deeper understanding of commonsense stories. 
In NAACL.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification. In ACL.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "What women and men should be, shouldn't be, are allowed to be, and don't have to be: The contents of prescriptive gender stereotypes", "authors": [ { "first": "A", "middle": [], "last": "Deborah", "suffix": "" }, { "first": "Erica", "middle": [], "last": "Prentice", "suffix": "" }, { "first": "", "middle": [], "last": "Carranza", "suffix": "" } ], "year": 2002, "venue": "Psychology of women quarterly", "volume": "26", "issue": "4", "pages": "269--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deborah A Prentice and Erica Carranza. 2002. What women and men should be, shouldn't be, are al- lowed to be, and don't have to be: The contents of prescriptive gender stereotypes. 
Psychology of women quarterly, 26(4):269-281.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Linguistic analysis of differences in portrayal of movie characters", "authors": [ { "first": "Anil", "middle": [], "last": "Ramakrishna", "suffix": "" }, { "first": "Nikolaos", "middle": [], "last": "Victor R Mart\u00ednez", "suffix": "" }, { "first": "Karan", "middle": [], "last": "Malandrakis", "suffix": "" }, { "first": "Shrikanth", "middle": [], "last": "Singla", "suffix": "" }, { "first": "", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "1669--1678", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anil Ramakrishna, Victor R Mart\u00ednez, Nikolaos Ma- landrakis, Karan Singla, and Shrikanth Narayanan. 2017. Linguistic analysis of differences in portrayal of movie characters. In ACL, pages 1669-1678, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Connotation frames: A data-driven investigation", "authors": [ { "first": "Sameer", "middle": [], "last": "Hannah Rashkin", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Singh", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hannah Rashkin, Sameer Singh, and Yejin Choi. 2016. Connotation frames: A data-driven investigation. 
In ACL.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Learning lexicofunctional patterns for first-person affect", "authors": [ { "first": "Lena", "middle": [], "last": "Reed", "suffix": "" }, { "first": "Jiaqi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Anand", "suffix": "" }, { "first": "Marilyn", "middle": [ "A" ], "last": "Walker", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lena Reed, JiaQi Wu, Shereen Oraby, Pranav Anand, and Marilyn A. Walker. 2017. Learning lexico- functional patterns for first-person affect. In ACL.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Semantic proto-roles. TACL", "authors": [ { "first": "Drew", "middle": [], "last": "Reisinger", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Ferraro", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Harman", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Rawlins", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2015, "venue": "", "volume": "3", "issue": "", "pages": "475--488", "other_ids": {}, "num": null, "urls": [], "raw_text": "Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. TACL, 3:475- 488.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Semeval-2015 task 9: Clipeval implicit polarity of events. 
In SemEval@NAACL-HLT", "authors": [ { "first": "Irene", "middle": [], "last": "Russo", "suffix": "" }, { "first": "Tommaso", "middle": [], "last": "Caselli", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irene Russo, Tommaso Caselli, and Carlo Strapparava. 2015. Semeval-2015 task 9: Clipeval implicit polar- ity of events. In SemEval@NAACL-HLT.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Connotation frames of power and agency in modern films", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Marcella", "middle": [ "Cindy" ], "last": "Prasetio", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "2329--2334", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maarten Sap, Marcella Cindy Prasetio, Ari Holtzman, Hannah Rashkin, and Yejin Choi. 2017. Connota- tion frames of power and agency in modern films. In EMNLP, pages 2329-2334.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Verbnet overview, extensions, mappings and applications", "authors": [ { "first": "Karin", "middle": [ "Kipper" ], "last": "Schuler", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "Susan", "middle": [ "Windisch" ], "last": "Brown", "suffix": "" } ], "year": 2009, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karin Kipper Schuler, Anna Korhonen, and Su- san Windisch Brown. 2009. Verbnet overview, ex- tensions, mappings and applications. 
In NAACL.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Representing general relational knowledge in conceptnet 5", "authors": [ { "first": "Robert", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" } ], "year": 2012, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Speer and Catherine Havasi. 2012. Represent- ing general relational knowledge in conceptnet 5. In LREC.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "The psychological meaning of words: LIWC and computerized text analysis methods", "authors": [ { "first": "R", "middle": [], "last": "Yla", "suffix": "" }, { "first": "James", "middle": [ "W" ], "last": "Tausczik", "suffix": "" }, { "first": "", "middle": [], "last": "Pennebaker", "suffix": "" } ], "year": 2016, "venue": "J. Lang. Soc. Psychol", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yla R Tausczik and James W Pennebaker. 2016. The psychological meaning of words: LIWC and com- puterized text analysis methods. J. Lang. Soc. Psy- chol.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Using narrative functions as a heuristic for relevance in story understanding", "authors": [ { "first": "Emmett", "middle": [], "last": "Tomai", "suffix": "" }, { "first": "Ken", "middle": [], "last": "Forbus", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Intelligent Narrative Technologies III Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmett Tomai and Ken Forbus. 2010. Using narrative functions as a heuristic for relevance in story under- standing. In Proceedings of the Intelligent Narrative Technologies III Workshop, page 9. 
ACM.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Acquiring a dictionary of emotion-provoking events", "authors": [ { "first": "Hoa", "middle": [ "Trong" ], "last": "Vu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Sakriani", "middle": [], "last": "Sakti", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Toda", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Nakamura", "suffix": "" } ], "year": 2014, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hoa Trong Vu, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Acquir- ing a dictionary of emotion-provoking events. In EACL.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Universal decompositional semantics on universal dependencies", "authors": [ { "first": "Aaron", "middle": [], "last": "Steven White", "suffix": "" }, { "first": "Drew", "middle": [], "last": "Reisinger", "suffix": "" }, { "first": "Keisuke", "middle": [], "last": "Sakaguchi", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Vieira", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Rawlins", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2016, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aaron Steven White, Drew Reisinger, Keisuke Sak- aguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on universal dependencies. 
In EMNLP.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "From paraphrase database to compositional paraphrase model and back", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Livescu", "suffix": "" } ], "year": 2015, "venue": "TACL", "volume": "3", "issue": "", "pages": "345--358", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compo- sitional paraphrase model and back. TACL, 3:345- 358.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Ordinal common-sense inference", "authors": [ { "first": "Sheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2017, "venue": "TACL", "volume": "5", "issue": "", "pages": "379--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sheng Zhang, Rachel Rudinger, Kevin Duh, and Ben- jamin Van Durme. 2017. Ordinal common-sense in- ference. TACL, 5:379-395.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Recall @ 10 (%) on different subsets of the development set for intents, PersonX's reactions, and other people's reactions, using the BiRNN 100d model. \"Full dev\" represents the recall on the entire development dataset.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Two scene description snippets from Juno", "type_str": "figure", "num": null, "uris": null }, "TABREF3": { "num": null, "html": null, "text": "Data and annotation agreement statistics for our new phrasal inference corpus. 
Each event is annotated by three crowdworkers.", "type_str": "table", "content": "" }, "TABREF4": { "num": null, "html": null, "text": "EventPersonX punches PersonY's lights out 1. Does this event make sense enough for you to answer questions 2-5?(Or does it have too many meanings?)Yes, can answer No, can't answer or has too many meanings", "type_str": "table", "content": "
Before the event
2. Does PersonX willingly cause this event?
Yes
No
a). Why?
(Try to describe without reusing words from the event)
Because PersonX wants to (be) ... [write a reason]
[write another reason -optional]
[write another reason -optional]
" }, "TABREF5": { "num": null, "html": null, "text": "Coreference among Person variables With the typed Person variable setup, events involving multiple people can have multiple meanings depending on coreference interpretation (e.g.,", "type_str": "table", "content": "
PersonX eats PersonY's lunch has
very different mental state implications from
PersonX eats PersonX's lunch).
" }, "TABREF10": { "num": null, "html": null, "text": ").3 The intent and reaction categories are then", "type_str": "table", "content": "
PersonX hugs ___, planting
a smooch on PersonY's cheek
Juno laughs and hugs her
father, planting a smooch
on his cheek.
PersonX sits on PersonX's
bed, lost in thought
Reaction
Vivian sits on her bed,
lost in thought. Her
bags are packed, ...
" }, "TABREF12": { "num": null, "html": null, "text": "Select LIWC categories correlated with gender. All results are significant when corrected for multiple comparisons at p < 0.001, except \u2020 p < 0.05 and \u2021 p < 0.01.", "type_str": "table", "content": "" } } } }