|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:06:02.176346Z" |
|
}, |
|
"title": "Script Induction as Association Rule Mining", |
|
"authors": [ |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Belyy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "vandurme@jhu.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We show that the count-based Script Induction models of Chambers and Jurafsky (2008) and Jans et al. (2012) can be unified in a general framework of narrative chain likelihood maximization. We provide efficient algorithms based on Association Rule Mining (ARM) and weighted set cover that can discover interesting patterns in the training data and combine them in a reliable and explainable way to predict the missing event. The proposed method, unlike the prior work, does not assume full conditional independence and makes use of higher-order count statistics. We perform the ablation study and conclude that the inductive biases introduced by ARM are conducive to better performance on the narrative cloze test.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We show that the count-based Script Induction models of Chambers and Jurafsky (2008) and Jans et al. (2012) can be unified in a general framework of narrative chain likelihood maximization. We provide efficient algorithms based on Association Rule Mining (ARM) and weighted set cover that can discover interesting patterns in the training data and combine them in a reliable and explainable way to predict the missing event. The proposed method, unlike the prior work, does not assume full conditional independence and makes use of higher-order count statistics. We perform the ablation study and conclude that the inductive biases introduced by ARM are conducive to better performance on the narrative cloze test.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The goal of this paper is to demonstrate how the efforts in Script Induction (SI), up until recently dominated by statistical approaches (Chambers and Jurafsky, 2008; Jans et al., 2012; Pichotta and Mooney, 2014; Rudinger et al., 2015a,b) , can be productively framed and extended as a special case of Association Rule Mining (ARM), a wellestablished problem in Data Mining (Agrawal et al., 1993 (Agrawal et al., , 1994 Han et al., 2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 166, |
|
"text": "(Chambers and Jurafsky, 2008;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 167, |
|
"end": 185, |
|
"text": "Jans et al., 2012;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 212, |
|
"text": "Pichotta and Mooney, 2014;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 238, |
|
"text": "Rudinger et al., 2015a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 395, |
|
"text": "(Agrawal et al., 1993", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 419, |
|
"text": "(Agrawal et al., , 1994", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 437, |
|
"text": "Han et al., 2000)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We start by introducing SI and ARM and then demonstrate a unification under a general chain likelihood maximization framework. We discuss how the existing count-based SI models tackle this maximization problem using na\u00efve Bayes assumptions. We provide an alternative: mining higherorder count statistics using ARM and picking the most reliable rules using the weighted set cover algorithm. We validate the proposed approach and demonstrate improved performance over other count-based approaches. We conclude with a discussion on the implications and potential extensions of the proposed framework.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Transaction t Narrative chain Itemset I Co-occurring events sup({i 1 , i 2 }) C(i 1 , i 2 ) int({a} \u2192 {e}) P (a|e) = C(a,e) C( * ,e) sup(I), |I| > 2 Eq. 5 int(A \u2192 {e}), |A| > 1 Eq. 12 Table 1 : Mapping between ARM and Count-based SI terminology. Bolded are contributions of this paper. Namely, we make use of frequent itemsets and interesting rules, or higher-order count statistics that can be efficiently mined and used in the narrative cloze test.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 191, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ARM term SI term", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our intent in this work is not to establish new state of the art results in the area of SI. Rather, our primary contribution is retrospective, drawing a connection between a sub-topic in Computational Linguistics (CL) with a major pre-existing area of Computer Science, i.e., Data Mining. In the case one approached SI through counting co-occurrence statistics, then the existing tools of ARM lead naturally to solutions that had not been previously considered within CL.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ARM term SI term", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ARM is a prevalent problem in Data Mining, introduced by Agrawal et al. (1993) . The task is often referred to as market basket analysis due to its widespread usage for discovering interesting patterns in consumer purchases. The applicability of ARM extends far beyond this specific scenario, where examples of ARM usage for NLP applications include detecting annotation inconsistencies (Nov\u00e1k and Raz\u00edmov\u00e1, 2009) , discovering strongly-related events (Shibata and Kurohashi, 2011) , adding missing knowledge to the KB (Gal\u00e1rraga et al., 2013) , as well as understanding clinical narratives (Boytcheva et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 78, |
|
"text": "Agrawal et al. (1993)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 413, |
|
"text": "(Nov\u00e1k and Raz\u00edmov\u00e1, 2009)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 481, |
|
"text": "(Shibata and Kurohashi, 2011)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 519, |
|
"end": 543, |
|
"text": "(Gal\u00e1rraga et al., 2013)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 615, |
|
"text": "(Boytcheva et al., 2017)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Association Rule Mining", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "ARM aims to extract interesting patterns from a transactional database D. A transaction is a set of items, and a non-empty subset of a transaction is called an itemset. We define support as the number of transactions we observe an itemset I in:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Association Rule Mining", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "sup(I) = |{t|t \u2208 D, I \u2286 t}|.", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Association Rule Mining", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We say that an itemset I is frequent, if its support (on a given database D) exceeeds a user-defined threshold t sup : sup(I) \u2265 t sup .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Association Rule Mining", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A pair of itemsets A, B is called a rule if A \u2229 B = \u2205 and is denoted as A \u2192 B. We say that a rule A \u2192 B is interesting if 1) both A and B are frequent, 2) the interestingess of the rule exceeds a user-defined threshold t int : int(A \u2192 B) \u2265 t int . The definition of the interestingness function int(\u2022) is problem-specific.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Association Rule Mining", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "ARM is thus concerned with:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Association Rule Mining", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "1. mining frequent itemsets from a transactional database, 2. discovering interesting rules from frequent itemsets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Association Rule Mining", |
|
"sec_num": "2.1" |
|
}, |
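
{

"text": "To make the two ARM sub-tasks concrete, the following is a minimal Python sketch (ours, not part of the original ARM literature) that computes support as in Eq. 1 and mines frequent itemsets and interesting rules by brute force over a toy transaction database; the thresholds and the choice of confidence as int(\u2022) are illustrative.\n\nfrom itertools import combinations\n\n# Toy transactional database D: each transaction is a set of items.\nD = [{'milk', 'bread'}, {'milk', 'bread', 'eggs'}, {'bread', 'eggs'}, {'milk', 'eggs'}]\n\ndef sup(itemset):\n    # Eq. 1: the number of transactions containing the itemset.\n    return sum(1 for t in D if itemset <= t)\n\nt_sup, t_int = 2, 0.6  # illustrative user-defined thresholds\nitems = set().union(*D)\n\n# 1. Mine frequent itemsets (brute force; Apriori/FP-growth scale better).\nfrequent = [frozenset(c) for k in range(1, len(items) + 1)\n            for c in combinations(items, k) if sup(set(c)) >= t_sup]\n\n# 2. Discover interesting rules A -> B over frequent, disjoint itemsets.\nfor A in frequent:\n    for B in frequent:\n        if not A & B and sup(A | B) / sup(A) >= t_int:\n            print(set(A), '->', set(B), round(sup(A | B) / sup(A), 2))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Association Rule Mining",

"sec_num": "2.1"

},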
|
{ |
|
"text": "The concept of script knowledge in AI, along with early knowledge-based methods to learn scripts were introduced by Minsky (1974); Schank and Abelson (1977) ; Mooney and DeJong (1985) . With the rise of statistical methods, the next generation of algorithms made use of co-occurrence statistics and distributional semantics for script learning Jurafsky, 2008, 2009; Jans et al., 2012; Pichotta and Mooney, 2014) . Our primary focus is on drawing connections between ARM and this body of work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 156, |
|
"text": "Schank and Abelson (1977)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 183, |
|
"text": "Mooney and DeJong (1985)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 365, |
|
"text": "Jurafsky, 2008, 2009;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 384, |
|
"text": "Jans et al., 2012;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 411, |
|
"text": "Pichotta and Mooney, 2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Script Induction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Following Chambers and Jurafsky (2008) , we define a narrative chain as \"a partially ordered set of narrative events that share a common actor\", where the partial ordering typically represents temporal or causal order of events, and a narrative event is \"a tuple of an event and its participants, represented as typed dependencies\". Formally, we define a narrative event e := (v, d), where v is a verb lemma, and d is a dependency arc between the verb and the common actor (dobj or nsubj). An example of a narrative chain is given in Figure 1 . SI is thus concerned with:", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 38, |
|
"text": "Chambers and Jurafsky (2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 534, |
|
"end": 542, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Script Induction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "1. automatic mining of commonly co-occurring sets of narrative events from text, 2. partially ordering those sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Script Induction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The narrative cloze test (Chambers and Jurafsky, 2008 ) is a standard extrinsic evaluation procedure for Task 1 of SI. In this test, a sequence of narrative events is automatically extracted from a document, and one event is removed; the goal is to predict the missing event. Formally, given an incomplete narrative chain {e 1 , e 2 , . . . , e L } and an insertion point k \u2208 [L], we would like to predict the most likely missing event\u00ea to complete the chain:", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 53, |
|
"text": "(Chambers and Jurafsky, 2008", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Script Induction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "{e 1 , e 2 , . . . , e k ,\u00ea, e k+1 , . . . e L }.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Script Induction", |
|
"sec_num": "2.2" |
|
}, |
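
{

"text": "As a concrete illustration (our toy example, not from the paper), a narrative chain and a cloze instance can be represented as follows, with each event a (verb lemma, dependency) pair for a shared protagonist:\n\nfrom typing import List, Tuple\n\nEvent = Tuple[str, str]  # narrative event e = (verb lemma, dependency arc)\n\n# An illustrative chain with a criminal suspect as the common actor.\nchain: List[Event] = [('arrest', 'dobj'), ('charge', 'dobj'),\n                      ('convict', 'dobj'), ('sentence', 'dobj')]\n\ndef make_cloze(chain: List[Event], k: int):\n    # Hide the event at position k; a model sees the incomplete chain\n    # and the insertion point, and must predict the hidden event.\n    return chain[:k] + chain[k + 1:], chain[k], k\n\nincomplete, gold, k = make_cloze(chain, 2)\nprint(incomplete, '-> predict', gold, 'at position', k)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Script Induction",

"sec_num": "2.2"

},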
|
{ |
|
"text": "Although the recent work in SI (Rudinger et al., 2015b; Pichotta and Mooney, 2016; Peng and Roth, 2016; Weber et al., 2018) has focused on a Language Modeling (LM) approach for the narrative cloze test, it is fundamentally different from ARM in that it makes use of the total ordering of events and is thus incomparable to ARM, which does not assume any ordering of events within a chain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 55, |
|
"text": "(Rudinger et al., 2015b;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 56, |
|
"end": 82, |
|
"text": "Pichotta and Mooney, 2016;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 83, |
|
"end": 103, |
|
"text": "Peng and Roth, 2016;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 123, |
|
"text": "Weber et al., 2018)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Script Induction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In the next section, we survey two of the most influential count-based SI models, showing how each of them is related to ARM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Script Induction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "3 Count-based Script Induction", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Script Induction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The original model for this task by Chambers and Jurafsky (2008) is based on the pointwise mutual information (PMI) between events.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 64, |
|
"text": "Chambers and Jurafsky (2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "pmi(e 1 , e 2 ) \u221d log C(e 1 , e 2 ) C(e 1 , * )C( * , e 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", 2where C(e 1 , e 2 ) is defined as the number of narrative chains where e 1 and e 2 both occurred and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "C(e, * ) := e \u2208E C(e, e ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where E is a fixed vocabulary of narrative events. The model selects the missing event\u00ea in the narrative cloze test according to the scor\u00ea", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "e = arg max e\u2208E L i=1 pmi(e, e i ),", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "assuming that the missing event\u00ea is inserted at the end of the existing chain (k = L).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "From 2and 3we observe that", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "e = arg max e\u2208E L i=1 pmi(e, e i ) = arg max e\u2208E L i=1 log C(e, e i ) C(e, * )C( * , e i ) = arg max e\u2208E log L i=1 C(e, e i ) C(e, * ) = arg max e\u2208E log L i=1 P (e i |e) = arg max e\u2208E L i=1 P (e i |e).", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "One way to interpret Eq. 4 is to say that it was obtained from the following model with the na\u00efve Bayes assumption: e = arg max e\u2208E P (e 1 , e 2 , . . . , e L |e).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Importantly, in the above equation, no assumptions are made about the order in which events e 1 , . . . , e L happened and we treat the narrative chain as a document, where individual events are features (the \"bag of events\" assumption).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unordered PMI model", |
|
"sec_num": "3.1" |
|
}, |
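
{

"text": "A minimal sketch of Eqs. 2\u20133 (ours; the counts and vocabulary are illustrative) makes the model concrete:\n\nimport math\nfrom collections import Counter\n\n# Illustrative unordered pairwise counts C(e1, e2) and vocabulary E.\nC = Counter({('arrest', 'charge'): 30, ('arrest', 'convict'): 10,\n             ('charge', 'convict'): 20, ('arrest', 'say'): 5})\nE = {'arrest', 'charge', 'convict', 'say'}\n\ndef count(e1, e2):\n    return C[(e1, e2)] + C[(e2, e1)]\n\ndef marginal(e):  # C(e, *)\n    return sum(count(e, x) for x in E if x != e)\n\ndef pmi(e1, e2):\n    # Eq. 2, up to an additive constant.\n    c = count(e1, e2)\n    return math.log(c / (marginal(e1) * marginal(e2))) if c else float('-inf')\n\ndef predict(chain):\n    # Eq. 3: sum PMI against every observed event, arg max over candidates.\n    return max((e for e in E if e not in chain),\n               key=lambda e: sum(pmi(e, ei) for ei in chain))\n\nprint(predict(['arrest', 'charge']))  # -> 'convict' under these counts",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Unordered PMI model",

"sec_num": "3.1"

},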
|
{ |
|
"text": "The bigram probability model was proposed by Jans et al. (2012) and was also used by Pichotta and Mooney (2014) . It utilizes positional information between co-occurring events. It selects the missing event\u00ea according to the scor\u00ea", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 63, |
|
"text": "Jans et al. (2012)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 85, |
|
"end": 111, |
|
"text": "Pichotta and Mooney (2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bigram Probability model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "e = arg max e\u2208E k i=1 P (e|e i ) \u2022 L i=k+1 P (e i |e) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bigram Probability model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where k is the insertion point of the missing event e, P (e 2 |e 1 ) = C ord (e 1 ,e 2 ) C ord (e 1 , * ) , and counts C ord (e 1 , e 2 ) are ordered, e.g. C ord (e 1 , e 2 ) = C ord (e 2 , e 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bigram Probability model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Similarly to the Unordered PMI model, we can relax the conditional independence assumption. However, to apply Bayes' theorem, we would need (e 1 , e 2 ) and (e 2 , e 1 ) to be the same events in the outcome space, thus we have to assume unordered counts: C(e 1 , e 2 ) = C ord (e 1 , e 2 ) + C ord (e 2 , e 1 ). Proceeding with this, we get: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bigram Probability model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "e = arg max e\u2208E k i=1 P (e|e i ) \u2022 L i=k+1 P (e i |e) = arg max", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bigram Probability model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where the last equality is obtained by relaxing the full conditional independence assumption (similar to Eq. 5). It follows that the Bigram Probability model with unordered counts is exactly the Unordered PMI model augmented with the prior probability of a missing event multiplied by its position in a chain. Additionally, note that if k = 1, this model is equivalent to maximizing the posterior probability of a missing event (rather than the likelihood of a narrative chain in Eq. 5): e = arg max ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bigram Probability model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Similar to Eq. 5, we view the narrative chain e 1 , . . . , e n as a set, and thus Eq. 6 is not a language model in the traditional NLP sense.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bigram Probability model", |
|
"sec_num": "3.2" |
|
}, |
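
{

"text": "A matching sketch of the bigram score (ours; the ordered counts are illustrative) highlights how the insertion point k splits the two products of Eq. 6:\n\nfrom collections import Counter\n\n# Illustrative ordered counts C_ord(e1, e2): e1 observed before e2.\nC_ord = Counter({('arrest', 'charge'): 25, ('charge', 'convict'): 18,\n                 ('arrest', 'convict'): 7, ('convict', 'sentence'): 12})\nE = {'arrest', 'charge', 'convict', 'sentence'}\n\ndef p(e2, e1):  # P(e2|e1) = C_ord(e1, e2) / C_ord(e1, *)\n    total = sum(C_ord[(e1, x)] for x in E)\n    return C_ord[(e1, e2)] / total if total else 0.0\n\ndef bigram_score(e, chain, k):\n    # Events before the insertion point condition the candidate,\n    # events after it are predicted from the candidate.\n    score = 1.0\n    for ei in chain[:k]:\n        score *= p(e, ei)\n    for ei in chain[k:]:\n        score *= p(ei, e)\n    return score\n\nchain, k = ['arrest', 'convict'], 1\nprint(max((e for e in E if e not in chain),\n          key=lambda e: bigram_score(e, chain, k)))  # -> 'charge'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Bigram Probability model",

"sec_num": "3.2"

},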
|
{ |
|
"text": "The models defined by Eqs. 5, 6, and 7 are hard to compute directly: without simplifying assumptions, they would require huge number of parameters and large training sets (Jurafsky and Martin, 2019) . A common approach in the existing Count-based SI work is to assume full conditional independence. A viable and less restrictive alternative, as we show in this section, is estimating higher-order count statistics via mining association rules (Section 4.1) and combining the most confident rules to predict the missing event with a simple weighted set cover algorithm (Section 4.2).", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 198, |
|
"text": "(Jurafsky and Martin, 2019)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SI as ARM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "More formally, during the training phase, we would like to populate the set of interesting rules S = {S \u2192 {e}}, whose antecedents are sub-sets of the event space S \u2282 E, and consequents are single events e, e \u2208 S. We denote as S e all the rules with the same consequent event e.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SI as ARM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "During the test phase, where we have an incomplete narrative chain {e 1 , e 2 , . . . , e L } and want to predict a missing event, we will use rules from S e to efficiently decompose P (e 1 , e 2 , . . . , e L |e) into", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SI as ARM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "P (S 1 |e) \u2022 P (S 2 |e) \u2022 . . . \u2022 P (S t |e)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SI as ARM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "for each candidate event e. Naturally, this means selecting a set of rules whose antecedents {S 1 , S 2 , . . . , S t } (we call this set a candidate cover) are pairwise disjoint (S i \u2229S j = \u2205 \u2200i, j \u2208 [t]), and cover the event chain fully (S 1 \u222a S 2 \u222a . . . \u222a S t = {e 1 , e 2 , . . . , e L }).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SI as ARM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To quantify the goodness of the decomposition, we define a score function for a candidate cover {S 1 , . . . , S t } and a candidate event e as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SI as ARM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "score(S 1 , S 2 , . . . , S t ; e) = t i=1 P (S i |e). (8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SI as ARM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For each candidate event e, we select the best candidate cover\u015c e according to the score function: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SI as ARM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "S e = arg max", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SI as ARM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In Section 4.1, we explain how the set of rules S is populated from the SI training corpus. In Section 4.2, we provide a randomized algorithm that solves problem 9 with a provably bounded error.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SI as ARM", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As discussed in Section 2.1, in order to discover the set of interesting rules S, we need to mine frequent itemsets first. This can be achieved by any frequent itemset mining algorithm, such as Apriori (Agrawal et al., 1994) , Eclat (Zaki, 2000) , or FP-growth (Han et al., 2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 224, |
|
"text": "(Agrawal et al., 1994)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 245, |
|
"text": "(Zaki, 2000)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 279, |
|
"text": "(Han et al., 2000)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Next, for the rule mining step we define an interestingness function int(S \u2192 E) over a rule S \u2192 E:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "int(S \u2192 E) = sup(S \u222a E) S sup(S \u222a E) ,", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where S ranges over all itemsets of size |S| and is disjoint with E. Note that int(S \u2192 E) provides a maximum likelihood estimate of P (S|E) for the probability space defined over sets of events, and sup(\u2022) is a generalization of the previously defined C(\u2022, \u2022) for event sets of size larger than two.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The denominator of (11) requires calculating the support over exponentially many itemsets. We can instead use the following simpler formula:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "wsup k (I) = t\u2208D |t| \u2212 |I| k \u2022 1 I\u2286t ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where D is a transactional database of narrative event chains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Lemma 1. end for 11: end for 12: Return S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Our intent is to use the above interestingness function to score rules from S that have a single event as a consequent, and thus Eq. 11 can be further simplified:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "int(S \u2192 {e}) = sup(S \u222a {e}) wsup |S| ({e}) .", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Assuming that for each rule S \u2192 {e} the antecedent is bounded in size and small, we can precompute wsup k ({e}) for each e \u2208 E and each k \u2208 [|S|] in a single pass over the database. Note also that wsup 0 (I) = sup(I) and thus wsup k (\u2022) is a generalization of support (1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
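
{

"text": "The following sketch (ours) implements wsup_k and the interestingness of Eq. 12 over a toy chain database, and checks that wsup_0 coincides with sup:\n\nfrom math import comb\n\n# Toy database of narrative chains, viewed as sets of events.\nD = [{'arrest', 'charge', 'convict'}, {'arrest', 'charge'},\n     {'arrest', 'convict', 'sentence'}, {'charge', 'convict'}]\n\ndef sup(I):\n    return sum(1 for t in D if I <= t)\n\ndef wsup(k, I):\n    # Sum, over transactions containing I, of the number of size-k\n    # antecedents inside the transaction that are disjoint from I.\n    return sum(comb(len(t) - len(I), k) for t in D if I <= t)\n\ndef interestingness(S, e):\n    # Eq. 12: int(S -> {e}) = sup(S \u222a {e}) / wsup_{|S|}({e}).\n    return sup(S | {e}) / wsup(len(S), {e})\n\nassert wsup(0, {'arrest'}) == sup({'arrest'})\nprint(interestingness({'arrest', 'charge'}, 'convict'))  # 0.5 here",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Mining interesting rules",

"sec_num": "4.1"

},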
|
{ |
|
"text": "Given an interestingness function, we can now proceed to mine interesting rules over frequent event sets. The rule mining process is shown in Algorithm 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "After a set of interesting rules S is populated, we can perform test-time inference on new narrative chains with Eqs. 9 and 10. To facilitate this, we frame the inference problem as the weighted set cover problem. The latter was known to be NPcomplete by Karp (1972) , but there is a simple greedy algorithm by Chvatal (1979) that provides an approximate solution. To make it applicable to the search problem 9, we will run it (for each candidate event e) on the set S, mined by Algorithm 1, with the following weight function:", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 266, |
|
"text": "Karp (1972)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 325, |
|
"text": "Chvatal (1979)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "w(S) = \u2212 ln int(S \u2192 {e}) = \u2212 ln P (S|e).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The following lemma provides a lower bound on the score of the candidate cover obtained by Algorithm 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Algorithm 2 Greedy weighted set cover 1: Input:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 A set of interesting rules S e , \u2022 A narrative chain e 1 , e 2 , . . . , e L .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "2: Output: An approximation (within a O(log L) factor) of the best cover {S 1 , S 2 , . . . , S t }. 3: Initialization: 4:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "U 0 = {e 1 , e 2 , . . . , e L } 5: t = 0 6: while U t = \u2205 do 7: t = t + 1 8: S t = arg max S \u2208Se |S \u2229U t\u22121 | w(S ) 9:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "U t = U t\u22121 \\S t 10: end while 11: Return {S 1 , S 2 , . . . , S t }.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
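
{

"text": "A minimal Python rendering of Algorithm 2 (ours; the rules and probabilities are illustrative, with w(S) = \u2212ln P(S|e)):\n\nimport math\n\ndef greedy_cover(rules, chain):\n    # rules: antecedent frozenset -> weight w(S); greedily pick the rule\n    # maximizing |S \u2229 U| / w(S) until the chain is covered.\n    U, cover = set(chain), []\n    while U:\n        best = max(rules, key=lambda S: len(S & U) / rules[S])\n        if not best & U:\n            return None  # the chain cannot be covered by these rules\n        cover.append(best)\n        U -= best\n    return cover\n\n# Illustrative rules S -> {e} for one candidate event e, with P(S|e).\nprobs = {frozenset({'arrest', 'charge'}): 0.4,\n         frozenset({'arrest'}): 0.5, frozenset({'sentence'}): 0.2}\nrules = {S: -math.log(p) for S, p in probs.items()}\n\ncover = greedy_cover(rules, ['arrest', 'charge', 'sentence'])\nscore = math.exp(-sum(rules[S] for S in cover))  # Eq. 8 as exp(-\u2211 w)\nprint([set(S) for S in cover], round(score, 3))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Mining interesting rules",

"sec_num": "4.1"

},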
|
{ |
|
"text": "4.2 Score estimation via weighted set cover Lemma 2. Algorithm 2 finds a candidate cover {S 1 , . . . , S t } for a narrative chain {e 1 , . . . , e L } and a candidate event e, such that score(S 1 , . . . , S t ; e) \u2265 OP T ln L+1 , where OP T is the score of the best candidate cover\u015c e . Proof. Chvatal (1979) showed that Algorithm 2 finds a weighted set cover", |
|
"cite_spans": [ |
|
{ |
|
"start": 297, |
|
"end": 311, |
|
"text": "Chvatal (1979)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "{S 1 , . . . , S t }, such that OP T cover \u2264 t i=1 w(S i ) \u2264 (ln L + 1)OP T cover .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Since the weight w(\u2022) is a negative log probability:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "t i=1 w(S i ) = \u2212 t i=1 ln P (S i |e) = \u2212 ln score(S 1 , . . . , S t ; e) \u2264 (ln L + 1)OP T cover .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "By exponentiating left and right-hand sides and noting that OP T = e \u2212OP Tcover (by definition of the weight and score functions), we get: score(S 1 , . . . , S t ; e) \u2265 e \u2212(ln L+1)OP Tcover", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2265 OP T ln L+1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "If we group the rules S \u2192 {e} by the consequent event and order by |S| w(S) within each group, then step 8 in Algorithm 2 becomes equivalent to iterating over ordered rules in S e . The overall running time to score the candidate event e is O(L + |S e |).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Additionally, O( e\u2208E |S e | log |S e |) preprocessing time is needed to group and order the rules in S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mining interesting rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We perform experiments on the New York Times part of the Annotated Gigaword dataset by (Napoles et al., 2012) . Chains of narrative events are constructed from the (automatically generated) in-document coreference chains: from each document in the dataset, we extract all coreference chains and retain the longest one, with length two or greater. We also filter top-10 occurring events which are mostly reporting verbs such as \"say\" and \"think\" and convey little meaning for SI task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 109, |
|
"text": "(Napoles et al., 2012)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Training is done on the 1994-2006 portion (1.3M chains with 8.7M narrative events), development set is a subset of 2007-2008 portion (10K chains with 62K narrative events), and test set is a subset of 2009-2010 portion (5K chains with 31K narrative events).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We implement and compare models described in Sections 3 and 4, along with a strong baseline Unigram model by Pichotta and Mooney (2014) , which ranks each event according to its unigram probability in the training corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 135, |
|
"text": "Pichotta and Mooney (2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model setup", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For testing the Unordered PMI and Bigram models, we use implementations from the Nachos software package (Rudinger et al., 2015a) . Both models are tuned to use skip-grams (as defined by Jans et al. (2012)) of size up to the chain length, which allows to reduce data sparsity and is consistent with the set of rules (of size two) generated by ARM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 129, |
|
"text": "(Rudinger et al., 2015a)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model setup", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "ARM consists of 1) mining frequent itemsets and 2) obtaining interesting rules from those itemsets. For frequent itemsets mining, we use the FP-growth algorithm by Han et al. (2000) with a t sup = 100 threshold. For rule mining, we implement Algorithm 1. Since the rule mining step is much less computationally intensive than itemset mining, we can use a more permissive t int = 10 \u22125 threshold. We use the same thresholds across all models by applying the following back-off strategy in the Unordered PMI and Bigram models: ", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 181, |
|
"text": "Han et al. (2000)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model setup", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "P", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model setup", |
|
"sec_num": "5.2" |
|
}, |
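
{

"text": "As a sketch of the itemset-mining step (ours; the paper does not name a software implementation, and the open-source mlxtend package is one FP-growth option), note that mlxtend expects a fractional support, so the absolute threshold t_sup is divided by the number of chains:\n\nimport pandas as pd\nfrom mlxtend.preprocessing import TransactionEncoder\nfrom mlxtend.frequent_patterns import fpgrowth\n\n# Toy stand-in for the 1.3M-chain training database.\nchains = [['arrest', 'charge', 'convict'], ['arrest', 'charge'],\n          ['arrest', 'convict', 'sentence'], ['charge', 'convict']]\n\nte = TransactionEncoder()\nonehot = pd.DataFrame(te.fit(chains).transform(chains), columns=te.columns_)\n\nt_sup = 2  # the paper uses an absolute threshold of 100 on the full corpus\nitemsets = fpgrowth(onehot, min_support=t_sup / len(chains), use_colnames=True)\nprint(itemsets)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model setup",

"sec_num": "5.2"

},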
|
{ |
|
"text": "We perform two experiments, comparing existing count-based SI models with three variants of the proposed ARM model. The performance is measured using Recall@50 and Mean Reciprocal Rank.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6" |
|
}, |
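
{

"text": "Both metrics reduce to simple functions of the rank a model assigns to the held-out event; a minimal sketch (ours, with illustrative ranks):\n\ndef mrr(ranks):\n    # Mean Reciprocal Rank over the 1-based ranks of the gold event.\n    return sum(1.0 / r for r in ranks) / len(ranks)\n\ndef recall_at(k, ranks):\n    # Fraction of cloze instances whose gold event appears in the top k.\n    return sum(r <= k for r in ranks) / len(ranks)\n\nranks = [1, 3, 120, 40, 7]\nprint(round(mrr(ranks), 3), recall_at(50, ranks))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Results",

"sec_num": "6"

},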
|
{ |
|
"text": "In the first experiment, we establish that the count-based pruning, introduced by ARM support and interestingness thresholds (t sup and t int , respectively) for reducing the search space during rule mining, does contribute to better performance on the narrative cloze test. We also validate empirically that the ARM model with binary (of size two) rules is equivalent to the UOP model by Chambers and Jurafsky (2008) . Finally, we compare variants of the ARM model, which vary in a way of incorporating a prior probability of the missing event. We conclude that the posterior ARM model, given by Eq. 7, achieves the best performance. The results of this experiment are outlined in Table 2. In the second experiment, we compare the bestscoring ARM model and other baseline models on 5,000 test chains. We achieve 5% relative improvement for Mean Reciprocal Rank (MRR) and 10% for Recall@50, which can be attributed to using higher-order count statistics and the selection of the prior for the missing event. The scalability of both rule mining and inference algorithms suggests that the performance may be further improved as the training corpus size grows and more reliable higher-order statistics become available. The results of this experiment are shown in Table 3 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 389, |
|
"end": 417, |
|
"text": "Chambers and Jurafsky (2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 682, |
|
"end": 690, |
|
"text": "Table 2.", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 1261, |
|
"end": 1268, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Similar to Rudinger et al. (2015b) , we also note that all models tend to improve their performance on longer chains, which may be explained by the availability of additional contextual information. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 34, |
|
"text": "Rudinger et al. (2015b)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our decision to approach count-based SI as ARM was motivated by a previously under-explored similarity of these well-established areas, which we outlined in this paper. Drawing similarities from the existing work on Classification using Association Rules (CAR) (Liu et al., 1998; Thabtah et al., 2005) , we proposed a scoring function that uses ARM-based count statistics to reliably predict the missing event in the narrative cloze test.", |
|
"cite_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 279, |
|
"text": "(Liu et al., 1998;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 301, |
|
"text": "Thabtah et al., 2005)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "One downside of relying solely on count-based statistics is the low support of longer itemsets due to data sparsity. On the other hand, modern contextual encoders (Devlin et al., 2018) mitigate this via parameter sharing. Reliably mining rules whose support and interestingness are based on both counts and properties of dense embeddings can be a promising direction of future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 184, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by DARPA KAIROS. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsement.We would like to thank Suzanna Sia, Kenton Murray, Noah Weber, and three anonymous reviewers for their feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Mining association rules between sets of items in large databases", |
|
"authors": [ |
|
{ |
|
"first": "Rakesh", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomasz", |
|
"middle": [], |
|
"last": "Imieli\u0144ski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arun", |
|
"middle": [], |
|
"last": "Swami", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the 1993 ACM SIGMOD international conference on Management of data", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "207--216", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rakesh Agrawal, Tomasz Imieli\u0144ski, and Arun Swami. 1993. Mining association rules between sets of items in large databases. In Proceedings of the 1993 ACM SIGMOD international conference on Manage- ment of data, pages 207-216.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Fast algorithms for mining association rules", |
|
"authors": [ |
|
{ |
|
"first": "Rakesh", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramakrishnan", |
|
"middle": [], |
|
"last": "Srikant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proc. 20th int. conf. very large data bases, VLDB", |
|
"volume": "1215", |
|
"issue": "", |
|
"pages": "487--499", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rakesh Agrawal, Ramakrishnan Srikant, et al. 1994. Fast algorithms for mining association rules. In Proc. 20th int. conf. very large data bases, VLDB, volume 1215, pages 487-499.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Mining association rules from clinical narratives", |
|
"authors": [ |
|
{ |
|
"first": "Svetla", |
|
"middle": [], |
|
"last": "Boytcheva", |
|
"suffix": "" |
|
},

{

"first": "Ivelina",

"middle": [],

"last": "Nikolova",

"suffix": ""

},

{

"first": "Galia",

"middle": [],

"last": "Angelova",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "Ivelina Nikolova, and Galia Angelova", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "130--138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Svetla Boytcheva, Ivelina Nikolova, and Galia An- gelova. 2017. Mining association rules from clinical narratives. In RANLP, pages 130-138.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised learning of narrative event chains", |
|
"authors": [ |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "789--797", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Unsu- pervised learning of narrative event chains. In Pro- ceedings of ACL-08: HLT, pages 789-797.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Unsupervised learning of narrative schemas and their participants", |
|
"authors": [ |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "602--610", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2009. Unsu- pervised learning of narrative schemas and their par- ticipants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Lan- guage Processing of the AFNLP: Volume 2-Volume 2, pages 602-610. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A greedy heuristic for the setcovering problem", |
|
"authors": [ |
|
{ |
|
"first": "Vasek", |
|
"middle": [], |
|
"last": "Chvatal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "Mathematics of operations research", |
|
"volume": "4", |
|
"issue": "3", |
|
"pages": "233--235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vasek Chvatal. 1979. A greedy heuristic for the set- covering problem. Mathematics of operations re- search, 4(3):233-235.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Amie: association rule mining under incomplete evidence in ontological knowledge bases", |
|
"authors": [ |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Antonio Gal\u00e1rraga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christina", |
|
"middle": [], |
|
"last": "Teflioudi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Hose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Suchanek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd international conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "413--422", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luis Antonio Gal\u00e1rraga, Christina Teflioudi, Katja Hose, and Fabian Suchanek. 2013. Amie: associa- tion rule mining under incomplete evidence in onto- logical knowledge bases. In Proceedings of the 22nd international conference on World Wide Web, pages 413-422.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Mining frequent patterns without candidate generation", |
|
"authors": [ |
|
{ |
|
"first": "Jiawei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Pei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiwen", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "ACM sigmod record", |
|
"volume": "29", |
|
"issue": "2", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiawei Han, Jian Pei, and Yiwen Yin. 2000. Mining fre- quent patterns without candidate generation. ACM sigmod record, 29(2):1-12.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Skip n-grams and ranking functions for predicting script events", |
|
"authors": [ |
|
{ |
|
"first": "Bram", |
|
"middle": [], |
|
"last": "Jans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [ |
|
"Francine" |
|
], |
|
"last": "Moens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "336--344", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bram Jans, Steven Bethard, Ivan Vuli\u0107, and Marie Francine Moens. 2012. Skip n-grams and ranking functions for predicting script events. In Proceedings of the 13th Conference of the European Chapter of the Association for Computa- tional Linguistics, pages 336-344. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Chapter 4: Naive Bayes and Sentiment Classification (Draft of October 2", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "James", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Jurafsky and James H Martin. 2019. Speech and language processing (draft). Chapter 4: Naive Bayes and Sentiment Classification (Draft of Octo- ber 2, 2019).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Reducibility among combinatorial problems", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Richard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Karp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "In Complexity of computer computations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "85--103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard M Karp. 1972. Reducibility among combina- torial problems. In Complexity of computer compu- tations, pages 85-103. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Integrating classification and association rule mining", |
|
"authors": [ |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wynne", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "KDD", |
|
"volume": "98", |
|
"issue": "", |
|
"pages": "80--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bing Liu, Wynne Hsu, Yiming Ma, et al. 1998. In- tegrating classification and association rule mining. In KDD, volume 98, pages 80-86.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A framework for representing knowledge. mit-ai laboratory memo 306. Massachusetts Institute of Technology", |
|
"authors": [ |
|
{ |
|
"first": "Marvin", |
|
"middle": [], |
|
"last": "Minsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1974, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marvin Minsky. 1974. A framework for represent- ing knowledge. mit-ai laboratory memo 306. Mas- sachusetts Institute of Technology.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Learning schemata for natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Raymond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerald", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dejong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "681--687", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raymond J Mooney and Gerald DeJong. 1985. Learn- ing schemata for natural language processing. In IJ- CAI, pages 681-687.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Annotated gigaword", |
|
"authors": [ |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Gormley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "95--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Pro- ceedings of the Joint Workshop on Automatic Knowl- edge Base Construction and Web-scale Knowledge Extraction, pages 95-100. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Unsupervised detection of annotation inconsistencies using apriori algorithm", |
|
"authors": [ |
|
{ |
|
"first": "V\u00e1clav", |
|
"middle": [], |
|
"last": "Nov\u00e1k", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Magda", |
|
"middle": [], |
|
"last": "Raz\u00edmov\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Third Linguistic Annotation Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "138--141", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V\u00e1clav Nov\u00e1k and Magda Raz\u00edmov\u00e1. 2009. Unsuper- vised detection of annotation inconsistencies using apriori algorithm. In Proceedings of the Third Lin- guistic Annotation Workshop, pages 138-141. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Two discourse driven language models for semantics", |
|
"authors": [ |
|
{ |
|
"first": "Haoruo", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.05679" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haoruo Peng and Dan Roth. 2016. Two discourse driven language models for semantics. arXiv preprint arXiv:1606.05679.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Statistical script learning with multi-argument events", |
|
"authors": [ |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Pichotta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "220--229", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Pichotta and Raymond Mooney. 2014. Statisti- cal script learning with multi-argument events. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Lin- guistics, pages 220-229.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Learning statistical scripts with lstm recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Pichotta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Raymond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Thirtieth AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Pichotta and Raymond J Mooney. 2016. Learn- ing statistical scripts with lstm recurrent neural net- works. In Thirtieth AAAI Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Learning to predict script events from domainspecific text", |
|
"authors": [ |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Rudinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashutosh", |
|
"middle": [], |
|
"last": "Modi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Pinkal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "205--210", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rachel Rudinger, Vera Demberg, Ashutosh Modi, Ben- jamin Van Durme, and Manfred Pinkal. 2015a. Learning to predict script events from domain- specific text. In Proceedings of the Fourth Joint Conference on Lexical and Computational Seman- tics, pages 205-210.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Script induction as language modeling", |
|
"authors": [ |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Rudinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpendre", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Ferraro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1681--1686", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rachel Rudinger, Pushpendre Rastogi, Francis Ferraro, and Benjamin Van Durme. 2015b. Script induction as language modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1681-1686.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Scripts, plans, goals and understanding: An inquiry into human knowledge structures", |
|
"authors": [ |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Schank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Abelson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roger Schank and Robert Abelson. 1977. Scripts, plans, goals and understanding: An inquiry into hu- man knowledge structures.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Acquiring strongly-related events using predicate-argument co-occurring statistics and case frames", |
|
"authors": [ |
|
{ |
|
"first": "Tomohide", |
|
"middle": [], |
|
"last": "Shibata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1028--1036", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomohide Shibata and Sadao Kurohashi. 2011. Acquir- ing strongly-related events using predicate-argument co-occurring statistics and case frames. In Proceed- ings of 5th International Joint Conference on Natu- ral Language Processing, pages 1028-1036.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Mcar: multi-class classification based on association rule", |
|
"authors": [ |
|
{ |
|
"first": "Fadi", |
|
"middle": [], |
|
"last": "Thabtah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Cowling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghong", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "The 3rd ACS/IEEE International Conference onComputer Systems and Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fadi Thabtah, Peter Cowling, and Yonghong Peng. 2005. Mcar: multi-class classification based on as- sociation rule. In The 3rd ACS/IEEE International Conference onComputer Systems and Applications, 2005., page 33. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Niranjan Balasubramanian, and Nathanael Chambers", |
|
"authors": [ |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leena", |
|
"middle": [], |
|
"last": "Shekhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niranjan", |
|
"middle": [], |
|
"last": "Balasubramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Hierarchical quantized representations for script generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1808.09542" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noah Weber, Leena Shekhar, Niranjan Balasubrama- nian, and Nathanael Chambers. 2018. Hierarchi- cal quantized representations for script generation. arXiv preprint arXiv:1808.09542.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Scalable algorithms for association mining", |
|
"authors": [ |
|
{ |
|
"first": "Mohammed Javeed", |
|
"middle": [], |
|
"last": "Zaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "IEEE transactions on knowledge and data engineering", |
|
"volume": "12", |
|
"issue": "3", |
|
"pages": "372--390", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammed Javeed Zaki. 2000. Scalable algorithms for association mining. IEEE transactions on knowl- edge and data engineering, 12(3):372-390.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "Graphical depiction of a Prosecution narrative chain learned byChambers and Jurafsky (2008). Arrows indicate partial temporal ordering.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "(e i |e) \u2022 (P (e)) k = arg max e\u2208E log L i=1 P (e i |e) \u2022 (P (e)) k = arg max e\u2208E log P (e 1 , . . . , e L |e) + k \u2022 log P (e),", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "e\u2208E log P (e 1 , . . . , e L |e) + log P (e) = arg max e\u2208E log (P (e 1 , . . . , e L |e) \u2022 P (e)) = arg max e\u2208E log P (e|e 1 , . . . , e L ).", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
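FIGREF1 and FIGREF2 together reduce missing-event prediction to an arg max over candidate events of the chain log-likelihood plus a scaled log-prior. A minimal Python sketch of that scoring loop, assuming full conditional independence as in the equations; the names `pair_count` and `event_count` and the add-one smoothing are illustrative assumptions, not the paper's exact estimator:

```python
import math

def predict_missing_event(chain, vocab, pair_count, event_count, k=1.0):
    """Return arg max over e of sum_i log P(e_i | e) + k * log P(e).

    chain:       observed events e_1 .. e_L of the incomplete narrative chain
    vocab:       candidate events E
    pair_count:  mapping (e_i, e) -> co-occurrence count (assumed precomputed)
    event_count: mapping e -> unigram count (assumed precomputed)
    """
    total = sum(event_count.values())
    best, best_score = None, float("-inf")
    for e in vocab:
        # k * log P(e), add-one smoothed (the smoothing is an assumption here)
        score = k * math.log((event_count.get(e, 0) + 1) / (total + len(vocab)))
        for e_i in chain:
            # log P(e_i | e), one factor per observed event, under full
            # conditional independence
            score += math.log((pair_count.get((e_i, e), 0) + 1) /
                              (event_count.get(e, 0) + len(vocab)))
        if score > best_score:
            best, best_score = e, score
    return best
```

With k = 1, this is exactly the posterior arg max over e of log P(e | e_1, ..., e_L) derived in FIGREF2.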
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "{S 1 ,...,S t }\u2208Se score(S 1 , . . . , S t ; e).", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
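FIGREF3 picks the subset of rules {S_1, ..., S_t} from the candidate family S_e that maximizes score(S_1, ..., S_t; e). A brute-force sketch of that selection, shown only to make the objective concrete; the `score` callable, the `max_size` cap, and exhaustive enumeration are all assumptions (the paper instead frames this as weighted set cover):

```python
from itertools import combinations

def best_rule_subset(candidates, e, score, max_size=3):
    """Enumerate subsets {S_1, ..., S_t} of candidate itemsets and keep
    the one maximizing score(S_1, ..., S_t; e)."""
    best, best_val = None, float("-inf")
    for t in range(1, max_size + 1):
        for subset in combinations(candidates, t):
            val = score(*subset, e=e)
            if val > best_val:
                best, best_val = subset, val
    return best
```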
|
"FIGREF4": { |
|
"uris": null, |
|
"text": "sup(S \u222a I) = wsup k (I), where S ranges over all itemsets of size k, disjoint with I.Proof. By definition of support from Eq. 1, Mining interesting rules 1: Input: A set of high-support itemsets I, 2: Output: A set of interesting rules S. 3: Initialization: S = \u2205 4: for I \u2208 I do", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
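FIGREF4 preserves only the skeleton of the "Mining interesting rules" procedure: input a set of high-support itemsets, output a set of interesting rules, initialize the rule set to empty, and loop over the itemsets. The loop body is lost to extraction, so the confidence filter below is a hedged reconstruction of a standard ARM rule-generation step, not the paper's exact algorithm:

```python
def mine_interesting_rules(itemsets, support, t_int):
    """For each high-support itemset I, emit rules (I - {e}) -> e whose
    confidence sup(I) / sup(I - {e}) clears t_int (assumed criterion).

    support: mapping frozenset -> support count; by downward closure,
    every subset of a high-support itemset is present in this mapping.
    """
    rules = set()                        # 3: Initialization: S = empty set
    for I in itemsets:                   # 4: for I in high-support itemsets
        for e in I:
            antecedent = frozenset(I) - {e}
            if not antecedent:
                continue
            confidence = support[frozenset(I)] / support[antecedent]
            if confidence >= t_int:
                rules.add((antecedent, e))
    return rules
```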
|
"FIGREF5": { |
|
"uris": null, |
|
"text": "(e i |e) = C(e i ,e) C( * ,e) if C(e i , e) \u2265 t ARM , 1 |E|+1 otherwise, where t ARM = max (t sup , C( * , e) \u2022 t int ). t sup & t int pruning, (4)) 0.28 UOP (only t sup pruning, (4)) 0.28 UOP (only t int pruning, (4)) 0.03 UOP (no t int & t sup pruning, (4)) 0.03", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
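FIGREF5's estimator is directly implementable: the raw conditional frequency is trusted only when the pair count clears t_ARM = max(t_sup, C(*, e) * t_int), and otherwise falls back to the uniform floor 1 / (|E| + 1). A short transcription, with count-table argument names assumed:

```python
def cond_prob(e_i, e, pair_count, marginal_count, vocab_size, t_sup, t_int):
    """P(e_i | e) with support (t_sup) and interestingness (t_int) pruning."""
    c_star_e = marginal_count.get(e, 0)            # C(*, e)
    t_arm = max(t_sup, c_star_e * t_int)           # pruning threshold
    c = pair_count.get((e_i, e), 0)                # C(e_i, e)
    if c_star_e > 0 and c >= t_arm:
        return c / c_star_e                        # trusted relative frequency
    return 1.0 / (vocab_size + 1)                  # uniform fallback
```

The ablation rows above fit this reading: keeping t_sup pruning (alone or with t_int) holds UOP at 0.28, while dropping it collapses performance to 0.03.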
|
"TABREF0": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Ablation experiments on NYTimes dev set. R@50 stands for Recall@50.", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Narrative cloze results bucketed by incomplete narrative chain length for each model and scoring function with best results in bold. The models are Unigram Model (UNI), Unordered PMI (UOP), Bigram Probability Model (BG), and proposed ARM model (ARM).", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |