{ "paper_id": "N03-2005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:07:45.902003Z" }, "title": "Story Link Detection and New Event Detection are Asymmetric", "authors": [ { "first": "Francine", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "PARC", "location": { "addrLine": "3333 Coyote Hill Rd", "postCode": "94304", "settlement": "Palo Alto", "region": "CA" } }, "email": "fchen@parc.com" }, { "first": "Ayman", "middle": [], "last": "Farahat", "suffix": "", "affiliation": {}, "email": "farahat@parc.com" }, { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "", "affiliation": {}, "email": "thorsten@brants.net" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Story link detection has been regarded as a core technology for other Topic Detection and Tracking tasks such as new event detection. In this paper we analyze story link detection and new event detection in a retrieval framework and examine the effect of a number of techniques, including part of speech tagging, new similarity measures, and an expanded stop list, on the performance of the two detection tasks. We present experimental results that show that the utility of the techniques on the two tasks differs, as is consistent with our analysis.", "pdf_parse": { "paper_id": "N03-2005", "_pdf_hash": "", "abstract": [ { "text": "Story link detection has been regarded as a core technology for other Topic Detection and Tracking tasks such as new event detection. In this paper we analyze story link detection and new event detection in a retrieval framework and examine the effect of a number of techniques, including part of speech tagging, new similarity measures, and an expanded stop list, on the performance of the two detection tasks. 
We present experimental results that show that the utility of the techniques on the two tasks differs, as is consistent with our analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Topic Detection and Tracking (TDT) research is sponsored by the DARPA TIDES program. The research has five tasks related to organizing streams of data such as newswire and broadcast news (Wayne, 2000) . A link detection (LNK) system detects whether two stories are \"linked\", or discuss the same event. A story about a plane crash and another story about the funeral of the crash victims are considered to be linked. In contrast, a story about Hurricane Andrew and a story about Hurricane Agnes are not linked because they are two different events. A new event detection (NED) system detects when a story discusses a previously unseen event. Link detection is considered to be a core technology for new event detection and the other tasks.", "cite_spans": [ { "start": 187, "end": 200, "text": "(Wayne, 2000)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several groups are performing research on the TDT tasks of link detection and new event detection (e.g., (Carbonell et al., 2001 ) (Allan et al., 2000) ). In this paper, we compare the link detection and new event detection tasks in an information retrieval framework, examining the criteria for improving a NED system based on a LNK system, and giving specific directions for improving each system separately. 
We also investigate the utility of a number of techniques for improving the systems.", "cite_spans": [ { "start": 105, "end": 128, "text": "(Carbonell et al., 2001", "ref_id": "BIBREF3" }, { "start": 131, "end": 151, "text": "(Allan et al., 2000)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Link Detection and New Event Detection systems that we developed for TDT2002 share many processing steps. These include preprocessing to tokenize the data, recognize and normalize abbreviations, remove stop-words, replace spelled-out numbers by digits, add part-of-speech tags, and replace the tokens by their stems; term-frequency vectors are then generated. Document frequency counts are incrementally updated as new sources of stories are presented to the system. Additionally, separate source-specific counts are used so that, for example, the term frequencies for the New York Times are computed separately from those for CNN. The source-specific, incremental document frequency counts are used to compute a TF-IDF term vector for each story. Stories are compared using either the cosine distance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Processing and Models", "sec_num": "2" }, { "text": "sim_cos(d_1, d_2) = sum_t f(d_1, t) f(d_2, t) / sqrt(sum_t f(d_1, t)^2 * sum_t f(d_2, t)^2), or the Hellinger distance sim_Hel(d_1, d_2) = sum_t sqrt((f(d_1, t) / sum_t' f(d_1, t')) * (f(d_2, t) / sum_t' f(d_2, t'))), for terms t in documents d_1 and d_2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Processing and Models", "sec_num": "2" }, { "text": "To help compensate for stylistic differences between various sources, e.g., newspaper vs. broadcast news, translation errors, and automatic speech recognition errors (Allan et al., 1999), we subtract the average observed similarity values, in a similar spirit to the use of thresholds conditioned on the sources (Carbonell et al., 2001).", "cite_spans": [ { "start": 169, "end": 189, "text": "(Allan et al., 1999)", "ref_id": "BIBREF0" }, { "start": 314, "end": 338, "text": "(Carbonell et al., 2001)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Common Processing and Models", "sec_num": "2" }, { "text": "In order to decide whether a new document d describes a new event, it is compared to all previous documents, and the document d' with the highest similarity is identified. 
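A minimal sketch of this comparison step, assuming stories are represented as term-to-weight dictionaries (the helper names and data layout are illustrative, not the actual system):

```python
import math

def cosine_sim(v1, v2):
    # Cosine similarity between two term-to-weight dicts.
    shared = set(v1) & set(v2)
    num = sum(v1[t] * v2[t] for t in shared)
    den = math.sqrt(sum(w * w for w in v1.values()) * sum(w * w for w in v2.values()))
    return num / den if den else 0.0

def hellinger_sim(v1, v2):
    # Hellinger affinity between the two length-normalized term distributions.
    s1, s2 = sum(v1.values()), sum(v2.values())
    return sum(math.sqrt((v1[t] / s1) * (v2[t] / s2)) for t in set(v1) & set(v2))

def most_similar(doc, history, sim=cosine_sim):
    # Return the (document, score) pair from the story history most similar to doc.
    return max(((d, sim(doc, d)) for d in history), key=lambda pair: pair[1])
```

Here most_similar scans the full story history; the systems described above additionally apply TF-IDF weighting and source-specific adjustments before comparison.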
If the score score(d) = 1 - max_{d'} sim(d, d') exceeds a threshold theta, then there is no sufficiently similar previous document, and d is classified as a new event.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Event Detection", "sec_num": "3" }, { "text": "In order to decide whether a pair of stories d_1 and d_2 are linked, we compute the similarity between the two documents using the cosine and Hellinger metrics. The similarity metrics are combined using a support vector machine, and the margin is used as a confidence measure that is thresholded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Link Detection", "sec_num": "4" }, { "text": "TDT system evaluation is based on the number of false alarms and misses produced by a system. In link detection, the system should detect linked story pairs; in new event detection, the system should detect new stories. A detection cost", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C_Det = C_Miss * P_Miss * P_target + C_FA * P_FA * P_non-target", "eq_num": "(1)" } ], "section": "Evaluation Metric", "sec_num": "5" }, { "text": "is computed, where C_Miss and C_FA are the costs of a miss and a false alarm, P_Miss and P_FA are the conditional probabilities of a miss and a false alarm, and P_target and P_non-target are the a priori target and non-target probabilities, set to 0.02 and 0.98, respectively. 
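As a sketch, the cost of Eqn. (1) can be computed as follows; the cost values C_MISS = 1 and C_FA = 0.1 are the standard TDT settings, assumed here rather than taken from the text:

```python
# Sketch of the TDT detection cost of Eqn. (1).
# C_MISS and C_FA are assumed standard TDT cost values, not given in the text.
C_MISS, C_FA = 1.0, 0.1
P_TARGET, P_NONTARGET = 0.02, 0.98

def detection_cost(p_miss, p_fa):
    # C_Det = C_Miss * P_Miss * P_target + C_FA * P_FA * P_non-target
    return C_MISS * p_miss * P_TARGET + C_FA * p_fa * P_NONTARGET
```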
The detection cost is normalized by dividing by min(C_Miss * P_target, C_FA * P_non-target) so that a perfect system scores 0 and a random baseline scores 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "5" }, { "text": "Equal weight is given to each topic by accumulating error probabilities separately for each topic and then averaging them. The minimum detection cost is the decision cost when the decision threshold is set to the optimal confidence score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "5" }, { "text": "The conditions for false alarms and misses are reversed for the LNK and NED tasks. In the LNK task, incorrectly flagging two stories as being on the same event is considered a false alarm. In contrast, in the NED task, incorrectly flagging two stories as being on the same event will cause a true first story to be missed. Conversely, in the LNK task, incorrectly labeling two stories that are on the same event as not linked is a miss, but for the NED task, incorrectly labeling two stories on the same event as not linked may result in a false alarm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences between LNK and NED", "sec_num": "6" }, { "text": "In this section, we analyze the utility of a number of techniques for the LNK and NED tasks in an information retrieval framework. The detection cost in Eqn. 
1 assigns a higher cost to false alarms, since C_Miss * P_target = 0.02 and C_FA * P_non-target = 0.098. A LNK system should minimize false alarms by identifying only linked stories, which results in high precision for LNK. In contrast, a NED system will minimize false alarms by identifying all stories that are linked, which translates to high recall for LNK. Based on this observation, we investigated a number of precision- and recall-enhancing techniques for the LNK and NED systems, namely, part-of-speech tagging, an expanded stoplist, and normalizing abbreviations and transforming spelled-out numbers into digits. We also investigated the use of different similarity measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences between LNK and NED", "sec_num": "6" }, { "text": "The systems developed for TDT primarily use cosine similarity as the similarity measure. In work on text segmentation (Brants et al., 2002) , better performance was observed with the Hellinger measure. Table 1 shows that for LNK, the system based on cosine similarity performed better; in contrast, for NED, the system based on Hellinger similarity performed better.", "cite_spans": [ { "start": 118, "end": 139, "text": "(Brants et al., 2002)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 202, "end": 209, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Similarity Measures", "sec_num": "6.1" }, { "text": "The LNK task requires high precision, which corresponds to a large separation between the on-topic and off-topic distributions, as shown for the cosine metric in Figure 1 . The NED task requires high recall (low CDF values for on-topic). 
Figure 2, which is based on pairs that contain the current story and its most similar story in the story history, shows a greater separation in this region with the Hellinger metric. For example, at 10% recall, the Hellinger metric has a 71% false alarm rate, as compared to 75% for the cosine metric.", "cite_spans": [], "ref_spans": [ { "start": 162, "end": 170, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 238, "end": 246, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Similarity Measures", "sec_num": "6.1" }, { "text": "To reduce confusion among some word senses, we tagged the terms as one of five categories: adjective, noun, proper noun, verb, or other, and then combined the stem and part-of-speech tag to create a \"tagged term\". For example, 'N train' represents the term 'train' when used as a noun. The LNK and NED systems were tested using the tagged terms. Table 2 shows the opposite effect PoS tagging has on LNK and NED.", "cite_spans": [], "ref_spans": [ { "start": 343, "end": 350, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Part-of-Speech (PoS) Tagging", "sec_num": "6.2" }, { "text": "The broadcast news documents in the TDT collection have been transcribed using Automatic Speech Recognition (ASR). There are systematic differences between ASR and manually transcribed text. For example, \"30\" will be spelled out as \"thirty\", and \"CNN\" is represented as three separate tokens \"C\", \"N\", and \"N\". To handle these differences, an \"ASR stoplist\" was created by identifying terms with statistically different distributions in a parallel corpus of manually and automatically transcribed documents, the TDT2 corpus. Table 3 shows that using the ASR stoplist improves the topic-weighted minimum detection cost for LNK but not for NED. We also performed \"enhanced preprocessing\" to normalize abbreviations and transform spelled-out numbers into numerals, which improves both precision and recall. 
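A toy sketch of this normalization step; the number map and the rule joining runs of single capital letters are hypothetical simplifications, not the actual preprocessing components:

```python
# Illustrative ASR-style token normalization; NUMBER_WORDS and the join rule
# are hypothetical simplifications, not the actual system components.
NUMBER_WORDS = {'zero': '0', 'one': '1', 'two': '2', 'three': '3', 'four': '4',
                'five': '5', 'six': '6', 'seven': '7', 'eight': '8', 'nine': '9',
                'ten': '10', 'twenty': '20', 'thirty': '30'}

def normalize_tokens(tokens):
    out, i = [], 0
    while i < len(tokens):
        # Join runs of single capital letters, e.g. C N N -> CNN.
        if len(tokens[i]) == 1 and tokens[i].isupper():
            j = i
            while j < len(tokens) and len(tokens[j]) == 1 and tokens[j].isupper():
                j += 1
            if j - i > 1:
                out.append(''.join(tokens[i:j]))
                i = j
                continue
        # Replace spelled-out numbers by digits, e.g. thirty -> 30.
        out.append(NUMBER_WORDS.get(tokens[i].lower(), tokens[i]))
        i += 1
    return out
```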
Table 3 shows that enhanced preprocessing exhibits worse performance than the ASR stoplist for Link Detection, but yields the best results for New Event Detection.", "cite_spans": [], "ref_spans": [ { "start": 523, "end": 530, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 814, "end": 821, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Stop Words", "sec_num": "6.3" }, { "text": "We have presented a comparison of story link detection and new event detection in a retrieval framework, showing that the two tasks are asymmetric in the optimization of precision and recall. We performed experiments comparing the effect of several techniques on the performance of LNK and NED systems. Although many of the processing techniques used by our systems are the same, the results of our experiments indicate that some techniques affect the performance of LNK and NED systems differently. These differences may be due in part to the asymmetry in the tasks and the corresponding differences in whether improving precision or recall for the link task is more important.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and Conclusions", "sec_num": "7" } ], "back_matter": [ { "text": "We thank James Allan of UMass for suggesting that precision and recall may partially explain the asymmetry of LNK and NED.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "8" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Topic-based novelty detection", "authors": [ { "first": "James", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Hubert", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Rajman", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Wayne", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lavrenko", "suffix": "" }, { "first": "Rose", "middle": [], 
"last": "Hoberman", "suffix": "" }, { "first": "David", "middle": [], "last": "Caputo", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Allan, Hubert Jin, Martin Rajman, Charles Wayne, Dan Gildea, Victor Lavrenko, Rose Hoberman, and David Caputo. 1999. Topic-based novelty detection. Summer workshop final report, Center for Language and Speech Processing, Johns Hopkins University.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "First story detection in TDT is hard", "authors": [ { "first": "James", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lavrenko", "suffix": "" }, { "first": "Hubert", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2000, "venue": "CIKM", "volume": "", "issue": "", "pages": "374--381", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Allan, Victor Lavrenko, and Hubert Jin. 2000. First story detection in TDT is hard. In CIKM, pages 374-381.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Topic-based document segmentation with probabilistic latent semantic analysis", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Francine", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Tsochantaridis", "suffix": "" } ], "year": 2002, "venue": "CIKM", "volume": "", "issue": "", "pages": "211--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants, Francine Chen, and Ioannis Tsochan- taridis. 2002. Topic-based document segmentation with probabilistic latent semantic analysis. In CIKM, pages 211-218, McLean, VA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Cmu tdt report. 
Slides at the TDT-2001 meeting", "authors": [ { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Chun", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaime Carbonell, Yiming Yang, Ralf Brown, Chun Jin, and Jian Zhang. 2001. Cmu tdt report. Slides at the TDT-2001 meeting, CMU.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Multilingual topic detection and tracking: Successful research enabled by corpora and evaluation", "authors": [ { "first": "Charles", "middle": [], "last": "Wayne", "suffix": "" } ], "year": 2000, "venue": "LREC", "volume": "", "issue": "", "pages": "1487--1494", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Wayne. 2000. Multilingual topic detection and tracking: Successful research enabled by corpora and evaluation. In LREC, pages 1487-1494, Athens, Greece.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "CDF for cosine and Hellinger similarity on the LNK task for on-topic and off-topic pairs." }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "CDF for cosine and Hellinger similarity on the NED task for on-topic and off-topic pairs." }, "TABREF1": { "num": null, "type_str": "table", "text": "Effect of different similarity measures on topic-weighted minimum normalized detection costs on the TDT 2002 dry run data.", "html": null, "content": "
System   Cosine   Hellinger   Change (%)
LNK      0.3180   0.3777      -0.0597 (-18.8)
NED      0.7059   0.5873      +0.1186 (+16.3)
" }, "TABREF2": { "num": null, "type_str": "table", "text": "", "html": null, "content": "
: Effect of using part-of-speech on minimum nor-
malized detection costs on the TDT 2002 dry run data.
System   no PoS   PoS      Change (%)
LNK      0.3180   0.3334   -0.0154 (-4.8)
NED      0.6403   0.5873   +0.0530 (+8.3)
" }, "TABREF3": { "num": null, "type_str": "table", "text": "Effect of using an \"ASR stoplist\" and \"enhanced preprocessing\" for handling ASR differences on the TDT 2001 evaluation data.", "html": null, "content": "
ASR stop   No      Yes             No
Preproc    Std     Std             Enh
LNK        0.312   0.299 (+4.4%)   0.301 (+3.3%)
NED        0.606   0.641 (-5.5%)   0.587 (+3.1%)
" } } } }