{ "paper_id": "P03-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:14:25.115582Z" }, "title": "Optimizing Story Link Detection is not Equivalent to Optimizing New Event Detection", "authors": [ { "first": "Ayman", "middle": [], "last": "Farahat", "suffix": "", "affiliation": { "laboratory": "", "institution": "PARC", "location": { "addrLine": "3333 Coyote Hill Rd", "postCode": "94304", "settlement": "Palo Alto", "region": "CA" } }, "email": "farahat@parc.com" }, { "first": "Francine", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "fchen@parc.com" }, { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "", "affiliation": {}, "email": "thorsten@brants.net" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Link detection has been regarded as a core technology for the Topic Detection and Tracking tasks of new event detection. In this paper we formulate story link detection and new event detection as information retrieval task and hypothesize on the impact of precision and recall on both systems. Motivated by these arguments, we introduce a number of new performance enhancing techniques including part of speech tagging, new similarity measures and expanded stop lists. Experimental results validate our hypothesis.", "pdf_parse": { "paper_id": "P03-1030", "_pdf_hash": "", "abstract": [ { "text": "Link detection has been regarded as a core technology for the Topic Detection and Tracking tasks of new event detection. In this paper we formulate story link detection and new event detection as information retrieval task and hypothesize on the impact of precision and recall on both systems. Motivated by these arguments, we introduce a number of new performance enhancing techniques including part of speech tagging, new similarity measures and expanded stop lists. Experimental results validate our hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Topic Detection and Tracking (TDT) research is sponsored by the DARPA Translingual Information Detection, Extraction, and Summarization (TIDES) program. The research has five tasks related to organizing streams of data such as newswire and broadcast news (Wayne, 2000) : story segmentation, topic tracking, topic detection, new event detection (NED), and link detection (LNK). A link detection system detects whether two stories are \"linked\", or discuss the same event. A story about a plane crash and another story about the funeral of the crash victims are considered to be linked. In contrast, a story about hurricane Andrew and a story about hurricane Agnes are not linked because they are two different events. A new event detection system detects when a story discusses a previously unseen or \"not linked\" event. Link detection is considered to be a core technology for new event detection and the other tasks.", "cite_spans": [ { "start": 255, "end": 268, "text": "(Wayne, 2000)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several groups are performing research in the TDT tasks of link detection and new event detection. Based on their findings, we incorporated a number of their ideas into our baseline system. 
CMU (Yang et al., 1998) and UMass (Allan et al., 2000a) found that for new event detection it was better to compare a new story against all previously seen stories than to cluster previously seen stories and compare a new story against the clusters. CMU (Carbonell et al., 2001) found that NED results could be improved by developing separate models for different news sources to that could capture idiosyncrasies of different sources, which we also extended to link detection. UMass reported on adapting a tracking system for NED detection (Allan et al., 2000b) . Allan et. al , (Allan et al., 2000b ) developed a NED system based upon a tracking technology and showed that to achieve high-quality first story detection, tracking effectiveness must improve to a degree that experience suggests is unlikely. In this paper, while we reach a similar conclusion as (Allan et al., 2000b) for LNK and NED systems , we give specific directions for improving each system separately. We compare the link detection and new event detection tasks and discuss ways in which we have observed that techniques developed for one task do not always perform similarly for the other task.", "cite_spans": [ { "start": 194, "end": 213, "text": "(Yang et al., 1998)", "ref_id": "BIBREF15" }, { "start": 224, "end": 245, "text": "(Allan et al., 2000a)", "ref_id": "BIBREF2" }, { "start": 444, "end": 468, "text": "(Carbonell et al., 2001)", "ref_id": "BIBREF5" }, { "start": 731, "end": 752, "text": "(Allan et al., 2000b)", "ref_id": "BIBREF3" }, { "start": 755, "end": 790, "text": "Allan et. al , (Allan et al., 2000b", "ref_id": "BIBREF3" }, { "start": 1052, "end": 1073, "text": "(Allan et al., 2000b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This section describes those parts of the processing steps and the models that are the same for New Event Detection and for Link Detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Processing and Models", "sec_num": "2" }, { "text": "For pre-processing, we tokenize the data, recognize abbreviations, normalize abbreviations, remove stop-words, replace spelled-out numbers by digits, add part-of-speech tags, replace the tokens by their stems, and then generate term-frequency vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-Processing", "sec_num": "2.1" }, { "text": "Our similarity calculations of documents are based on an incremental TF-IDF model. In a TF-IDF model, the frequency of a term in a document (TF) is weighted by the inverse document frequency (IDF). In the incremental model, document frequencies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental TF-IDF Model", "sec_num": "2.2" }, { "text": "\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental TF-IDF Model", "sec_num": "2.2" }, { "text": "are not static but change in time steps is added to the model by updating the frequencies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental TF-IDF Model", "sec_num": "2.2" }, { "text": "\u00a2 \u00a1 \u00a3 \u00a6 \u00a5 \u00a7 \u00a2 \u00a1 \u00a3 \u00a6 \u00a5 \u00a7 ! \" # \u00a1 % $ \u00a3 \u00a6 \u00a5 \u00a7 (1) where # \u00a1 % $", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental TF-IDF Model", "sec_num": "2.2" }, { "text": "denote the document frequencies in the newly added set of documents . 
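As a rough illustration (not the authors' implementation; the names df and new_docs are hypothetical), the update in equation (1) simply adds the document frequencies of the newly arrived documents to the running counts:

    from collections import Counter

    def update_df(df, new_docs):
        # df: Counter mapping term -> number of documents seen so far that contain it
        # new_docs: the newly added set of documents, each given as a collection of terms
        for doc in new_docs:
            for term in set(doc):
                df[term] += 1      # df_t(w) = df_t-1(w) + df_Ct(w)
        return df

    df = Counter()                 # document frequencies from a possibly empty training set
    update_df(df, [['plane', 'crash'], ['crash', 'victims', 'funeral']])
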
The initial document frequencies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental TF-IDF Model", "sec_num": "2.2" }, { "text": "\u00a2 \u00a1 ' & ' \u00a3 \u00a6 \u00a5 ( \u00a7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental TF-IDF Model", "sec_num": "2.2" }, { "text": "are generated from a (possibly emtpy) training set. In a static TF-IDF model, new words (i.e., those words, that did not occur in the training set) are ignored in further computations. An incremental TF-IDF model uses the new vocabulary in similarity calculations. This is an advantage because new events often contain new vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental TF-IDF Model", "sec_num": "2.2" }, { "text": "Very low frequency terms \u00a5 tend to be uninformative. We therefore set a threshold", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental TF-IDF Model", "sec_num": "2.2" }, { "text": ") ' 0 . Only terms with # \u00a1 \u00a3 \u00a6 \u00a5 ( \u00a7 2 1 ) 3 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental TF-IDF Model", "sec_num": "2.2" }, { "text": "are used at time \u00a9 . We use ) % 0 5 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental TF-IDF Model", "sec_num": "2.2" }, { "text": "The document frequencies as described in the previous section are used to calculate weights for the terms \u00a5 in the documents . At time \u00a9 , we use", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term Weighting", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u00a5 ( 6 8 7 @ 9 B A \u00a9 \u00a3 D C E \u00a5 ( \u00a7 F G H \u00a3 I \u00a7 \u00a1 \u00a4 \u00a3 D C E \u00a5 \u00a7 Q P S R U T % V W \u00a2 \u00a1 \u00a3 \u00a6 \u00a5 ( \u00a7", "eq_num": "(2)" } ], "section": "Term Weighting", "sec_num": "2.3" }, { "text": "where W is the total number of documents at time", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term Weighting", "sec_num": "2.3" }, { "text": "\u00a9 . H \u00a3 B \u00a7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term Weighting", "sec_num": "2.3" }, { "text": "is a normalization value such that either the weights sum to 1 (if we use Hellinger distance, KL-divergence, or Clarity-based distance), or their squares sum to 1 (if we use cosine distance).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Term Weighting", "sec_num": "2.3" }, { "text": "The vectors consisting of normalized term weights \u00a5 X 6 S 7 @ 9 B A \u00a9 are used to calculate the similarity between two documents and Y . In our current implementation, we use the the Clarity metric which was introduced by (Croft et al., 2001; Lavrenko et al., 2002) and gets its name from the distance to general English, which is called Clarity. We used a symmetric version that is computed as:", "cite_spans": [ { "start": 222, "end": 242, "text": "(Croft et al., 2001;", "ref_id": "BIBREF7" }, { "start": 243, "end": 265, "text": "Lavrenko et al., 2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Calculation", "sec_num": "2.4" }, { "text": "7 b a c \u00a3 d C Y \u00a7 e g f i h q p X \u00a3 s r U r Y \u00a7 ! \" h t p X \u00a3 s r U r v u ( w x \u00a7 f 2 h t p X \u00a3 Y r U r I \u00a7 ! 
\" h t p y \u00a3 Y r U r v u ( w x \u00a7 (3) h t p y \u00a3 d C Y \u00a7 \u00a5 X 6 S 7 @ 9 B A \u00a9 \u00a3 d C E \u00a5 ( \u00a7 8 P 9 s \u00a3 \u00a5 X 6 S 7 @ 9 B A \u00a9 \u00a3 D C E \u00a5 \u00a7 \u00a5 ( 6 8 7 @ 9 B A \u00a9 \u00a2 \u00a3 Y C E \u00a5 ( \u00a7 D (4) where \" h t p", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Calculation", "sec_num": "2.4" }, { "text": "\" is the Kullback-Leibler divergence, u ( w is the probability distribution of words for \"general English\" as derived from the training corpus. The idea behind this metric is that we want to give credit to similar pairs of documents that are very different from general English, and we want to discount similar pairs of documents that are close to general English (which can be interpreted as being the noise). The motivation for using the clarity metric will given in section 6.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Calculation", "sec_num": "2.4" }, { "text": "Another metric is Hellinger distanc\u00e8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Calculation", "sec_num": "2.4" }, { "text": "7 b a \u00a3 D C Y \u00a7 \u00a5 X 6 S 7 @ 9 B A \u00a9 \u00a3 D C E \u00a5 ( \u00a7 \u00a4 P 8 \u00a5 X 6 S 7 @ 9 B A \u00a9 \u00a3 Y C E \u00a5 \u00a7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Calculation", "sec_num": "2.4" }, { "text": "(5) Other possible similarity metrics are the cosine distance, the Kullback-Leibler divergence, or the symmetric form of it, Jensen-Shannon distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Calculation", "sec_num": "2.4" }, { "text": "Documents in the stream of news stories may stem from different sources, e.g., there are 20 different sources in the data for TDT 2002 (ABC News, Associated Press, New York Times, etc). Each source might use the vocabulary differently. For example, the names of the sources, names of shows, or names of news anchors are much more frequent in their own source than in the other ones. In order to reflect the source-specific differences we do not build one incremental TF-IDF model, but as many as we have different sources and use frequencies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "# \u00a1 % E \u00a3 \u00a6 \u00a5 ( \u00a7", "eq_num": "(6)" } ], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "for source`at time \u00a9 . The frequencies are updated according to equation (1), but only using those documents in that are from the same source`. 
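A minimal sketch of this bookkeeping (hypothetical names, not the actual system) keeps one incremental document-frequency table per source and updates only the table belonging to the source of the incoming story:

    from collections import Counter, defaultdict

    source_df = defaultdict(Counter)   # source name -> term -> document frequency
    source_n = defaultdict(int)        # source name -> number of documents seen

    def add_document(source, terms):
        # update only the model for this story's source, cf. equation (6)
        source_n[source] += 1
        for term in set(terms):
            source_df[source][term] += 1

    add_document('CNN', ['cnn', 'anchor', 'hurricane'])
    add_document('NYT', ['hurricane', 'florida'])
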
As a consequence, a term like \"CNN\" receives a high document frequency (thus low weight) in the model for the source CNN and a low document frequency (thus high weight) in the New York Times model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "Instead of the overall document frequencies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "\u00a2 \u00a1 \u00a3 \u00a6 \u00a5 \u00a7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": ", we now use the source specific", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "\u00a2 \u00a1 \u00a3 \u00a6 \u00a5 \u00a7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "when calculating the term weights in equation 2. Sources`for which no training data is available (i.e., no data to generate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "\u00a2 \u00a1 E & % \u00a3 \u00a6 \u00a5 \u00a7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "is available) might be initialized in two different ways:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "1. Use an empty model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "\u00a2 \u00a1 & \u00a3 \u00a6 \u00a5 ( \u00a7 \u00a1 for all \u00a5 ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "2. Identify one or more other but similar sources \u00a3 \u00a2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "for which training data is available and use", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "# \u00a1 E & % \u00a3 \u00a6 \u00a5 \u00a7 \u00a5 \u00a4 \u00a2 \u00a1 \u00a6 \u00a4 & \u00a3 \u00a6 \u00a5 \u00a7 (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-Specific TF-IDF Model", "sec_num": "2.5" }, { "text": "Due to stylistic differences between various sources, e.g., news paper vs. broadcast news, translation errors, and automatic speech recognition errors (Allan et al., 1999) , the similarity measures for both ontopic and off-topic pairs will in general depend on the source pair. 
Errors due to these differences can be reduced by using thresholds conditioned on the sources (Carbonell et al., 2001) , or, as we do, by normalizing the similarity values based on similarities for the source pairs found in the story history.", "cite_spans": [ { "start": 151, "end": 171, "text": "(Allan et al., 1999)", "ref_id": "BIBREF1" }, { "start": 372, "end": 396, "text": "(Carbonell et al., 2001)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Source-Pair-Specific Normalization", "sec_num": "2.6" }, { "text": "In order to decide whether a new document Y that is added to the collection at time \u00a9 describes a new event, it is individually compared to all previous documents using the steps described in section 2. We identify the document \u00a7 with highest similarity: \u00a7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Event Detection", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u00a9 V \u00a9 0 7 a \u00a3 Y C I \u00a7", "eq_num": "(8)" } ], "section": "New Event Detection", "sec_num": "3" }, { "text": "The value`\u00a3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Event Detection", "sec_num": "3" }, { "text": "' 6 \u00a3 Y \u00a7 \u00a4 G f 7 b a \u00a3 Y C \u00a7 \u00a7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Event Detection", "sec_num": "3" }, { "text": "is used to determine whether a document Y is about a new event and at the same time is an indication of the confidence in our decision. If the score exceeds a threshold", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Event Detection", "sec_num": "3" }, { "text": ", then there is no sufficiently similar previous document, thus Y describes a new event (decision YES). If the score is smaller than )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ")", "sec_num": null }, { "text": ", then \u00a7 is sufficiently similar, thus Y describes an old event (decision NO). The threshold ) can be determined by using labeled training data and calculating similarity scores for document pairs on the same event and on different events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ")", "sec_num": null }, { "text": "In order to decide whether a pair of stories and Y are linked, we identify a set of similarity metrics ! that capture the similarity between the two documents using Clarity and Hellinger metrics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Link Detection", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "! \u00a3 D C Y \u00a7 # \" 7 b a % $ S \u00a3 d C Y \u00a7 C 7 b a % & d \u00a3 d C Y \u00a7 ( '", "eq_num": "(9)" } ], "section": "Link Detection", "sec_num": "4" }, { "text": "The value", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Link Detection", "sec_num": "4" }, { "text": "! \u00a4 \u00a3 d C Y \u00a7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Link Detection", "sec_num": "4" }, { "text": "is used to determine whether stories \"q\" and \"d\" are linked. If the similarity exceeds a threshold ) ) 1 0 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Link Detection", "sec_num": "4" }, { "text": "we the two stories are sufficiently similar (decision YES). 
If the similarity is smaller than ) ) 1 0 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Link Detection", "sec_num": "4" }, { "text": "we the two stories are sufficiently different (decision NO). The Threshold ) ) 1 0 2 can be determined using labeled training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Link Detection", "sec_num": "4" }, { "text": "All TDT systems are evaluated by calculating a Detection Cost: are the conditional probabilities of a miss and a false alarm in the system output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "4 3 6 5 8 7 @ 9 \u00a2 P 1 A 7 8 9 \u00a2 P 1 A C B E D G F 5 I H Q P P 1 A H Q P P 1 A 0 R S 0 C B T D G F 5", "eq_num": "(10)" } ], "section": "Evaluation", "sec_num": "5" }, { "text": "and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A C B T D G F 5", "sec_num": null }, { "text": "A 0 R S 0 C B T D G F 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A C B T D G F 5", "sec_num": null }, { "text": "a the a priori target and non-target probabilities. They are set to 0.02 and 0.98 for LNK and NED. The detection cost is normalized such that a perfect system scores 0, and a random baseline scores 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A C B T D G F 5", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u00a3 4 3 6 5 \u00a7 G U R D 7 4 3 6 5 min \u00a3 I 7 @ 9 \u00a2 P \u00a3 A C B T D G F 5 C I H Q P P \u00a3 A 0 R S 0 C B T D G F 5 \u00a7", "eq_num": "(11)" } ], "section": "A C B T D G F 5", "sec_num": null }, { "text": "TDT evaluates all systems with a topic-weighted method: error probabilities are accumulated separately for each topic and then averaged. This is motivated by the different sizes of the topics. The evaluation yields two costs: the detection cost is the cost when using the actual decisions made by the system; the minimum detection cost is the cost when using the confidence scores that each system has to emit with each decision and selecting the optimal threshold based on the score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A C B T D G F 5", "sec_num": null }, { "text": "In the TDT-2002 evaluation, our Link Detection system was the best of three systems, yielding", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A C B T D G F 5", "sec_num": null }, { "text": "\u00a3 I )0 2 \u00a7 G U R D 7 \u00a1 G \u00a1 \u00a3 \u00a2 \u00a5 \u00a4 and a 7 \u00a7 \u00a6 \u00a3 I )0 2 \u00a7 G U R D 7 \u00a1 G \u00a1 4 \u00a9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A C B T D G F 5", "sec_num": null }, { "text": ". Our New Event Detection system was ranked second of four with costs of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A C B T D G F 5", "sec_num": null }, { "text": "\u00a3 0 5 0 \u00a7 G U R D 7 \u00a1 \u00a1 G and a 7 \u00a6 \u00a3 8 0 5 0 \u00a7 U R D 7 \u00a1 \u00a9 \u00a1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A C B T D G F 5", "sec_num": null }, { "text": "In this section, we draw on Information retrieval tools to analyze LNK and NED tasks. 
Motivated by the results of this analysis, we compare a number of techniques in the LNK and NED tasks in particular we compare the utility of two similarity measures, part-of-speech tagging, stop wording, and normalizing abbreviations and numerals. The comparisons were performed on corpora developed for TDT, including TDT2 and TDT3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences between LNK and NED", "sec_num": "6" }, { "text": "The conditions for false alarms and misses are reversed for LNK and NED tasks. In the LNK task, incorrectly flagging two stories as being on the same event is considered a false alarm. In contrast in the NED task, incorrectly flagging two stories as being on the same event will cause the true first story to be missed. Conversely, in LNK incorrectly labeling two stories that are on the same event as not linked is a miss, but in the NED task, incorrectly labeling two stories on the same event as not linked can result in a false alarm where a story is incorrectly identified as a new event. The detection cost in Eqn.10 which assigns a higher cost to false alarm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval and TDT", "sec_num": "6.1" }, { "text": "7 @ 9 # P A C B T D G F 5 \u00a1 \u00a1 4 B C H Q P P A 0 R S 0 C B T D G F 5 \u00a1 \u00a1 \u00a1 \u00a9 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval and TDT", "sec_num": "6.1" }, { "text": "A LNK system wants to minimize false alarms and to do this it should identify stories as being linked only if they are linked, which translates to high precision. In contrast a NED system, will minimize false alarms by identifying all stories that are linked which translates to high recall. Motivated by this discussion, we investigated the use of number of precision and recall enhancing techniques with the LNK and NED system. We investigated the use of the Clarity metric (Lavrenko et al., 2002) which was shown to correlate positively with precision. We investigated the use of part-of-speech tagging which was shown by Allan and Raghavan (Allan and Raghavan, 2002) to improve query clarity. In section 6.2.1 we will show how POS helps recall. We also investigated the use of expanded stop-list which improves precision. We also investigated normalizing abbreviations and transforming spelled out numbers into numbers. On the one hand the enhanced processing list includes most of the term in the ASR stop-list and removing these terms will improve precision. On the other hand normalizing these terms will have the same effect as stemming a recall enhancing device (Xu and Croft, 1998) , (Kraaij and Pohlmann, 1996) . In addition to these techniques, we also investigated the use of different similarity measures.", "cite_spans": [ { "start": 476, "end": 499, "text": "(Lavrenko et al., 2002)", "ref_id": "BIBREF10" }, { "start": 644, "end": 670, "text": "(Allan and Raghavan, 2002)", "ref_id": "BIBREF0" }, { "start": 1171, "end": 1191, "text": "(Xu and Croft, 1998)", "ref_id": "BIBREF14" }, { "start": 1194, "end": 1221, "text": "(Kraaij and Pohlmann, 1996)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval and TDT", "sec_num": "6.1" }, { "text": "The systems developed for TDT primarily use cosine similarity as the similarity measure. We have developed systems based on cosine similarity (Chen et al., 2003) . 
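For concreteness, the Hellinger similarity of equation (5) and the symmetric, clarity-adjusted similarity of equations (3)-(4) can be sketched as follows (a simplified illustration; it assumes already-normalized term distributions given as plain dictionaries and a general-English distribution ge, and the smoothing constant eps is our own choice, not taken from the paper):

    from math import sqrt, log

    def kl(p, q, eps=1e-10):
        # Kullback-Leibler divergence KL(p || q)
        return sum(pw * log(pw / (q.get(w, 0.0) + eps)) for w, pw in p.items() if pw > 0)

    def hellinger_sim(p, q):
        # equation (5): sum over shared terms of sqrt(p(w) * q(w))
        return sum(sqrt(pw * q[w]) for w, pw in p.items() if w in q)

    def clarity_sim(p, q, ge):
        # equation (3): symmetric KL between the two documents, credited
        # against each document's distance from general English
        return -kl(p, q) + kl(p, ge) - kl(q, p) + kl(q, ge)

    doc1 = {'crash': 0.6, 'plane': 0.4}
    doc2 = {'crash': 0.5, 'victims': 0.5}
    ge = {'crash': 0.1, 'plane': 0.1, 'victims': 0.1, 'the': 0.7}
    print(hellinger_sim(doc1, doc2), clarity_sim(doc1, doc2, ge))
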
In work on text segmentation, (Brants et al., 2002 ) observed that the system performance was much better when the Hellinger measure was used instead. In this work, we decided to use the clarity metric, a precision enhancing device (Croft et al., 2001) . For both our LNK and NED systems, we compared the performance of the systems using each of the similarity measures separately. Table 1 shows that for LNK, the system based on Clarity similarity performed better the system based on Hellinger similarity; in contrast, for NED, the system based on Hellinger similarity performed better. Figure 1 shows the cumulative density function for the Hellinger and Clarity similarities for on-topic (about the same event) and off-topic (about different events) pairs for the LNK task. While there are a number of statistics to measure the overall difference between tow cumulative distribution functions, we used the Kolmogorov-Smirnov distance (K-S distance; the largest difference between two cumulative distributions) for two reasons. First, the K-S distance is invariant under re-parametrization. Second, the significance of the K-S distance in case of the null hypothesis (data sets are drawn from same distribution) can be calculated (Press et al., 1993) . The K-S distance between the on-topic and off-topic similarities is larger for Clarity similarity (cf. table 2), indicating that it is the better metric for LNK. Figure 2 shows the cumulative distribution functions for Hellinger and Clarity similarities in the NED task. The plot is based on pairs that contain the current story and its most similar story in the story history. When the most similar story is on the same event (approx. 75% of the cases), its similarity is part of the on-topic distribution, otherwise (approx. 25% of the cases) it is plotted as off-topic. The K-S distance between the Hellinger on-topic and off-topic CDFs is larger than those for Clarity (cf. table 2). For both NED and LNK, we can reject the null hypothesis for both metrics with over 99.99 % confidence.", "cite_spans": [ { "start": 142, "end": 161, "text": "(Chen et al., 2003)", "ref_id": "BIBREF6" }, { "start": 194, "end": 214, "text": "(Brants et al., 2002", "ref_id": "BIBREF4" }, { "start": 396, "end": 416, "text": "(Croft et al., 2001)", "ref_id": "BIBREF7" }, { "start": 1397, "end": 1417, "text": "(Press et al., 1993)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 546, "end": 553, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 753, "end": 761, "text": "Figure 1", "ref_id": "FIGREF2" }, { "start": 1582, "end": 1590, "text": "Figure 2", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Similarity Measures", "sec_num": "6.2" }, { "text": "\u00a1 \u00a1 \u00a2 4 \u00a1 ( f B \u00a2 \u00a1 ) NED 0.5353 0.6055 \u00a1 \u00a1 \u00a4 \u00a1 4 ( G B G \u00a2 \u00a3 \u00a1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Measures", "sec_num": "6.2" }, { "text": "To get the high precision required for LNK system, we need to have a large separation between the on-topic and off-topic distributions. Examining Figure 1 and Table 2 , indicates that the Clarity metric has a larger separation than the Hellinger metric. At high recall required by NED system (low CDF values for on-topic), there is a greater separation with the Hellinger metric. 
For example, at 10% recall, the Hellinger metric has 71 % false alarm rate as compared to 75 % for the Clarity metric.", "cite_spans": [], "ref_spans": [ { "start": 146, "end": 166, "text": "Figure 1 and Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Similarity Measures", "sec_num": "6.2" }, { "text": "We explored the idea that noting the part-ofspeech of the terms in a document may help to reduce confusion among some of the senses of a word. During pre-processing, we tagged the terms as one of five categories: adjective, noun, proper nouns, verb, or other. A \"tagged term\" was then created by combining the stem and part-of-speech. For example, 'N train' represents the term 'train' when used as a noun, and 'V train' represents the term 'train' when used as a verb. We then ran our NED and LNK systems using the tagged terms. The systems were tested in the Dry Run 2002 TDT data. A comparison of the performance of the systems when part-of-speech is used against a baseline sys- Table 3 . For Story Link Detection, performance decreases by 38.3%, while for New Event Detection, performance improves by 8.3%. Since POS tagging helps differentiates between the different senses of the same root, it also reduces the number of matching terms between two documents. In the LNK task for example, the total number of matches drops from 177,550 to 151,132. This has the effect of placing a higher weight on terms that match, i.e. terms that have the same sense and for the TDT corpus will increase recall and decrease. Consider for example matching \"food server to \"food service\" and \"java server\". When using POS both terms will have the same similarity to the query and the use of POS will retrieve the relevant documents but will also retrieve other documents that share the same sense.", "cite_spans": [], "ref_spans": [ { "start": 683, "end": 690, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Part-of-Speech (PoS) Tagging", "sec_num": "6.2.1" }, { "text": "A large portion of the documents in the TDT collection has been automatically transcribed using Automatic Speech Recognition (ASR) systems which can achieve over 95% accuracies. However, some of the words not recognized by the ASR tend to be very informative words that can significantly impact the detection performance (Allan et al., 1999) . Furthermore, there are systematic differences between ASR and manually transcribed text, e.g., numbers are often spelled out thus \"30\" will be spelled out \"thirty\". Another situation where ASR is different from transcribed text is abbreviations, e.g. ASR system will recognize 'CNN\" as three separate tokens \"C\", \"N\", and \"N\".", "cite_spans": [ { "start": 321, "end": 341, "text": "(Allan et al., 1999)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Stop Words", "sec_num": "6.2.2" }, { "text": "In order to account for these differences, we identified the set of tokens that are problematic for ASR. 
Our approach was to identify a parallel corpus of manually and automatically transcribed documents, the TDT2 corpus, and then use a statistical approach (Dunning, 1993) to identify tokens with significantly In (Chen et al., 2003) we investigated normalizing abbreviations and transforming spelled-out numbers into numerals, \"enhanced preprocessing\", and then compared this approach with using an \"ASR stoplist\".", "cite_spans": [ { "start": 258, "end": 273, "text": "(Dunning, 1993)", "ref_id": "BIBREF8" }, { "start": 315, "end": 334, "text": "(Chen et al., 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Stop Words", "sec_num": "6.2.2" }, { "text": "The previous two sections examined the impact of four different techniques on the performance of LNK and NED systems. The Part-of-speech is a recall enhancing devices while the ASR stop-list is a precision enhancing device. The enhanced preprocessing improves precision and recall. The results which are summarized in Table 5 indicate that precision enhancing devices improved the performance of the LNK task while recall enhancing devices improved the NED task.", "cite_spans": [], "ref_spans": [ { "start": 318, "end": 325, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Impact of Recall and Precision", "sec_num": "6.2.3" }, { "text": "In the extreme case, a perfect link detection system performs perfectly on the NED task. We gave empirical evidence that there is not necessarily such a correlation at lower accuracies. These findings are in accordance with the results reported in (Allan et al., 2000b) for topic tracking and first story detection.", "cite_spans": [ { "start": 248, "end": 269, "text": "(Allan et al., 2000b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Final Remarks on Differences", "sec_num": "6.3" }, { "text": "To test the impact of the cost function on the performance of LNK and NED systems, we repeated the evaluation with 4 7 8 9 # and \u00a1 B both set to 1, and we found that the difference between the two re- Table 6 : Topic-weighted minimum normalized detection cost for NED when using parameter settings that are best for NED (1) and those that are best for LNK (2). Columns (3) and (4) show the detection costs using uniform costs for misses and false alarms.", "cite_spans": [], "ref_spans": [ { "start": 201, "end": 208, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Final Remarks on Differences", "sec_num": "6.3" }, { "text": "(1) PoS, ASRstop) is better at precision (identifying different-event stories).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Remarks on Differences", "sec_num": "6.3" }, { "text": "In addition to the different costs assigned to misses and false alarms, there is a difference in the number of positives and negatives in the data set (the TDT cost function uses", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Remarks on Differences", "sec_num": "6.3" }, { "text": "\u00a2 C B T D G F 5 \u00a1 \u00a1 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Remarks on Differences", "sec_num": "6.3" }, { "text": "). 
This might explain part of the remaining difference of 14.73%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Remarks on Differences", "sec_num": "6.3" }, { "text": "Another view on the differences is that a NED system must perform very well on the higher penalized first stories when it does not have any training data for the new event, event though it may perform worse on follow-up stories. A LNK system, however, can afford to perform worse on the first story if it compensates by performing well on follow-up stories (because here not flagged follow-up stories are considered misses and thus higher penalized than in NED). This view explains the benefits of using partof-speech information and the negative effect of the ASR stop-list on NED : different part-of-speech tags help discriminate new events from old events; removing words by using the ASR stoplist makes it harder to discriminate new events. We conjecture that the Hellinger metric helps improve recall, and in a study similar to (Allan et al., 2000b) we plan to further evaluate the impact of the Hellinger metric on a closed collection e.g. TREC.", "cite_spans": [ { "start": 833, "end": 854, "text": "(Allan et al., 2000b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Final Remarks on Differences", "sec_num": "6.3" }, { "text": "We have compared the effect of several techniques on the performance of a story link detection system and a new event detection system. Although many of the processing techniques used by our systems are the same, a number of core technologies affect the performance of the LNK and NED systems differently. The Clarity similarity measure was more effective for LNK, Hellinger similarity measure was more effective for NED, part-of-speech was more useful for NED, and stop-list adjustment was more useful for LNK. These differences may be due in part to a reversal in the tasks: a miss in LNK means the system does not flag two stories as being on the same event when they actually are, while a miss in NED means the system does flag two stories as being on the same event when actually they are not. In future work, we plan to evaluate the impact of the Hellinger metric on recall. In addition, we plan to use Anaphora resolution which was shown to improve recall (Pirkola and Jrvelin, 1996) to enhance the NED system.", "cite_spans": [ { "start": 963, "end": 990, "text": "(Pirkola and Jrvelin, 1996)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "7" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Using part-ofspeech patterns to reduce query ambiguity", "authors": [ { "first": "James", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Hema", "middle": [], "last": "Raghavan", "suffix": "" } ], "year": 2002, "venue": "ACM SIGIR2002", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Allan and Hema Raghavan. 2002. Using part-of- speech patterns to reduce query ambiguity. In ACM SIGIR2002, Tampere, Finland.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Topic-based novelty detection. 
Summer workshop final report", "authors": [ { "first": "James", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Hubert", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Rajman", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Wayne", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Allan, Hubert Jin, Martin Rajman, Charles Wayne, and et. al. 1999. Topic-based novelty detection. Sum- mer workshop final report, Center for Language and Speech Processing, Johns Hopkins University.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Detections, bounds, and timelines: Umass and tdt-3", "authors": [ { "first": "J", "middle": [], "last": "Allan", "suffix": "" }, { "first": "V", "middle": [], "last": "Lavrenko", "suffix": "" }, { "first": "D", "middle": [], "last": "Malin", "suffix": "" }, { "first": "R", "middle": [], "last": "Swan", "suffix": "" } ], "year": 2000, "venue": "Proceedings of Topic Detection and Tracking Workshop (TDT-3)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Allan, V. Lavrenko, D. Malin, and R. Swan. 2000a. Detections, bounds, and timelines: Umass and tdt-3. In Proceedings of Topic Detection and Tracking Work- shop (TDT-3), Vienna, VA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "First story detection in TDT is hard", "authors": [ { "first": "James", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lavrenko", "suffix": "" }, { "first": "Hubert", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2000, "venue": "CIKM", "volume": "", "issue": "", "pages": "374--381", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Allan, Victor Lavrenko, and Hubert Jin. 2000b. First story detection in TDT is hard. In CIKM, pages 374-381.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Topic-based document segmentation with probabilistic latent semantic analysis", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Francine", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Tsochantaridis", "suffix": "" } ], "year": 2002, "venue": "International Conference on Information and Knowledge Management (CIKM)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants, Francine Chen, and Ioannis Tsochan- taridis. 2002. Topic-based document segmentation with probabilistic latent semantic analysis. In Inter- national Conference on Information and Knowledge Management (CIKM), McLean, VA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Cmu tdt report. Slides at the TDT-2001 meeting", "authors": [ { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Chun", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaime Carbonell, Yiming Yang, Ralf Brown, Chun Jin, and Jian Zhang. 2001. Cmu tdt report. 
Slides at the TDT-2001 meeting, CMU.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Story link detection and new event detection are asymmetric", "authors": [ { "first": "Francine", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ayman", "middle": [], "last": "Farahat", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" } ], "year": 2003, "venue": "Proceedings of NAACL-HLT-2002", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francine Chen, Ayman Farahat, and Thorsten Brants. 2003. Story link detection and new event detection are asymmetric. In Proceedings of NAACL-HLT-2002, Edmonton, AL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Relevance feedback and personalization: A language modeling perspective", "authors": [ { "first": "W", "middle": [], "last": "", "suffix": "" }, { "first": "Bruce", "middle": [], "last": "Croft", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Cronen-Townsend", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Larvrenko", "suffix": "" } ], "year": 2001, "venue": "DE-LOS Workshop: Personalisation and Recommender Systems in Digital Libraries", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Bruce Croft, Stephen Cronen-Townsend, and Victor Larvrenko. 2001. Relevance feedback and person- alization: A language modeling perspective. In DE- LOS Workshop: Personalisation and Recommender Systems in Digital Libraries.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Accurate methods for the statistics of surprise and coincidence", "authors": [ { "first": "Ted", "middle": [ "E" ], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted E. Dunning. 1993. Accurate methods for the statis- tics of surprise and coincidence. Computational Lin- guistics, 19(1):61-74.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Viewing stemming as recall enhancement", "authors": [ { "first": "Wessel", "middle": [], "last": "Kraaij", "suffix": "" }, { "first": "Renee", "middle": [], "last": "Pohlmann", "suffix": "" } ], "year": 1996, "venue": "ACM SIGIR1996", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wessel Kraaij and Renee Pohlmann. 1996. Viewing stemming as recall enhancement. In ACM SIGIR1996.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Relevance models for topic detection and tracking", "authors": [ { "first": "Victor", "middle": [], "last": "Lavrenko", "suffix": "" }, { "first": "James", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Deguzman", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Laflamme", "suffix": "" }, { "first": "Veera", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Thomas", "suffix": "" } ], "year": 2002, "venue": "Proceedings of HLT-2002", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Lavrenko, James Allan, Edward DeGuzman, Daniel LaFlamme, Veera Pollard, and Stephen Thomas. 2002. Relevance models for topic detection and tracking. 
In Proceedings of HLT-2002, San Diego, CA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The effect of anaphora and ellipsis resolution on proximity searching in a text database", "authors": [ { "first": "A", "middle": [], "last": "Pirkola", "suffix": "" }, { "first": "K", "middle": [], "last": "Jrvelin", "suffix": "" } ], "year": 1996, "venue": "Information Processing and Management", "volume": "32", "issue": "2", "pages": "199--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Pirkola and K. Jrvelin. 1996. The effect of anaphora and ellipsis resolution on proximity searching in a text database. Information Processing and Management, 32(2):199-216.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Numerical Recipes", "authors": [ { "first": "H", "middle": [], "last": "William", "suffix": "" }, { "first": "Saul", "middle": [ "A" ], "last": "Press", "suffix": "" }, { "first": "William", "middle": [], "last": "Teukolsky", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Vetterling", "suffix": "" }, { "first": "", "middle": [], "last": "Flannery", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William H. Press, Saul A. Teukolsky, William Vetterling, and Brian Flannery. 1993. Numerical Recipes. Cam- bridge Unv. Press.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Multilingual topic detection and tracking: Successful research enabled by corpora and evaluation", "authors": [ { "first": "Charles", "middle": [], "last": "Wayne", "suffix": "" } ], "year": 2000, "venue": "Language Resources and Evaluation Conference (LREC)", "volume": "", "issue": "", "pages": "1487--1494", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Wayne. 2000. Multilingual topic detection and tracking: Successful research enabled by corpora and evaluation. In Language Resources and Evalu- ation Conference (LREC), pages 1487-1494, Athens, Greece.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Corpus-based stemming using cooccurrence of word variants", "authors": [ { "first": "Jinxi", "middle": [], "last": "Xu", "suffix": "" }, { "first": "W. Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 1998, "venue": "ACM Transactions on Information Systems", "volume": "16", "issue": "1", "pages": "61--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinxi Xu and W. Bruce Croft. 1998. Corpus-based stemming using cooccurrence of word variants. ACM Transactions on Information Systems, 16(1):61-81.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A study on retrospective and on-line event detection", "authors": [ { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Pierce", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 1998, "venue": "Proceedings of SIGIR-98", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiming Yang, Tom Pierce, and Jaime Carbonell. 1998. A study on retrospective and on-line event detection. 
In Proceedings of SIGIR-98, Melbourne, Australia.", "links": null } }, "ref_entries": { "FIGREF2": { "text": "CDF for Clarity and Hellinger similarity on the LNK task for on-topic and off-topic pairs.", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "CDF for Clarity and Hellinger similarity on the NED task for on-topic and off-topic pairs.", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "content": "
Effect of different similarity measures on topic-weighted minimum normalized detection costs for LNK and NED on the TDT 2002 dry run data.
System | Clarity | Hellinger | Diff    | % Chg
LNK    | 0.3054  | 0.3777    | -0.0597 | -19.2
NED    | 0.8419  | 0.5873    | +0.2546 | +30.24
", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF2": { "content": "
K-S distance between on-topic and off-topic story pairs.
System | Clarity | Hellinger | Change (%)
LNK    | 0.7680  | 0.7251    |
", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF3": { "content": "
System | w/o PoS | PoS    | Change (%)
LNK    | 0.3054  | 0.4224 | -0.117 (-38.3%)
NED    | 0.6403  | 0.5873 | +0.0530 (+8.3%)
", "type_str": "table", "html": null, "num": null, "text": "Effect of using part-of-speech on minimum normalized detection costs for LNK and NED on the TDT 2002 dry run data. System" }, "TABREF4": { "content": "
Comparison of using an \"ASR stop-list\" and \"enhanced preprocessing\" for handling ASR differences.
System | No ASR stop (Std Preproc) | ASR stop (Std Preproc)
LNK    | 0.3153                    | 0.3054
NED    | 0.6062                    | 0.6407
tem when part-of-speech is not used is shown in
", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF5": { "content": "
Impact of recall and precision enhancing devices.
Device   | Impact    | LNK     | NED
ASR stop | precision | +3.1%   | -5.5%
POS      | recall    | -38.8%  | +8.3%
Clarity  | precision | +19%    | -30%
different distributions in the two corpora. We compiled the problematic ASR terms into an \"ASR stop-list\". This list was primarily composed of spelled-out numbers, numerals, and a few other terms. Table 4 shows the topic-weighted minimum detection costs for LNK and NED on the TDT 2002 dry run data. The table shows results for standard preprocessing without an ASR stop-list and with an ASR stop-list. For Link Detection, the ASR stop-list improves results, while the same list decreases performance for New Event Detection.
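As a rough sketch of the kind of statistic involved (hypothetical counts and cut-off; not necessarily the exact procedure used), the log-likelihood ratio of (Dunning, 1993) compares a term's rate in the ASR portion of the corpus with its rate in the manual transcripts, and terms with a large score are candidates for the ASR stop-list:

    from math import log

    def llr(k1, n1, k2, n2):
        # G-squared statistic for a term seen k1 times in n1 ASR tokens
        # and k2 times in n2 manually transcribed tokens (assumes 0 < k < n)
        def ll(k, n, p):
            return k * log(p) + (n - k) * log(1.0 - p)
        p = (k1 + k2) / float(n1 + n2)
        return 2.0 * (ll(k1, n1, k1 / float(n1)) + ll(k2, n2, k2 / float(n2))
                      - ll(k1, n1, p) - ll(k2, n2, p))

    # e.g. 'thirty' is frequent in ASR output but rare in manual transcripts
    print(llr(900, 100000, 50, 120000))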
", "type_str": "table", "html": null, "num": null, "text": "" } } } }