{ "paper_id": "I08-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:41:39.091487Z" }, "title": "Identifying Cross-Document Relations between Sentences", "authors": [ { "first": "Yasunari", "middle": [], "last": "Miyabe", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": { "country": "Japan" } }, "email": "miyabe@lr.pi.titech.ac.jp" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "", "affiliation": { "laboratory": "Precision and Intelligence Laboratory", "institution": "Tokyo Institute of Technology", "location": { "country": "Japan" } }, "email": "takamura@pi.titech.ac.jp" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "", "affiliation": { "laboratory": "Precision and Intelligence Laboratory", "institution": "Tokyo Institute of Technology", "location": { "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A pair of sentences in different newspaper articles on an event can have one of several relations. Of these, we have focused on two, i.e., equivalence and transition. Equivalence is the relation between two sentences that have the same information on an event. Transition is the relation between two sentences that have the same information except for values of numeric attributes. We propose methods of identifying these relations. We first split a dataset consisting of pairs of sentences into clusters according to their similarities, and then construct a classifier for each cluster to identify equivalence relations. We also adopt a \"coarse-to-fine\" approach. We further propose using the identified equivalence relations to address the task of identifying transition relations.", "pdf_parse": { "paper_id": "I08-1019", "_pdf_hash": "", "abstract": [ { "text": "A pair of sentences in different newspaper articles on an event can have one of several relations. 
Of these, we have focused on two, i.e., equivalence and transition. Equivalence is the relation between two sentences that have the same information on an event. Transition is the relation between two sentences that have the same information except for values of numeric attributes. We propose methods of identifying these relations. We first split a dataset consisting of pairs of sentences into clusters according to their similarities, and then construct a classifier for each cluster to identify equivalence relations. We also adopt a \"coarse-to-fine\" approach. We further propose using the identified equivalence relations to address the task of identifying transition relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A document generally consists of semantic units called sentences and various relations hold between them. The analysis of the structure of a document by identifying the relations between sentences is called discourse analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The discourse structure of one document has been the target of the traditional discourse analysis (Marcu, 2000; Marcu and Echihabi, 2002; Yokoyama et al., 2003) , based on rhetorical structure theory (RST) (Mann and Thompson, 1987) . 
\u00a7 Yasunari Miyabe currently works at Toshiba Solutions Corporation.", "cite_spans": [ { "start": 98, "end": 111, "text": "(Marcu, 2000;", "ref_id": "BIBREF10" }, { "start": 112, "end": 137, "text": "Marcu and Echihabi, 2002;", "ref_id": "BIBREF9" }, { "start": 138, "end": 160, "text": "Yokoyama et al., 2003)", "ref_id": "BIBREF18" }, { "start": 206, "end": 231, "text": "(Mann and Thompson, 1987)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Inspired by RST, Radev (2000) proposed the cross-document structure theory (CST) for multidocument analysis, such as multi-document summarization, and topic detection and tracking. CST takes the structure of a set of related documents into account. Radev defined relations that hold between sentences across the documents on an event (e.g., an earthquake or a traffic accident).", "cite_spans": [ { "start": 17, "end": 29, "text": "Radev (2000)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Radev presented a taxonomy of cross-document relations, consisting of 24 types. In Japanese, Etoh et al. (2005) redefined 14 CST types based on Radev's taxonomy. For example, a pair of sentences with an \"equivalence relation\" (EQ) has the same information on an event. EQ can be considered to correspond to the identity and equivalence relations in Radev's taxonomy. A sentence pair with a \"transition relation\" (TR) contains the same numeric attributes with different values. TR roughly corresponds to the follow-up and fulfilment relations in Radev's taxonomy. We will provide examples of CST relations:", "cite_spans": [ { "start": 93, "end": 111, "text": "Etoh et al. (2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. 
ABC telephone company announced on the 9th that the number of users of its mobile-phone service had reached one million. Users can access the Internet, reserve train tickets, and make phone calls through this service.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. ABC said on the 18th that the number of users of its mobile-phone service had reached 1,500,000. This service includes Internet access and enables train-ticket reservations and telephone calls.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The pair of the first sentence in 1 and the first sentence in 2 is in TR, because the number of users has changed from one million to 1.5 million, while other things remain unchanged. The pair of the second sentence in 1 and the second sentence in 2 is in EQ, because these two sentences have the same information. Identification of CST relations has attracted more attention since the study of multi-document discourse emerged. Identified CST types are helpful in various applications such as multi-document summarization and information extraction. For example, EQ is useful for detecting and eliminating redundant information in multi-document summarization. TR can be used to visualize time-series trends.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on the two relations EQ and TR in the Japanese CST taxonomy, and present methods for their identification. For the identification of EQ pairs, we first split a dataset consisting of sentence pairs into clusters according to their similarities, and then construct a classifier for each cluster. In addition, we adopt a coarse-to-fine approach, in which a more general (coarse) class is first identified before the target fine class (EQ). 
For the identification of TR pairs, we use variable noun phrases (VNPs), which are defined as noun phrases representing a variable with a number as its value (e.g., stock prices and population). Hatzivassiloglou et al. (1999) proposed a method based on supervised machine learning to identify whether two paragraphs contain similar information. However, we found it was difficult to accurately identify EQ pairs between two sentences simply by using similarities as features. Zhang et al. (2003) presented a method of classifying CST relations between sentence pairs. However, their method used the same features for every type of CST, resulting in low recall and precision. We thus select better features for each CST type, and for each cluster of EQ.", "cite_spans": [ { "start": 641, "end": 671, "text": "Hatzivassiloglou et al. (1999)", "ref_id": "BIBREF4" }, { "start": 922, "end": 941, "text": "Zhang et al. (2003)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The EQ identification task is apparently related to Textual Entailment task (Dagan et al., 2005) . Entailment is asymmetrical while EQ is symmetrical, in the sense that if a sentence entails and is entailed by another sentence, then this sentence pair is in EQ. However, in EQ identification, we usually need to find EQ pairs from an extremely biased dataset of sentence pairs, most of which have no relation at all.", "cite_spans": [ { "start": 76, "end": 96, "text": "(Dagan et al., 2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "This section explains a method of identifying EQ pairs. We regarded the identification of a CST relation as a standard binary classification task. Given a pair of sentences that are from two different but related documents, we determine whether the pair is in EQ or not. 
We use Support Vector Machines (SVMs) (Vapnik, 1998) as a supervised classifier. Note that one instance consists of a pair of sentences; the similarity value between the two sentences is therefore assigned to a single instance, not to two.", "cite_spans": [ { "start": 309, "end": 323, "text": "(Vapnik, 1998)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Identification of EQ pairs", "sec_num": "3" }, { "text": "Although some pairs in EQ have quite high similarity values, others do not. Simultaneously using both of these two types of pairs for training will adversely affect the accuracy of classification. Therefore, we propose splitting the dataset first according to similarities of pairs, and then constructing a classifier for each cluster (sub-dataset). We call this method clusterwise classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clusterwise Classification", "sec_num": "3.1" }, { "text": "We use the following cosine similarity between two sentences (s 1 , s 2 ):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clusterwise Classification", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "cos(s1, s2) = u1 \u2022 u2/|u1||u2|,", "eq_num": "(1)" } ], "section": "Clusterwise Classification", "sec_num": "3.1" }, { "text": "where u 1 and u 2 denote the frequency vectors of content words (nouns, verbs, adjectives) for respective s 1 and s 2 . The distribution of the sentence pairs according to the cosine measure is summarized in Table 1 . From the table, we can see a large difference in the distributions of EQ and no-relation pairs. This difference suggests that the clusterwise classification approach is reasonable. We split the dataset into three clusters: high-similarity cluster, intermediate-similarity cluster, and low-similarity cluster. 
Intuitively, we expected that a pair in the high-similarity cluster would have many common bigrams, that a pair in the intermediate-similarity cluster would have many common unigrams but few common bigrams, and that a pair in the low-similarity cluster would have few common unigrams or bigrams.", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 215, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Clusterwise Classification", "sec_num": "3.1" }, { "text": "The number of sentence pairs in EQ in the intermediate- or low-similarity clusters is much smaller than the total number of sentence pairs, as shown in Table 1.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 159, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Two-Stage Identification Method", "sec_num": "3.2" }, { "text": "Figure 1: Method of identifying EQ pairs. These two clusters also contain many pairs that belong to a \"summary\" and a \"refinement\" relation, which are very much akin to EQ. This may cause difficulties in identifying EQ pairs. We gave a generic name, GEN(general)-EQ, to the union of EQ, \"summary\", and \"refinement\" relations. For pairs in the intermediate- or low-similarity clusters, we propose a two-stage method using GEN-EQ on the basis of the above observations, which first identifies GEN-EQ pairs between sentences, and then identifies EQ pairs from GEN-EQ pairs. This two-stage method can be regarded as a coarse-to-fine approach (Vanderburg and Rosenfeld, 1977; Rosenfeld and Vanderbrug, 1977) , which first identifies a coarse class and then finds the target fine class. 
We used the coarse-to-fine approach on top of the clusterwise classification method as in Fig. 1 .", "cite_spans": [ { "start": 705, "end": 737, "text": "(Vanderburg and Rosenfeld, 1977;", "ref_id": "BIBREF16" }, { "start": 738, "end": 769, "text": "Rosenfeld and Vanderbrug, 1977)", "ref_id": null } ], "ref_spans": [ { "start": 101, "end": 108, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 938, "end": 944, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Two-Stage Identification Method", "sec_num": "3.2" }, { "text": "There are far fewer EQ pairs than pairs with no relation. This coarse-to-fine approach will reduce this bias, since GEN-EQ pairs outnumber EQ pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Stage Identification Method", "sec_num": "3.2" }, { "text": "Instances (i.e., pairs of sentences) are represented as binary vectors. Numeric features ranging from 0.0 to 1.0 are discretized and represented by 10 binary features (e.g., a feature value of 0.65 is transformed into the vector 0000001000). Let us first explain the basic features used in all clusters. We will then explain other features that are specific to a cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for identifying EQ pairs", "sec_num": "3.3" }, { "text": "1. Cosine similarity measures: We use unigram, bigram, trigram, and bunsetsu-chunk 1 similarities at the sentence level, and unigram similarities at the paragraph and the document levels. These similarities are calculated by replacing u 1 and u 2 in Eq. 1 with the frequency vectors of each level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic features", "sec_num": "3.3.1" }, { "text": "2. 
Normalized lengths of sentences: Given an instance of sentence pair s 1 and s 2 , we can define features normL(s 1 ) and normL(s 2 ), which represent (normalized) lengths of sentences, as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic features", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "normL(s) = len(s)/EventMax(s),", "eq_num": "(2)" } ], "section": "Basic features", "sec_num": "3.3.1" }, { "text": "where len(s) is the number of characters in s. EventMax(s) is max_{s' \u2208 event(s)} len(s'), where event(s) is the set of sentences in the event that doc(s) describes. doc(s) is the document containing s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic features", "sec_num": "3.3.1" }, { "text": "3. Difference in publication dates: This feature depends on the interval between the publication dates of doc(s 1 ) and doc(s 2 ) and is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic features", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "DateDiff(s1, s2) = 1 \u2212 |Date(s1) \u2212 Date(s2)| / EventSpan(s1, s2),", "eq_num": "(3)" } ], "section": "Basic features", "sec_num": "3.3.1" }, { "text": "where Date(s) is the publication date of an article containing s, and EventSpan(s 1 , s 2 ) is the time span of the event, i.e., the difference between the publication dates of the first and the last articles on the same event. For example, if doc(s 1 ) is published on 1/15/99 and doc(s 2 ) on 1/17/99, and if the time span of the event ranges from 1/1/99 to 1/21/99, then the feature value is 1 \u2212 2/20 = 0.9. 
This feature is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic features", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Posit(s) = lenBef(s)/len(doc(s)),", "eq_num": "(4)" } ], "section": "Positions of sentences in documents (Edmundson, 1969)", "sec_num": "4." }, { "text": "where lenBef(s) is the number of characters before s in the document, and len(doc(s)) is the total number of characters in doc(s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positions of sentences in documents (Edmundson, 1969)", "sec_num": "4." }, { "text": "5. Semantic similarities: This feature is measured by Eq. (1) with u 1 and u 2 being the frequency vectors of semantic classes of nouns, verbs, and adjectives. We used the semantic classes in a Japanese thesaurus called 'Goi-taikei' (Ikehara et al., 1997) .", "cite_spans": [ { "start": 233, "end": 255, "text": "(Ikehara et al., 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Positions of sentences in documents (Edmundson, 1969)", "sec_num": "4." }, { "text": "6. Conjunction (Yokoyama et al., 2003) : Each of 55 conjunctions corresponds to one feature. If a conjunction appears at the beginning of the sentence, the feature value is 1, otherwise 0. 9. Types of named entities with particle: This feature represents the occurrence of types of named entities accompanied by a case marker (particle). We used 11 different case markers.", "cite_spans": [ { "start": 15, "end": 38, "text": "(Yokoyama et al., 2003)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Positions of sentences in documents (Edmundson, 1969)", "sec_num": "4." 
}, { "text": "We will next explain additional features used in identifying EQ pairs from GEN-EQ pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features to identify fine class", "sec_num": "3.3.2" }, { "text": "These features represent the closeness of the numbers of words and bunsetsu-chunks in the two sentences. The first feature is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Numbers of words (morphemes) and phrases:", "sec_num": "1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "NumW(s1, s2) = 1 \u2212 |frqW(s1) \u2212 frqW(s2)| / max(frqW(s1), frqW(s2)),", "eq_num": "(5)" } ], "section": "Numbers of words (morphemes) and phrases:", "sec_num": "1." }, { "text": "where frqW(s) indicates the number of words in s. Similarly, NumP(s 1 , s 2 ) is obtained by replacing frqW in Eq. 5 Hatayama (2001) .", "cite_spans": [ { "start": 117, "end": 132, "text": "Hatayama (2001)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Numbers of words (morphemes) and phrases:", "sec_num": "1." }, { "text": "3. Salient words: This feature indicates whether the salient words of the two sentences are the same or not. We approximate the salient word with the ga- or the wa-case word that appears first. 4. Numeric expressions and units (Nanba et al., 2005) : The first feature indicates whether the two sentences share a numeric expression or not. The second feature is similarly defined for numeric units.", "cite_spans": [ { "start": 227, "end": 247, "text": "(Nanba et al., 2005)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Numbers of words (morphemes) and phrases:", "sec_num": "1." 
}, { "text": "We used the Text Summarization Challenge (TSC) 2 and 3 corpora and the Workshop on Multimodal Summarization for Trend Information (Must) corpus (Kato et al., 2005) . These two corpora contained 115 sets of related news articles (10 documents per set on average) on various events. A document contained 9.9 sentences on average. Etoh et al. (2005) annotated these two corpora with CST types. There were 471,586 pairs of sentences and 798 of these pairs had EQ. We conducted the experiments with 10-fold cross-validation (i.e., approximately 425,000 pairs on average, out of which approximately 700 pairs are in EQ, are in the training dataset for each fold). The average, maximum, and minimum lengths of the sentences in the whole dataset are shown in Table 2 . We used precision, recall, and F-measure as evaluation measures. We used a Japanese morphological analyzer ChaSen 3 to extract parts-of-speech, and a dependency analyzer CaboCha 4 to extract bunsetsu-chunks.", "cite_spans": [ { "start": 144, "end": 163, "text": "(Kato et al., 2005)", "ref_id": "BIBREF7" }, { "start": 328, "end": 346, "text": "Etoh et al. (2005)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 751, "end": 758, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiments on identifying EQ pairs", "sec_num": "4" }, { "text": "We split the set of sentence pairs into clusters according to their similarities in identifying EQ pairs as explained. We used 10-fold cross-validation again within the training data (i.e., the approximately 425,000 pairs above are split into a temporary training dataset and a temporary test dataset 10 times) to estimate the threshold to split the set, to select the best feature set, and to determine the degree of the polynomial kernel function and the value for soft-margin parameter C in SVMs. 
No test instances are used in the estimation of these parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation of threshold", "sec_num": "4.1" }, { "text": "We will first explain how to estimate the threshold between the high- and intermediate-similarity clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Threshold between high- and intermediate-similarity clusters", "sec_num": "4.1.1" }, { "text": "We expected that a pair in the high-similarity cluster would have many common bigrams, and that a pair in the intermediate-similarity cluster would have many common unigrams but few common bigrams. We therefore assumed that bigram similarity would be ineffective in the intermediate-similarity cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Threshold between high- and intermediate-similarity clusters", "sec_num": "4.1.1" }, { "text": "We determined the threshold in the following way for each fold of cross-validation. We decreased the threshold by 0.01 from 1.0. We carried out 10-fold cross-validation within the training data, excluding one of the 14 features (6 cosine similarities and other basic features) for each value of the threshold. If the exclusion of a feature type deteriorates both average precision and recall obtained by the cross-validation within the training data, we call it ineffective. We set the threshold to the minimum value for which bigram similarity is not ineffective. We obtain a threshold value for each fold of cross-validation. The average value of the threshold was 0.87. 4 http://chasen.naist.jp/\u02dctaku/software/cabocha/ As an example, we show the ineffective feature types obtained for one fold of cross-validation ( Table 3 ). 
The threshold was set to 0.90 in this fold.", "cite_spans": [ { "start": 668, "end": 669, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 822, "end": 829, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Threshold between high- and intermediate-similarity clusters", "sec_num": "4.1.1" }, { "text": "We will next explain how to estimate the threshold between the intermediate- and low-similarity clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Threshold between intermediate- and low-similarity clusters", "sec_num": "4.1.2" }, { "text": "There are numerous no-relation pairs among low-similarity pairs. We expected that this imbalance would adversely affect classification. We therefore simply attempted to exclude low-similarity pairs. We decreased the threshold by 0.01 from the threshold between the high- and intermediate-similarity clusters. We chose a value that yielded the best average F-measure calculated by the cross-validation within the training data. The average value of the threshold was 0.57. Table 4 is an example of thresholds and F-measures for one fold.", "cite_spans": [], "ref_spans": [ { "start": 461, "end": 468, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Threshold between intermediate- and low-similarity clusters", "sec_num": "4.1.2" }, { "text": "The results of EQ identification are shown in Table 5. We tested the following models: Bow-cos: This is the simplest baseline we used. We represented sentences with the bag-of-words model. Instances with the cosine similarity in Eq. (1) larger than a threshold were classified as EQ. The threshold that yielded the best F-measure in the test data was chosen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of identifying EQ pairs", "sec_num": "4.2" }, { "text": "Non-Clusterwise: This is a supervised method without the clusterwise approach. One classifier was constructed regardless of the similarity of the instance. 
We used the second-degree polynomial kernel. Soft margin parameter C was set to 0.01.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of identifying EQ pairs", "sec_num": "4.2" }, { "text": "Clusterwise: This is a clusterwise method without the coarse-to-fine approach. The second-degree polynomial kernel was used. Soft margin parameter C was set to 0.1 for the high-similarity cluster and 0.01 for the other clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of identifying EQ pairs", "sec_num": "4.2" }, { "text": "ClusterC2F: This is our model, which integrates clusterwise classification with the coarse-to-fine approach (Figure 1 ). Table 5 shows that ClusterC2F yielded the best F-measure regardless of the presence of additional features. The difference between ClusterC2F and the others was statistically significant according to the Wilcoxon signed rank sum test at the 5% significance level.", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 117, "text": "(Figure 1", "ref_id": null }, { "start": 121, "end": 128, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results of identifying EQ pairs", "sec_num": "4.2" }, { "text": "We examined the results for each cluster. The results with basic features are summarized in Table 6 and those with basic features plus additional features are in Table 7 . The tables show that there are no significant differences among the models for the high-similarity cluster. However, there are significant differences for the intermediate-similarity cluster. We thus concluded that the proposed model (ClusterC2F) works especially well in the intermediate-similarity cluster. 
", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 6", "ref_id": "TABREF7" }, { "start": 162, "end": 169, "text": "Table 7", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Results for each cluster", "sec_num": "4.3" }, { "text": "We regarded the identification of the relations between sentences as binary classification: whether a pair of sentences is classified into TR or not. We used SVMs (Vapnik, 1998) . The sentence pairs in TR have the same numeric attributes with different values, as mentioned in the Introduction. Therefore, VNPs will be good clues for the identification.", "cite_spans": [ { "start": 163, "end": 177, "text": "(Vapnik, 1998)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Identification of TR pairs", "sec_num": "5" }, { "text": "We extract VNPs in the following way. 1. Search for noun phrases that have numeric expressions (we call them numeric phrases). 2. Search for the phrases that the numeric phrases depend on (we call them predicate phrases). 3. Search for the noun phrases that depend on the predicate phrases. 4. Extract the noun phrases that depend on the noun phrases found in step 3, except for date expressions. Both the extracted noun phrases and the noun phrases found in step 3 were regarded as VNPs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of VNPs", "sec_num": "5.1" }, { "text": "In the example in the Introduction, \"one million\" and \"1,500,000\" are numeric phrases, and \"had reached\" is a predicate phrase. 
Then, \"the number of users of its mobile-phone service\" is a VNP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of VNPs", "sec_num": "5.1" }, { "text": "We used some of the features from EQ identification: sentence-level uni-, bi-, and trigrams, and bunsetsu-chunk unigrams, normalized lengths of sentences, difference in publication dates, position of sentences in documents, semantic similarities, conjunctions, expressions at the end of sentences, and named entities. In addition, we use the following features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for identifying TR pairs", "sec_num": "5.2" }, { "text": "1. Similarities through VNPs: The cosine similarity of the frequency vectors of nouns in the VNPs in s 1 and s 2 is used. If there is more than one VNP, the largest cosine similarity is chosen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for identifying TR pairs", "sec_num": "5.2" }, { "text": "2. Similarities through bigrams and trigrams in VNPs: These features are defined similarly to the previous feature, but each VNP is represented by the frequency vector of word bi- and trigrams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for identifying TR pairs", "sec_num": "5.2" }, { "text": "3. Similarities of noun phrases in nominative case: Instances in TR often have similar subjects. A noun phrase containing a ga-, wa-, or mo-case marker is regarded as the subject phrase of a sentence. The similarity is calculated by Eq. (1) with the frequency vectors of nouns in the phrase. 4. Changes in value of numeric attributes: This feature is 1 if the values of the numeric phrases in the two sentences are different, otherwise 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for identifying TR pairs", "sec_num": "5.2" }, { "text": "5. 
Presence of numerical units: If a numerical unit is present in both sentences, the value of the feature is 1, otherwise 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for identifying TR pairs", "sec_num": "5.2" }, { "text": "6. Expressions that mean changes in value: Instances in TR often contain such expressions, such as 'reduce' and 'increase' (Nanba et al., 2005) . We have three features for each of these expressions. The first feature is 1 if both sentences have the expression, otherwise 0. The second is 1 if s 1 has the expression, otherwise 0. The third is 1 if s 2 has the expression, otherwise 0.", "cite_spans": [ { "start": 123, "end": 143, "text": "(Nanba et al., 2005)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Features for identifying TR pairs", "sec_num": "5.2" }, { "text": "We define one feature for a predicate. The value of this feature is 1 if the predicate appears in the two sentences, otherwise 0. 8. Reporter: This feature represents who is reporting the incident. This feature is represented by the cosine similarity between the frequency vectors of nouns in the phrases expressing the reporters in s 1 and s 2 , respectively. The subjects of verbs such as 'report' and 'announce' are regarded as phrases of the reporter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicates:", "sec_num": "7." }, { "text": "A pair of sentences in TR often has a high degree of similarity. Such pairs are likely to be confused with pairs in EQ. We used the identified EQ pairs for the identification of TR in order to circumvent this confusion. Pairs classified as EQ with our method were excluded from candidates for TR. We used precision, recall, and F-measure for evaluation. We employed 10-fold cross-validation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Use of EQ", "sec_num": "5.3" }, { "text": "The results of the experiments are summarized in Table 8 . 
We compared the following four models with ours. A linear kernel was used in the SVMs, and the soft-margin parameter C was set to 1.0 for all models:", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 56, "text": "Table 8", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Results of identifying TR pairs", "sec_num": "6.1" }, { "text": "Bow-cos (baseline): We calculated the similarity through VNPs. If the similarity was larger than a threshold and the two sentences had the same expressions meaning changes in value and had different values, then this pair was classified as TR. The threshold was set to 0.7, which yielded the best F-measure in the test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of identifying TR pairs", "sec_num": "6.1" }, { "text": "NANBA (Nanba et al., 2005): If the unigram cosine similarity between the two sentences was larger than a threshold and the two sentences had expressions meaning changes in value, then this pair was classified as TR. The value of the threshold was set to 0.42, which yielded the best F-measure in the test data. The results in Table 8 show that Bow-cos is better than NANBA in F-measure. This result suggests that focusing on VNPs is more effective than a simple bag-of-words approach.", "cite_spans": [ { "start": 6, "end": 26, "text": "(Nanba et al., 2005)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 327, "end": 334, "text": "Table 8", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Results of identifying TR pairs", "sec_num": "6.1" }, { "text": "WithEq and WithEqActual were better than WithoutEq. This suggests that we successfully excluded EQ pairs, which are TR look-alikes. WithEq and WithEqActual yielded almost the same F-measure.
This means that our EQ identifier was good enough to improve the identification of TR pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of identifying TR pairs", "sec_num": "6.1" }, { "text": "We proposed methods for identifying EQ and TR pairs in different newspaper articles on an event. We empirically demonstrated that the methods work well in this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Although we focused on resolving a bias in the dataset, we can expect that the classification performance will improve by making use of methods developed in different but related tasks such as Textual Entailment recognition on top of our method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Bunsetsu-chunks are Japanese phrasal units usually consisting of a pair of a noun phrase and a case marker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://chasen.naist.jp/hiki/Chasen/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The PASCAL recognising textual entailment challenge", "authors": [ { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment", "volume": "", "issue": "", "pages": "177--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge.
In Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment, pages 177-190.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "New methods in automatic extracting", "authors": [ { "first": "Harold", "middle": [], "last": "Edmundson", "suffix": "" } ], "year": 1969, "venue": "Journal of ACM", "volume": "16", "issue": "2", "pages": "246--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harold Edmundson. 1969. New methods in automatic extracting. Journal of ACM, 16(2):246-285.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Making cross-document relationship between sentences corpus", "authors": [ { "first": "Junji", "middle": [], "last": "Etoh", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Eleventh Annual Meeting of the Association for Natural Language Processing", "volume": "", "issue": "", "pages": "482--485", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junji Etoh and Manabu Okumura. 2005. Making cross-document relationship between sentences corpus. In Proceedings of the Eleventh Annual Meeting of the Association for Natural Language Processing (in Japanese), pages 482-485.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Summarizing newspaper articles using extracted information and functional words", "authors": [ { "first": "Mamiko", "middle": [], "last": "Hatayama", "suffix": "" }, { "first": "Yoshihiro", "middle": [], "last": "Matsuo", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Shirai", "suffix": "" } ], "year": 2001, "venue": "6th Natural Language Processing Pacific Rim Symposium (NL-PRS2001)", "volume": "", "issue": "", "pages": "593--600", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mamiko Hatayama, Yoshihiro Matsuo, and Satoshi Shirai. 2001. Summarizing newspaper articles using extracted information and functional words.
In 6th Natural Language Processing Pacific Rim Symposium (NL-PRS2001), pages 593-600.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Detecting text similarity over short passages: Exploring linguistic feature combinations via machine learning", "authors": [ { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "Judith", "middle": [ "L" ], "last": "Klavans", "suffix": "" }, { "first": "Eleazar", "middle": [], "last": "Eskin", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Empirical Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "203--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasileios Hatzivassiloglou, Judith L. Klavans, and Eleazar Eskin. 1999. Detecting text similarity over short passages: Exploring linguistic feature combinations via machine learning. In Proceedings of the Empirical Methods for Natural Language Processing, pages 203-212.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Simfinder: A flexible clustering tool for summarization", "authors": [ { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "Judith", "middle": [ "L" ], "last": "Klavans", "suffix": "" }, { "first": "Melissa", "middle": [ "L" ], "last": "Holcombe", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Kathleen", "middle": [ "R" ], "last": "Mckeown", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Workshop on Automatic Summarization", "volume": "", "issue": "", "pages": "41--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasileios Hatzivassiloglou, Judith L. Klavans, Melissa L. Holcombe, Regina Barzilay, Min-Yen Kan, and Kathleen R. McKeown. 2001. Simfinder: A flexible clustering tool for summarization.
In Proceedings of the Workshop on Automatic Summarization, pages 41-49.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Must: a workshop on multimodal summarization for trend information", "authors": [ { "first": "Tsuneaki", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Mitsunori", "middle": [], "last": "Matsushita", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Kando", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the NTCIR-5 Workshop Meeting", "volume": "", "issue": "", "pages": "556--563", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsuneaki Kato, Mitsunori Matsushita, and Noriko Kando. 2005. Must: a workshop on multimodal summarization for trend information. In Proceedings of the NTCIR-5 Workshop Meeting, pages 556-563.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Rhetorical structure theory: Description and construction of text structures", "authors": [ { "first": "William", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 1987, "venue": "Natural Language Generation: New Results in Artificial Intelligence", "volume": "", "issue": "", "pages": "85--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Mann and Sandra Thompson. 1987. Rhetorical structure theory: Description and construction of text structures. In Gerard Kempen, editor, Natural Language Generation: New Results in Artificial Intelligence, Psychology, and Linguistics, pages 85-96.
Nijhoff, Dordrecht.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An unsupervised approach to recognizing discourse relations", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Abdessamad", "middle": [], "last": "Echihabi", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "368--375", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 368-375.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The rhetorical parsing of unrestricted texts: a surface-based approach", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "3", "pages": "395--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 2000. The rhetorical parsing of unrestricted texts: a surface-based approach.
Computational Linguistics, 26(3):395-448.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Extraction and visualization of trend information based on the cross-document structure", "authors": [ { "first": "Hidetsugu", "middle": [], "last": "Nanba", "suffix": "" }, { "first": "Yoshinobu", "middle": [], "last": "Kunimasa", "suffix": "" }, { "first": "Shiho", "middle": [], "last": "Fukushima", "suffix": "" }, { "first": "Teruaki", "middle": [], "last": "Aizawa", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2005, "venue": "Information Processing Society of Japan, Special Interest Group on Natural Language Processing (IPSJ-SIGNL)", "volume": "", "issue": "", "pages": "67--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hidetsugu Nanba, Yoshinobu Kunimasa, Shiho Fukushima, Teruaki Aizawa, and Manabu Okumura. 2005. Extraction and visualization of trend information based on the cross-document structure. In Information Processing Society of Japan, Special Interest Group on Natural Language Processing (IPSJ-SIGNL), NL-168 (in Japanese), pages 67-74.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Text summarization challenge 2 - text summarization evaluation at NTCIR workshop 3", "authors": [ { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" }, { "first": "Takahiro", "middle": [], "last": "Fukushima", "suffix": "" }, { "first": "Hidetsugu", "middle": [], "last": "Nanba", "suffix": "" } ], "year": 2003, "venue": "HLT-NAACL 2003 Workshop: Text Summarization (DUC03)", "volume": "", "issue": "", "pages": "49--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manabu Okumura, Takahiro Fukushima, and Hidetsugu Nanba. 2003. Text summarization challenge 2 - text summarization evaluation at NTCIR workshop 3.
In HLT-NAACL 2003 Workshop: Text Summarization (DUC03), pages 49-56.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A common theory of information fusion from multiple text sources, step one: Cross-document structure", "authors": [ { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 1st ACL SIGDIAL Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "74--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragomir Radev. 2000. A common theory of information fusion from multiple text sources, step one: Cross-document structure. In Proceedings of the 1st ACL SIGDIAL Workshop on Discourse and Dialogue, pages 74-83.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Coarse-fine template matching", "authors": [], "year": null, "venue": "IEEE Transactions on Systems, Man, and Cybernetics", "volume": "7", "issue": "", "pages": "104--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Coarse-fine template matching. IEEE Transactions on Systems, Man, and Cybernetics, 7:104-107.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Two-stage template matching", "authors": [ { "first": "Gorden", "middle": [], "last": "Vanderburg", "suffix": "" }, { "first": "Azriel", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1977, "venue": "IEEE Transactions on Computers", "volume": "26", "issue": "4", "pages": "384--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gorden Vanderburg and Azriel Rosenfeld. 1977. Two-stage template matching. IEEE Transactions on Computers, 26(4):384-393.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Statistical Learning Theory", "authors": [ { "first": "Vladimir", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir Vapnik. 1998. Statistical Learning Theory.
John Wiley, New York.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Discourse analysis using support vector machine", "authors": [ { "first": "Kenji", "middle": [], "last": "Yokoyama", "suffix": "" }, { "first": "Hidetsugu", "middle": [], "last": "Nanba", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2003, "venue": "Information Processing Society of Japan, Special Interest Group on Natural Language Processing (IPSJ-SIGNL)", "volume": "", "issue": "", "pages": "193--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Yokoyama, Hidetsugu Nanba, and Manabu Okumura. 2003. Discourse analysis using support vector machine. In Information Processing Society of Japan, Special Interest Group on Natural Language Processing (IPSJ-SIGNL), 2003-NL-155 (in Japanese), pages 193-200.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning cross-document structural relationships using boosting", "authors": [ { "first": "Zhu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jahna", "middle": [], "last": "Otterbacher", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 12th International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "124--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhu Zhang, Jahna Otterbacher, and Dragomir R. Radev. 2003. Learning cross-document structural relationships using boosting. In Proceedings of the 12th International Conference on Information and Knowledge Management, pages 124-130.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "WithEq (Our method): This model uses the identified EQ pairs. WithoutEq: This model uses no information on EQ. WithEqActual: This model uses the actual EQ pairs given by oracle."
}, "TABREF0": { "type_str": "table", "num": null, "html": null, "content": "
cos (0.0, 0.1] (0.1, 0.2] (0.2, 0.3] (0.3, 0.4] (0.4, 0.5] (0.5, 0.6] (0.6, 0.7] (0.7, 0.8] (0.8, 0.9] (0.9, 1.0]
EQ 12
", "text": "The distribution of sentence pairs according to the cosine measure (NO indicates pairs with no relation. The pairs with other relations are not in the table due to space limitations)" }, "TABREF1": { "type_str": "table", "num": null, "html": null, "content": "", "text": "7. Expressions at the end of sentences: Yokoyama et al. (2003) created rules that map sentence endings to their functions. Each function corresponds to a feature. If a function appears in the sentence, the value of the feature for the function is 1, otherwise 0. Functions of sentence endings are past, present, assertion, existence, conjecture, interrogation, judgement, possibility, reason, request, description, duty, opinion, continuation, causation, hearsay, and mode. 8. Named entity: This feature represents similarities measured through named entities in the sentences. Its value is measured by Eq. (1) with u 1 and u 2 being the frequency vectors of the named entities. We used the named-entity chunker bar
", "text": "with frqP, where frqP(s) indicates the number of phrases in s. 2. Head verb: There are three features of this kind. The first indicates whether the two sentences have the same head verb or not. The second indicates whether the two sentences have a semantically similar head verb or not. If the two verbs have the same semantic class in a thesaurus, they are regarded as being semantically similar. The last indicates whether both sentences have a verb or not." }, "TABREF3": { "type_str": "table", "num": null, "html": null, "content": "
average max min
# of words 33.27 458 1
# of characters 111.22 1107 2
", "text": "Average, max, min lengths of the sentences in the dataset" }, "TABREF4": { "type_str": "table", "num": null, "html": null, "content": "
threshold ineffective features
0.90 particle, bunsetsu-chunk similarity, semantic similarity
0.89 semantic similarity, expression at end of sentences, bigram similarity, particle
0.88 bigram similarity
0.87 difference in publication dates, similarity between documents, expression at end of sentences, number of tokens, bigram similarity, similarity between paragraphs, positions of sentences, particle
0.86 particle, similarity between documents, bigram similarity
", "text": "Ineffective feature types for each threshold" }, "TABREF5": { "type_str": "table", "num": null, "html": null, "content": "
F-measure calculated by cross-validation within the training data for each threshold in the \"intermediate-similarity cluster\"
threshold precision recall F-measure
0.60 49.71 14.95 22.99
0.59 52.92 15.05 23.44
0.58 55.08 16.64 25.56
0.57 52.81 16.93 25.64
0.56 49.15 14.45 22.34
0.55 51.51 14.84 23.04
0.54 51.89 15.21 23.52
0.53 54.59 13.61 21.78
", "text": "" }, "TABREF6": { "type_str": "table", "num": null, "html": null, "content": "
precision recall F-measure
Bow-cos 87.29 57.35 69.22
basic features
Clusterwise 81.98 59.40 68.88
Non-Clusterwise 86.10 59.49 70.36
ClusterC2F 94.96 62.27 75.22
with additional features
Clusterwise 80.93 59.74 68.63
Non-Clusterwise 86.11 60.16 70.84
ClusterC2F 94.99 62.65 75.50
", "text": "Results of identifying EQ pairs" }, "TABREF7": { "type_str": "table", "num": null, "html": null, "content": "
Results for \"high-similarity cluster\"
precision recall F-measure
Clusterwise 94.23 96.83 95.51
Non-clusterwise 95.51 96.29 95.90
ClusterC2F 94.23 96.83 95.51
Results for \"intermediate-similarity cluster\"
Clusterwise 42.77 23.03 29.94
Non-clusterwise 53.46 25.31 34.36
ClusterC2F 100.00 36.29 53.25
", "text": "Results with basic features" }, "TABREF8": { "type_str": "table", "num": null, "html": null, "content": "
Results for \"high-similarity cluster\"
precision recall F-measure
Clusterwise 94.23 96.83 95.51
Non-clusterwise 95.70 96.76 96.23
ClusterC2F 94.23 96.83 95.51
Results for \"intermediate-similarity cluster\"
Clusterwise 39.77 22.93 29.09
Non-clusterwise 55.61 26.81 36.18
ClusterC2F 100.00 38.06 55.13
", "text": "Results with additional features" }, "TABREF9": { "type_str": "table", "num": null, "html": null, "content": "
precision recall F-measure
Bow-cos 27.44 41.26 32.96
NANBA 19.85 45.96 27.73
WithoutEq 42.41 47.06 44.61
WithEq 43.13 48.51 45.67
WithEqActual 43.06 48.55 45.64
6 Experiments on identifying TR pairs
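The two-stage procedure of Section 5.3, in which pairs judged EQ are removed from the TR candidate set before the TR classifier is applied, can be sketched as follows. The two classifiers stand in for the trained linear SVMs (C = 1.0); all names here are ours:

```python
def identify_tr_pairs(pairs, eq_classifier, tr_classifier):
    # WithEq pipeline: drop pairs the EQ classifier accepts (TR look-alikes),
    # then run the TR classifier on the remaining candidates.
    eq_pairs = {p for p in pairs if eq_classifier(p)}
    candidates = [p for p in pairs if p not in eq_pairs]
    return [p for p in candidates if tr_classifier(p)]

# Toy stand-in classifiers; a pair is just a (sentence1, sentence2) tuple.
is_eq = lambda p: p == ("s1", "s2")
is_tr = lambda p: p in {("s1", "s2"), ("s3", "s4")}  # ("s1", "s2") looks like TR too

print(identify_tr_pairs([("s1", "s2"), ("s3", "s4"), ("s5", "s6")], is_eq, is_tr))
# [('s3', 's4')]: the EQ look-alike was filtered out before TR classification
```

Because EQ pairs are high-similarity TR look-alikes, filtering them out first is exactly what distinguishes WithEq from WithoutEq in Table 8.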
", "text": "Results of identifying TR pairsMost experimental settings are the same as in the experiments of EQ identification. Sentence pairs without numeric expressions were excluded in advance and 55,547 pairs were left. This exclusion process does not degrade recall at all, because TR pairs by definition contain numberic expressions." } } } }