{ "paper_id": "S13-1029", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:41:39.965020Z" }, "title": "CLaC-CORE: Exhaustive Feature Combination for Measuring Textual Similarity", "authors": [ { "first": "Ehsan", "middle": [], "last": "Shareghi", "suffix": "", "affiliation": { "laboratory": "", "institution": "CLaC Laboratory Concordia University Montreal", "location": { "postCode": "H3G 1M8", "region": "QC", "country": "CANADA" } }, "email": "" }, { "first": "Sabine", "middle": [], "last": "Bergler", "suffix": "", "affiliation": { "laboratory": "", "institution": "Concordia University Montreal", "location": { "postCode": "H3G 1M8", "region": "QC", "country": "CANADA" } }, "email": "bergler@cse.concordia.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "CLaC-CORE, an exhaustive feature combination system ranked 4th among 34 teams in the Semantic Textual Similarity shared task STS 2013. Using a core set of 11 lexical features of the most basic kind, it uses a support vector regressor which uses a combination of these lexical features to train a model for predicting similarity between sentences in a two phase method, which in turn uses all combinations of the features in the feature space and trains separate models based on each combination. Then it creates a meta-feature space and trains a final model based on that. This two step process improves the results achieved by singlelayer standard learning methodology over the same simple features. We analyze the correlation of feature combinations with the data sets over which they are effective.", "pdf_parse": { "paper_id": "S13-1029", "_pdf_hash": "", "abstract": [ { "text": "CLaC-CORE, an exhaustive feature combination system ranked 4th among 34 teams in the Semantic Textual Similarity shared task STS 2013. Using a core set of 11 lexical features of the most basic kind, it uses a support vector regressor which uses a combination of these lexical features to train a model for predicting similarity between sentences in a two phase method, which in turn uses all combinations of the features in the feature space and trains separate models based on each combination. Then it creates a meta-feature space and trains a final model based on that. This two step process improves the results achieved by singlelayer standard learning methodology over the same simple features. We analyze the correlation of feature combinations with the data sets over which they are effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The Semantic Textual Similarity (STS) shared task aims to find a unified way of measuring similarity between sentences. In fact, sentence similarity is a core element of tasks trying to establish how two pieces of text are related, such as Textual Entailment (RTE) (Dagan et al., 2006) , and Paraphrase Recognition (Dolan et al., 2004 ). The STS shared task was introduced for SemEval-2012 and was selected as its first shared task. 
Similar in spirit, STS differs from the well-known RTE shared tasks in two important respects: it defines a graded similarity scale to measure the similarity of two texts, instead of RTE's binary yes/no decision, and its similarity relation is considered to be symmetrical, whereas the entailment relation of RTE is inherently unidirectional.", "cite_spans": [ { "start": 265, "end": 285, "text": "(Dagan et al., 2006)", "ref_id": "BIBREF12" }, { "start": 315, "end": 334, "text": "(Dolan et al., 2004", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The leading systems in the 2012 competition used a variety of very simple lexical features, each system combining a different set of related features. CLaC Labs investigated the different combination possibilities of these simple lexical features and measured their performance on the different data sets. What was originally conceived as an exploration of the space of all possible feature combinations for 'feature combination selection' emerged as a two-step method that deliberately compiles and trains all feature combinations exhaustively and then trains an SVM regressor using all combination models as its input features. It turns out that this technique is not nearly as prohibitive as imagined, and it achieves statistically significant improvements over the alternatives of feature selection or of using any one single combination individually.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose the method as a viable approach when the characteristics of the data are not well understood and no satisfactory training set is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, systems started to approach measuring similarity by combining different resources and methods. For example, the STS-2012 shared task's leading system, UKP (B\u00e4r et al., 2012), uses n-grams, string similarity, WordNet, and ESA, combined by a regressor. In addition, they use MOSES, a statistical machine translation system (Koehn et al., 2007), to translate each English sentence into Dutch, German, and Spanish and back into English in an effort to increase their training set of similar text pairs. TakeLab (\u0160aric et al., 2012), which placed second in the 2012 STS shared task, uses n-gram models, two WordNet-based measures, LSA, and dependencies to align subject-verb-object predicate structures. Including named entities and number matching in the feature space improved the performance of their support vector regressor. Shareghi and Bergler (2013) describe two experiments with the STS-2012 training and test sets that use the basic core features of these systems and outperform the STS-2012 task's highest ranking systems. The STS-2013 submission CLaC-CORE uses the same two-step approach.", "cite_spans": [ { "start": 157, "end": 175, "text": "(B\u00e4r et al., 2012)", "ref_id": "BIBREF6" }, { "start": 321, "end": 341, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF18" }, { "start": 500, "end": 528, "text": "TakeLab (\u0160aric et al., 2012)", "ref_id": null }, { "start": 816, "end": 844, "text": "Shareghi and Bergler (2013)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Preprocessing consists of tokenizing, lemmatizing, sentence splitting, and part of speech (POS) tagging. 
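For concreteness, a minimal sketch of such a pipeline follows; NLTK is an assumption made purely for illustration, since the paper does not name its preprocessing toolchain:

    # Sketch: sentence splitting, tokenizing, POS tagging, and lemmatizing.
    # NLTK is assumed for illustration only; the system's actual tools are not specified.
    import nltk
    from nltk.stem import WordNetLemmatizer

    lemmatizer = WordNetLemmatizer()
    text = 'Two men are playing chess. A man plays the guitar.'
    for sentence in nltk.sent_tokenize(text):      # sentence splitting
        tokens = nltk.word_tokenize(sentence)      # tokenizing
        tagged = nltk.pos_tag(tokens)              # POS tagging
        lemmas = [lemmatizer.lemmatize(w.lower()) for w, _ in tagged]  # lemmatizing (noun default)
        print(tagged, lemmas)
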
We extract two main categories of lexical features: explicit and implicit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLaC Methodology", "sec_num": "3" }, { "text": "Sentence similarity at the explicit level is based solely on the input text and measures the similarity between two sentences either by using an n-gram model (ROUGE-1, ROUGE-2, ROUGE-SU4) or by reverting to string similarity (longest common subsequence, jaro, ROUGE-W):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explicit Lexical Features", "sec_num": "3.1" }, { "text": "Longest Common Subsequence (Allison and Dix, 1986) compares the length of the longest sequence of characters, not necessarily consecutive ones, in order to detect similarities. Jaro (Jaro, 1989) identifies spelling variation between two inputs based on the occurrence of common characters between two text segments at a certain distance. ROUGE-W (Lin, 2004a), a weighted version of longest common subsequence, takes into account the number of consecutive characters in each match, giving a higher score to matches with a larger number of consecutive characters in common; this metric was developed to measure the similarity between machine-generated text summaries and a manually generated gold standard. ROUGE-1 measures unigram overlap (Lin, 2004a), ROUGE-2 measures bigram overlap (Lin, 2004a), and ROUGE-SU4 measures 4-skip bigram overlap, including unigrams (Lin, 2004a). 3.2 Implicit Lexical Features. Sentence similarity at the implicit level uses external resources to make up for the lexical gaps that otherwise go undetected at the explicit level. The synonymy of bag and suitcase is an example of an implicit similarity. This type of implicit similarity can be detected using knowledge sources such as WordNet or Roget's Thesaurus, based on the WordNet::Similarity package (Pedersen et al., 2004) and combination techniques (Mihalcea et al., 2006). For the more semantically challenging non-ontological relations, for example between sanction and Iran, which lexica do not provide, co-occurrence-based measures like ESA are more robust. We use:", "cite_spans": [ { "start": 27, "end": 53, "text": "(Allison and Dix, 1986)", "ref_id": "BIBREF14" }, { "start": 183, "end": 195, "text": "(Jaro, 1989)", "ref_id": "BIBREF17" }, { "start": 346, "end": 365, "text": "(Lin, 2004a)", "ref_id": "BIBREF3" }, { "start": 741, "end": 760, "text": "(Lin, 2004a)", "ref_id": "BIBREF3" }, { "start": 777, "end": 796, "text": "(Lin, 2004a)", "ref_id": "BIBREF3" }, { "start": 843, "end": 862, "text": "(Lin, 2004a)", "ref_id": "BIBREF3" }, { "start": 1269, "end": 1292, "text": "(Pedersen et al., 2004)", "ref_id": "BIBREF20" }, { "start": 1320, "end": 1343, "text": "(Mihalcea et al., 2006)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Explicit Lexical Features", "sec_num": "3.1" }, { "text": "Lin (Lin, 1998) uses the Brown Corpus of American English to calculate the information content of two concepts' least common subsumer. He then scales it using the sum of the information content of the compared concepts.", "cite_spans": [ { "start": 4, "end": 15, "text": "(Lin, 1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Explicit Lexical Features", "sec_num": "3.1" }, { "text": "Jiang-Conrath (Jiang and Conrath, 1997) uses the conditional probability of encountering a concept given an instance of its parent to calculate the information content. They then define the distance between two concepts as the sum of the differences between the information content of each of the two concepts and that of their least common subsumer.
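 Both measures can be illustrated with NLTK's WordNet interface and Brown Corpus information content; the following is a hedged sketch for illustration, not the WordNet::Similarity (Pedersen et al., 2004) setup the system actually relies on:

    # Sketch: Lin and Jiang-Conrath scores for the paper's bag/suitcase example.
    # NLTK is assumed for illustration; the system itself uses WordNet::Similarity.
    from nltk.corpus import wordnet as wn
    from nltk.corpus import wordnet_ic

    brown_ic = wordnet_ic.ic('ic-brown.dat')  # information content from the Brown Corpus
    bag = wn.synset('bag.n.01')
    suitcase = wn.synset('suitcase.n.01')

    # Lin: 2 * IC(lcs) / (IC(c1) + IC(c2))
    print(bag.lin_similarity(suitcase, brown_ic))
    # Jiang-Conrath: NLTK returns 1 / (IC(c1) + IC(c2) - 2 * IC(lcs))
    print(bag.jcn_similarity(suitcase, brown_ic))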
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explicit Lexical Features", "sec_num": "3.1" }, { "text": "Roget's Thesaurus is another lexical resource, based on a well-crafted concept classification created by professional lexicographers. It has a nine-level ontology and does not have one of the major drawbacks of WordNet, the lack of links between parts of speech. According to the schema proposed by Jarmasz and Szpakowicz (2003), the distance between two terms, measured on the interval [0, 16], decreases as the common head that subsumes them moves from the top of the hierarchy toward the bottom and becomes more specific. The electronic version of Roget's Thesaurus developed by Jarmasz and Szpakowicz (2003) was used for extracting this score.", "cite_spans": [ { "start": 316, "end": 346, "text": "(Jarmasz and Szpakowicz, 2003)", "ref_id": "BIBREF15" }, { "start": 406, "end": 409, "text": "[0,", "ref_id": null }, { "start": 410, "end": 413, "text": "16]", "ref_id": null }, { "start": 581, "end": 611, "text": "(Jarmasz and Szpakowicz, 2003)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Explicit Lexical Features", "sec_num": "3.1" }, { "text": "Explicit Semantic Analyzer (Gabrilovich and Markovitch, 2007): in order to have broader coverage of word types not represented in lexical resources, specifically named entities, we add features generated by the explicit semantic analyzer (ESA) to our feature space.", "cite_spans": [ { "start": 27, "end": 61, "text": "(Gabrilovich and Markovitch, 2007)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Explicit Lexical Features", "sec_num": "3.1" }, { "text": "CLaC-CORE first generates all combinations of the 11 basic features (jaro, Lemma, lcsq, ROUGE-W, ROUGE-1, ROUGE-2, ROUGE-SU4, roget, lin, jcn, esa), that is, 2^11 \u2212 1 = 2047 non-empty combinations. The Two Phase Model Training step trains a separate Support Vector Regressor (SVR) for each combination, creating 2047 Phase One Models. The 2^N \u2212 1 predicted scores per text data item form a new feature vector, called Phase Two Features, which feeds into an SVR to train our Phase Two Model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLaC-CORE", "sec_num": "3.3" }, { "text": "On a standard 2-core computer with \u2264100 GB of RAM using multi-threading (a thread pool of size 200, one training process per thread), it took roughly 15 hours to train the 2047 Phase One Models on 5342 text pairs and another 17 hours to build the Phase Two Feature Space for the training data. Building the Phase Two Feature Space for the test sets took roughly 7.5 hours for 2250 test pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLaC-CORE", "sec_num": "3.3" }, { "text": "For the STS 2013 task, we combined all training sets into one single training set, which was used for all of our submissions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLaC-CORE", "sec_num": "3.3" }, { "text": "Our three submissions for STS-2013 compare a baseline of Standard Learning (RUN-1) with two versions of our Two Phase Learning (RUN-2, RUN-3). For the Standard Learning baseline, one regressor was trained on all 11 Basic Features on the training set and tested on the test sets. 
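As an illustration of the Two Phase procedure of Section 3.3, the following simplified sketch assumes scikit-learn's SVR with the RBF settings reported below for RUN-2 (\u03b3 = 0.01, C = 1); it is a sketch under these assumptions, not the actual submission code:

    # Simplified sketch of Two Phase Learning over all non-empty feature subsets
    # (scikit-learn is assumed for illustration; this is not the submission code).
    from itertools import combinations
    import numpy as np
    from sklearn.svm import SVR

    def two_phase_fit(X, y, n_features=11):
        # Phase One: one SVR per non-empty feature subset (2^11 - 1 = 2047 models).
        subsets = [list(s) for k in range(1, n_features + 1)
                   for s in combinations(range(n_features), k)]
        phase_one = [(s, SVR(kernel='rbf', gamma=0.01, C=1).fit(X[:, s], y))
                     for s in subsets]
        # Phase Two: the 2047 predicted scores per item form a new feature vector.
        meta = np.column_stack([m.predict(X[:, s]) for s, m in phase_one])
        phase_two = SVR(kernel='rbf', gamma=0.01, C=1).fit(meta, y)
        return phase_one, phase_two

    def two_phase_predict(phase_one, phase_two, X):
        meta = np.column_stack([m.predict(X[:, s]) for s, m in phase_one])
        return phase_two.predict(meta)
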
For the remaining runs the Two Phase Learning method was used. All our submissions use the same 11 Basic Features. RUN-2 is our main contribution. RUN-3 is identical to RUN-2 except for reducing the number of support vectors and allowing larger training errors in an effort to assess the potential for speedup. This was done by decreasing the value of \u03b3 (in the RBF kernel) from 0.01 to 0.0001, and decreasing the value of C (error weight) from 1 to 0.01. These parameters resulted in a smoother and simpler decision surface but negatively affected the performance of RUN-3. The STS 2013 shared task used the Pearson Correlation Coefficient as the evaluation metric; the results of our experiments are presented in Table 1. They indicate that the proposed method, RUN-2, improved the results achieved by our baseline RUN-1 ever so slightly (the confidence intervals at 5% differ by .016 at the upper end) and far exceeds the reduced-computation version, RUN-3.", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 138, "text": "(RUN-2, RUN-3", "ref_id": null }, { "start": 865, "end": 872, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Analysis of Results", "sec_num": "4" }, { "text": "Having trained separate models based on each subset of features, we can use the predicted scores generated by each of these models to calculate their correlations and assess which of the feature combinations were more effective in making predictions and how the most successful combination varies between the different data sets. Table 2 lists the best and worst feature combinations on each test set. ROUGE-1 (denoted by RO-1), unigram overlap, is part of all four best performing subsets. The features ROUGE-SU4 and Roget's appear in three of the four best feature combinations, making Roget's the best performing lexicon-based feature, outperforming the WordNet features on this task. esa, lin, and jcn are part of two of the best subsets, where lin and jcn occur together both times, suggesting synergy. Looking at the worst performing feature combinations is also instructive and suggests that lcsq was not an effective feature (despite being at the heart of the more successful ROUGE-W measure).", "cite_spans": [], "ref_spans": [ { "start": 328, "end": 335, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Successful Feature Combinations", "sec_num": "4.1" }, { "text": "We also analyze the performance of individual features over different data sets. Table 3 lists all the features and, instead of looking at only the best combination, takes the three best combinations for each test set and counts how many times each feature occurs in the resulting 12 combinations (first column). Three clear classes of effectiveness emerge: high (10-7), medium (6-4), and low (3-0). Next, we observe that the test sets differ in the average length of the data: headlines and OnWN glosses are very short, in contrast to the other two. Table 3 in fact shows contrastive feature behavior for these two categories (denoted by short and long). The last column reports the number of times a feature occurs in the best combinations (out of 4). Again, ROUGE-1, ROUGE-SU4, and roget prove effective across different test sets. esa and lem seem most reliable when we deal with short text fragments, while roget and ROUGE-SU4 are most valuable on longer texts. 
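To make the two most versatile features concrete: ROUGE-1 compares unigram sets, while ROUGE-SU4 compares skip-bigrams with up to four intervening words, plus unigrams. The following minimal sketch of such overlap scores is a Jaccard-style simplification for illustration, not the ROUGE package's exact F-measure:

    # Illustrative unigram (ROUGE-1-style) and 4-skip bigram (ROUGE-SU4-style)
    # overlaps; a Jaccard-style simplification, not the exact ROUGE formulas.
    def skip_bigrams(tokens, max_skip=4):
        return {(tokens[i], tokens[j])
                for i in range(len(tokens))
                for j in range(i + 1, min(i + max_skip + 2, len(tokens)))}

    def overlap(a, b):
        return len(a & b) / max(len(a | b), 1)

    s1 = 'a man is playing a guitar'.split()
    s2 = 'a man plays the guitar'.split()
    print(overlap(set(s1), set(s2)))                    # unigram overlap
    print(overlap(skip_bigrams(s1), skip_bigrams(s2)))  # skip-bigram overlap
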
The individual most valuable features overall are ROUGE-1, ROUGE-SU4, and roget.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 3", "ref_id": null }, { "start": 554, "end": 561, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Successful Feature Combinations", "sec_num": "4.1" }, { "text": "Features    total (/12)  short (/6)  long (/6)  best (/4)
esa         6            6           0          2
lin         6            3           3          2
jcn         4            1           3          2
roget       9            3           6          3
lem         6            6           0          2
jaro        0            0           0          0
lcsq        3            3           0          1
ROUGE-W     7            4           3          1
ROUGE-1     10           6           4          4
ROUGE-2     3            1           2          0
ROUGE-SU4   10           5           5          3
Table 3: Feature contribution to the three best results over four data sets", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 270, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Successful Feature Combinations", "sec_num": "4.1" }, { "text": "CLaC-CORE investigated the performance possibilities of different feature combinations for 11 basic lexical features that are frequently used in semantic distance measures. By exhaustively training all combinations in a two-phase regressor, we were able to establish a few interesting observations. First, our own baseline of simply training an SVM regressor on all 11 basic features achieves rank 10 and outperforms the baseline used for the shared task. It should probably become the new standard baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Second, our two-phase exhaustive model, while resource intensive, is not at all prohibitive. If the knowledge to pick appropriate features is not available and if not enough training data exists to perform feature selection, the exhaustive method can produce results that outperform our baseline and that are competitive in the current field (rank 7 of 88 submissions). But more importantly, this method allows us to forensically analyze feature combination behavior contrastively. We were able to establish that unigrams and 4-skip bigrams are most versatile, but surprisingly that Roget's Thesaurus outperforms the two leading WordNet-based distance measures. In addition, ROUGE-W, a weighted longest common subsequence algorithm that to our knowledge has not previously been used for similarity measurements, proves to be a fairly reliable measure for all data sets, in contrast to longest common subsequence, which is among the lowest performers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We feel that the insight we gained well justified the expense of our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "We are grateful to Michelle Khalife and Jona Schuman for their comments and feedback on this work. This work was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "ITRI-04-08 The Sketch Engine. 
Information Technology", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Rychly", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Smrz", "suffix": "" }, { "first": "David", "middle": [], "last": "Tugwell", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff, Pavel Rychly, Pavel Smrz, and David Tugwell. 2004. ITRI-04-08 The Sketch Engine. In- formation Technology.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Evaluating WordNet-based Measures of Lexical Semantic Relatedness", "authors": [ { "first": "Alexander", "middle": [], "last": "Budanitsky", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2006, "venue": "Computational Linguistics", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Budanitsky and Graeme Hirst. 2006. Eval- uating WordNet-based Measures of Lexical Semantic Relatedness. Computational Linguistics, 32(1).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "ROUGE: A Package for Automatic Evaluation of Summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out: Proceedings of the ACL-04 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004a. ROUGE: A Package for Auto- matic Evaluation of Summaries. In Text Summariza- tion Branches Out: Proceedings of the ACL-04 Work- shop.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin and Franz Josef Och. 2004b. Auto- matic Evaluation of Machine Translation Quality Us- ing Longest Common Subsequence and Skip-Bigram Statistics. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Christiane Fellbaum 2010. WordNet. Theory and Applications of Ontology: Computer Applications", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum 2010. WordNet. Theory and Applications of Ontology: Computer Applications. 
Springer.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "UKP: Computing Semantic Textual Similarity by Combining Multiple Content Similarity Measures", "authors": [ { "first": "Daniel", "middle": [], "last": "B\u00e4r", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" } ], "year": 2012, "venue": "conjunction with the First Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel B\u00e4r, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. UKP: Computing Semantic Textual Similarity by Combining Multiple Content Similarity Measures. In Proceedings of the 6th International Workshop on Semantic Evaluation (SemEval 2012), in conjunction with the First Joint Conference on Lexical and Computational Semantics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An Information-Theoretic Definition of Similarity", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 15th International Conference on Machine Learning", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. An Information-Theoretic Definition of Similarity. In Proceedings of the 15th International Conference on Machine Learning, volume 1.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Feature Combination for Sentence Similarity. To appear in Proceedings of the 26st Conference of the Canadian Society for Computational Studies of Intelligence (Canadian AI'13)", "authors": [ { "first": "Ehsan", "middle": [], "last": "Shareghi", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Bergler", "suffix": "" } ], "year": 2013, "venue": "Advances in Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehsan Shareghi, Sabine Bergler. 2013. Feature Combi- nation for Sentence Similarity. To appear in Proceed- ings of the 26st Conference of the Canadian Society for Computational Studies of Intelligence (Canadian AI'13). Advances in Artificial Intelligence, Regina, SK, Canada. Springer-Verlag Berlin Heidelberg.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semeval-2012 Task 6: A Pilot on Semantic Textual Similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" } ], "year": 2012, "venue": "conjunction with the First Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 Task 6: A Pilot on Semantic Textual Similarity. 
In Proceedings of the 6th International Workshop on Semantic Eval- uation (SemEval 2012), in conjunction with the First Joint Conference on Lexical and Computational Se- mantics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Computing Semantic Relatedness Using Wikipedia-based Explicit Semantic Analysis", "authors": [ { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2007. Com- puting Semantic Relatedness Using Wikipedia-based Explicit Semantic Analysis. In Proceedings of the 20th International Joint Conference on Artificial In- telligence.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "TakeLab: Systems for Measuring Semantic Text Similarity", "authors": [ { "first": "Goran", "middle": [], "last": "Frane\u0161aric", "suffix": "" }, { "first": "Mladen", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Jan\u0161najder", "middle": [], "last": "Karan", "suffix": "" }, { "first": "Bojana Dalbelo", "middle": [], "last": "Ba\u0161ic", "suffix": "" } ], "year": 2012, "venue": "conjunction with the First Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frane\u0160aric, Goran Glava\u0161, Mladen Karan, Jan\u0160najder, and Bojana Dalbelo Ba\u0161ic. 2012. TakeLab: Systems for Measuring Semantic Text Similarity. In Proceed- ings of the 6th International Workshop on Semantic Evaluation (SemEval 2012), in conjunction with the First Joint Conference on Lexical and Computational Semantics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Pascal Recognising Textual Entailment Challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2006, "venue": "Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The Pascal Recognising Textual Entailment Challenge. Machine Learning Challenges. Evaluat- ing Predictive Uncertainty, Visual Object Classifica- tion, and Recognising Tectual Entailment.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy", "authors": [ { "first": "J", "middle": [], "last": "Jay", "suffix": "" }, { "first": "David", "middle": [ "W" ], "last": "Jiang", "suffix": "" }, { "first": "", "middle": [], "last": "Conrath", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 10th International Conference on Research on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay J. Jiang and David W. Conrath. 1997. Semantic Sim- ilarity Based on Corpus Statistics and Lexical Taxon- omy. 
Proceedings of the 10th International Confer- ence on Research on Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A Bit-String Longest-Common-Subsequence Algorithm. Information Processing Letters", "authors": [ { "first": "Lloyd", "middle": [], "last": "Allison", "suffix": "" }, { "first": "Trevor", "middle": [ "I" ], "last": "Dix", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lloyd Allison and Trevor I. Dix. 1986. A Bit-String Longest-Common-Subsequence Algorithm. Informa- tion Processing Letters, 23(5).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Rogets Thesaurus and Semantic Similarity", "authors": [ { "first": "Mario", "middle": [], "last": "Jarmasz", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mario Jarmasz and Stan Szpakowicz. 2003. Rogets The- saurus and Semantic Similarity. In Proceedings of the Conference on Recent Advances in Natural Language Processing.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The WEKA Data Mining Software: an Update", "authors": [ { "first": "Mark", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Holmes", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Pfahringer", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Reutemann", "suffix": "" }, { "first": "Ian", "middle": [ "H" ], "last": "Witten", "suffix": "" } ], "year": 2009, "venue": "ACM SIGKDD Explorations Newsletter", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA Data Mining Software: an Update. ACM SIGKDD Explorations Newsletter, 11(1).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Advances in Record-Linkage Methodology as Applied to Matching the 1985 Census of Tampa", "authors": [ { "first": "Matthew", "middle": [ "A" ], "last": "Jaro", "suffix": "" } ], "year": 1989, "venue": "Florida. Journal of the American Statistical Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew A. Jaro. 1989. Advances in Record-Linkage Methodology as Applied to Matching the 1985 Census of Tampa, Florida. 
Journal of the American Statistical Association.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Moses: Open Source Toolkit for Statistical Machine Translation", "authors": [ { "first": "Philip", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcelo", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcelo Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Corpus-based and Knowledge-based Measures of Text Semantic Similarity", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Corley", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and Knowledge-based Measures of Text Semantic Similarity. In Proceedings of the Na- tional Conference on Artificial Intelligence.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "WordNet:: Similarity: Measuring the Relatedness of Concepts", "authors": [ { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Michelizzi", "suffix": "" } ], "year": 2004, "venue": "Demonstration Papers at North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Pedersen, Siddharth Patwardhan and Jason Miche- lizzi. 2004. WordNet:: Similarity: Measuring the Relatedness of Concepts. In Demonstration Papers at North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies. 
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources", "authors": [ { "first": "William", "middle": [ "B" ], "last": "Dolan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William B. Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources. In Proceedings of the 20th International Conference on Computational Linguistics. Association for Com- putational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
run     rank  headlines  OnWN    FNWN    SMT
RUN-1   10    0.6774     0.7667  0.3793  0.3068
RUN-2   7     0.6921     0.7367  0.3793  0.3375
RUN-3   46    0.5276     0.6495  0.4158  0.3082
STS-bl  73    0.5399     0.2828  0.2146  0.2861
", "html": null, "num": null, "text": ". The STS shared task-2013 used the Pearson Correlation Coefficient as the evaluation metric. The results of our experiments are presented inTable 1. The results indicate that the proposed method, RUN-", "type_str": "table" }, "TABREF1": { "content": "", "html": null, "num": null, "text": "CLaC-CORE runs and STS baseline performance", "type_str": "table" }, "TABREF3": { "content": "
Table 2: Best and worst feature combination performance on the test sets
", "html": null, "num": null, "text": "", "type_str": "table" } } } }