{ "paper_id": "S17-2007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:28:41.503262Z" }, "title": "BIT at SemEval-2017 Task 1: Using Semantic Information Space to Evaluate Semantic Textual Similarity", "authors": [ { "first": "Hao", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Institute of Technology", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Heyan", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Institute of Technology", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Ping", "middle": [], "last": "Jian", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Institute of Technology", "location": { "settlement": "Beijing", "country": "China" } }, "email": "pjian@bit.edu.cn" }, { "first": "Yuhang", "middle": [], "last": "Guo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Institute of Technology", "location": { "settlement": "Beijing", "country": "China" } }, "email": "guoyuhang@bit.edu.cn" }, { "first": "Chao", "middle": [], "last": "Su", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Institute of Technology", "location": { "settlement": "Beijing", "country": "China" } }, "email": "suchao@bit.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents three systems for semantic textual similarity (STS) evaluation at SemEval-2017 STS task. One is an unsupervised system and the other two are supervised systems which simply employ the unsupervised one. All our systems mainly depend on the semantic information space (SIS), which is constructed based on the semantic hierarchical taxonomy in WordNet, to compute non-overlapping information content (IC) of sentences. 
Our team ranked 2nd among 31 participating teams by the primary score, the mean Pearson correlation coefficient (PCC) over 7 tracks, and achieved the best performance on the Track 1 (AR-AR) dataset.", "pdf_parse": { "paper_id": "S17-2007", "_pdf_hash": "", "abstract": [ { "text": "This paper presents three systems for semantic textual similarity (STS) evaluation at SemEval-2017 STS task. One is an unsupervised system and the other two are supervised systems which simply employ the unsupervised one. All our systems mainly depend on the semantic information space (SIS), which is constructed based on the semantic hierarchical taxonomy in WordNet, to compute non-overlapping information content (IC) of sentences. Our team ranked 2nd among 31 participating teams by the primary score, the mean Pearson correlation coefficient (PCC) over 7 tracks, and achieved the best performance on the Track 1 (AR-AR) dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Given two snippets of text, semantic textual similarity (STS) measures the degree of equivalence in the underlying semantics. STS is a basic but important issue with a multitude of application areas in natural language processing (NLP), such as example-based machine translation (EBMT), machine translation evaluation, information retrieval (IR), question answering (QA), and text summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The SemEval STS task has become the most famous activity for STS evaluation in recent years and the STS shared task has been held annually since 2012 (Agirre et al., 2012 (Agirre et al., , 2013 (Agirre et al., , 2014 (Agirre et al., , 2015 (Agirre et al., , 2016 Cer et al., 2017) , as part of the SemEval/*SEM family of workshops. 
The organizers have set up publicly available datasets of sentence pairs with similarity scores from human annotators, amounting to more than 16,000 sentence pairs for training and evaluation, and have attracted a large number of teams with a variety of systems to participate in the competitions.", "cite_spans": [ { "start": 150, "end": 170, "text": "(Agirre et al., 2012", "ref_id": "BIBREF3" }, { "start": 171, "end": 193, "text": "(Agirre et al., , 2013", "ref_id": "BIBREF4" }, { "start": 194, "end": 216, "text": "(Agirre et al., , 2014", "ref_id": "BIBREF1" }, { "start": 217, "end": 239, "text": "(Agirre et al., , 2015", "ref_id": "BIBREF0" }, { "start": 240, "end": 262, "text": "(Agirre et al., , 2016", "ref_id": "BIBREF2" }, { "start": 263, "end": 280, "text": "Cer et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Generally, STS systems can be divided into two categories: one kind is unsupervised systems (Li et al., 2006; Mihalcea et al., 2006; Islam and Inkpen, 2008; Han et al., 2013; Sultan et al., 2014b; Wu and Huang, 2016), some of which appeared long ago, when there wasn't enough training data; the other kind is supervised systems (B\u00e4r et al., 2012; \u0160ari\u0107 et al., 2012; Sultan et al., 2015; Rychalska et al., 2016; Brychc\u00edn and Svoboda, 2016), which apply machine learning algorithms, including deep learning, once adequate training data has been constructed. Each kind of method has its own advantages and application areas. 
In this paper, we present three systems: one unsupervised system and two supervised systems that simply make use of the unsupervised one.", "cite_spans": [ { "start": 94, "end": 111, "text": "(Li et al., 2006;", "ref_id": "BIBREF17" }, { "start": 112, "end": 134, "text": "Mihalcea et al., 2006;", "ref_id": "BIBREF21" }, { "start": 135, "end": 158, "text": "Islam and Inkpen, 2008;", "ref_id": "BIBREF13" }, { "start": 159, "end": 176, "text": "Han et al., 2013;", "ref_id": "BIBREF12" }, { "start": 177, "end": 198, "text": "Sultan et al., 2014b;", "ref_id": "BIBREF30" }, { "start": 199, "end": 218, "text": "Wu and Huang, 2016)", "ref_id": "BIBREF33" }, { "start": 341, "end": 359, "text": "(B\u00e4r et al., 2012;", "ref_id": "BIBREF5" }, { "start": 360, "end": 379, "text": "\u0160ari\u0107 et al., 2012;", "ref_id": null }, { "start": 380, "end": 400, "text": "Sultan et al., 2015;", "ref_id": "BIBREF32" }, { "start": 401, "end": 424, "text": "Rychalska et al., 2016;", "ref_id": "BIBREF25" }, { "start": 425, "end": 452, "text": "Brychc\u00edn and Svoboda, 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Following the standard argumentation of information theory, Resnik (1995) proposed the definition of the information content (IC) of a concept as follows:", "cite_spans": [ { "start": 60, "end": 73, "text": "Resnik (1995)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "IC (c) = \u2212 log P(c),", "eq_num": "(1)" } ], "section": "Preliminaries", "sec_num": "2" }, { "text": "where P(c) refers to the statistical frequency of concept c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "Since information content (IC) for multiple words, which sums 
the IC of the non-overlapping concepts, is computationally difficult for knowledge-based methods, IC-related measures were for a long time used for word similarity (Resnik, 1995; Jiang and Conrath, 1997; Lin, 1997) or word weighting (Li et al., 2006; Han et al., 2013) rather than as the core evaluation modules of sentence similarity methods (Wu and Huang, 2016).", "cite_spans": [ { "start": 229, "end": 243, "text": "(Resnik, 1995;", "ref_id": "BIBREF24" }, { "start": 244, "end": 268, "text": "Jiang and Conrath, 1997;", "ref_id": "BIBREF15" }, { "start": 269, "end": 279, "text": "Lin, 1997)", "ref_id": "BIBREF18" }, { "start": 295, "end": 312, "text": "(Li et al., 2006;", "ref_id": "BIBREF17" }, { "start": 313, "end": 330, "text": "Han et al., 2013)", "ref_id": "BIBREF12" }, { "start": 402, "end": 422, "text": "(Wu and Huang, 2016)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "To apply the non-overlapping IC of sentences to STS evaluation, we construct the semantic information space (SIS), which employs the super-subordinate (is-a) relation from the hierarchical taxonomy of WordNet (Wu and Huang, 2016). The space size of a concept is the information content of the concept. SIS is not a traditional orthogonal multidimensional space; rather, it is a space with inclusion relations among concepts. 
Sentences in SIS are represented as real physical spaces rather than as points in a vector space.", "cite_spans": [ { "start": 205, "end": 225, "text": "(Wu and Huang, 2016)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "STS evaluation using SIS", "sec_num": "2.1" }, { "text": "We have the following intuitions about similarity: the similarity between A and B is related to their commonality and their differences; the more commonality and the fewer differences they have, the more similar they are. The maximum similarity is reached when A and B are identical, no matter how much commonality they share (Lin, 1998). The Jaccard coefficient (Jaccard, 1908) is in accordance with these intuitions, and we define the similarity of two sentences s_a and s_b based on it:", "cite_spans": [ { "start": 309, "end": 320, "text": "(Lin, 1998)", "ref_id": "BIBREF19" }, { "start": 360, "end": 375, "text": "(Jaccard, 1908)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "STS evaluation using SIS", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sim (s_a , s_b ) = IC (s_a \u2229 s_b ) / IC (s_a \u222a s_b ).", "eq_num": "(2)" } ], "section": "STS evaluation using SIS", "sec_num": "2.1" }, { "text": "The quantity of the intersection of the information provided by the two sentences can be obtained from that of their union:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STS evaluation using SIS", "sec_num": "2.1" }, { "text": "IC (s_a \u2229 s_b ) = IC (s_a ) + IC (s_b ) \u2212 IC (s_a \u222a s_b ). (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STS evaluation using SIS", "sec_num": "2.1" }, { "text": "So the remaining problem is how to compute the quantity of the union of the non-overlapping information of sentences. 
We calculate it by employing the inclusion-exclusion principle from combinatorics to obtain the total IC of sentence s_a; the same method is used for sentence s_b and for both sentences together:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STS evaluation using SIS", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "IC (s_a ) = IC (\u222a_{i=1}^{n} c_{a_i} ) = \u2211_{k=1}^{n} (\u22121)^{k\u22121} \u2211_{1 \u2264 i_1 < \u22ef < i_k \u2264 n} IC (c_{a_{i_1}} \u2229 \u22ef \u2229 c_{a_{i_k}} ),", "eq_num": "(4)" } ], "section": "STS evaluation using SIS", "sec_num": "2.1" } ], "ref_entries": { "TABREF1": { "content": "... much better than our IC-based systems of Run 1 (0.0758) and Run 2 (0.0584), which are without embedding modules.", "text": "Performances on SemEval 2017 STS evaluation datasets.", "num": null, "type_str": "table", "html": null }, "TABREF2": { "content": "
Set Size Run 1 Run 2 Run 3
Development 1500 0.8194 0.8240 0.8291
Test 1379 0.7942 0.7962 0.8085
", "text": "http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark", "num": null, "type_str": "table", "html": null }, "TABREF3": { "content": "", "text": "Performances of runs on STS benchmark.", "num": null, "type_str": "table", "html": null } } } }
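The SIS similarity described in Section 2.1 can be illustrated with a short executable sketch. This is a toy illustration, not the authors' system: WordNet and the full SIS construction are replaced by a tiny invented is-a taxonomy with made-up frequencies. The non-overlapping union IC is computed by splitting each concept's IC into per-node increments along its path to the root, so information shared through common ancestors is counted once, which yields the same quantity as the inclusion-exclusion expansion in the text.

```python
from math import log

# Toy is-a taxonomy standing in for WordNet: child -> parent (None = root).
# All concept names and frequencies below are invented for illustration.
PARENT = {"entity": None, "animal": "entity", "dog": "animal",
          "cat": "animal", "artifact": "entity", "car": "artifact"}
FREQ = {"entity": 100, "animal": 60, "dog": 20,
        "cat": 15, "artifact": 40, "car": 25}
TOTAL = FREQ["entity"]

def ic(concept):
    # Eq. (1), Resnik (1995): IC(c) = -log P(c).
    return -log(FREQ[concept] / TOTAL)

def increments(concept):
    # Split IC(c) into non-overlapping pieces along the path to the root:
    # each node on the path contributes IC(node) - IC(parent(node)).
    out = {}
    while concept is not None:
        parent = PARENT[concept]
        out[concept] = ic(concept) - (ic(parent) if parent else 0.0)
        concept = parent
    return out

def union_ic(concepts):
    # Non-overlapping IC of a concept set: information shared through common
    # ancestors is counted only once -- the quantity the inclusion-exclusion
    # formula computes.
    merged = {}
    for c in concepts:
        merged.update(increments(c))
    return sum(merged.values())

def sim(sent_a, sent_b):
    # Eqs. (2)-(3): sim = IC(intersection) / IC(union),
    # with IC(intersection) = IC(a) + IC(b) - IC(union).
    union = union_ic(sent_a + sent_b)
    return (union_ic(sent_a) + union_ic(sent_b) - union) / union

print(sim(["dog"], ["dog"]))                          # identical -> 1.0
print(sim(["dog"], ["cat"]) > sim(["dog"], ["car"]))  # shared "animal" -> True
```

In this toy setting, "dog" vs. "cat" retains the IC of the shared ancestor "animal" as intersection information, while "dog" vs. "car" shares only the zero-IC root, so the similarity ordering matches the intuition behind Eq. (2).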