{ "paper_id": "S13-1009", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:42:45.914786Z" }, "title": "SXUCFN-Core: STS Models Integrating FrameNet Parsing Information", "authors": [ { "first": "Sai", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanxi University", "location": { "settlement": "Taiyuan", "country": "China" } }, "email": "" }, { "first": "Ru", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanxi University", "location": { "settlement": "Taiyuan", "country": "China" } }, "email": "liru@sxu.edu.cn" }, { "first": "Xia", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanxi University", "location": { "settlement": "Taiyuan", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes our system submitted to *SEM 2013 Semantic Textual Similarity (STS) core task which aims to measure semantic similarity of two given text snippets. In this shared task, we propose an interpolation STS model named Model_LIM integrating Fra-meNet parsing information, which has a good performance with low time complexity compared with former submissions.", "pdf_parse": { "paper_id": "S13-1009", "_pdf_hash": "", "abstract": [ { "text": "This paper describes our system submitted to *SEM 2013 Semantic Textual Similarity (STS) core task which aims to measure semantic similarity of two given text snippets. In this shared task, we propose an interpolation STS model named Model_LIM integrating Fra-meNet parsing information, which has a good performance with low time complexity compared with former submissions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The goal of Semantic Textual Similarity (STS) is to measure semantic similarity of two given text snippets. STS has been recently proposed by Agirre et al. (2012) as a pilot task, which has close relationship with both tasks of Textual Entailment and Paraphrase, but not equivalent with them and it is more directly applicable to a number of NLP tasks such as Question Answering (Lin and Pantel, 2001) , Text Summarization (Hatzivassiloglou et al., 1999) , etc. And yet, the acquiring of sentence similarity has been the most important and basic task in STS. Therefore, the STS core task of *SEM 2013 conference, is formally defined as the degree of semantic equivalence between two sentences as follows:", "cite_spans": [ { "start": 142, "end": 162, "text": "Agirre et al. (2012)", "ref_id": "BIBREF0" }, { "start": 379, "end": 401, "text": "(Lin and Pantel, 2001)", "ref_id": "BIBREF1" }, { "start": 423, "end": 454, "text": "(Hatzivassiloglou et al., 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\uf0b7 5: completely equivalent, as they mean the same thing. \uf0b7 4: mostly equivalent, but some unimportant details differ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\uf0b7 3: roughly equivalent, but some important information differs/missing. \uf0b7 2: not equivalent, but share some details. \uf0b7 1: not equivalent, but are on the same topic. 
\u2022 0: on different topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we attempt to integrate semantic information into the STS task in addition to the lower-level word and syntactic information. Evaluation results show that our STS model benefits from the semantic parsing information of the two text snippets. The rest of the paper is organized as follows: Section 2 reviews prior research on STS. Section 3 illustrates the three models measuring text similarity. Section 4 describes the linear interpolation model in detail. Section 5 provides the experimental results on the development set as well as the official results on all published datasets. Finally, Section 6 summarizes our paper with directions for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several techniques have been developed for STS. The typical approach to finding the similarity between two text segments is simple word matching. To improve on this simple method, Mihalcea et al. (2006) combine two corpus-based and six knowledge-based measures of word similarity, but their algorithm is computationally expensive. In contrast, our method treats words and texts in essentially the same way.", "cite_spans": [ { "start": 200, "end": 222, "text": "Mihalcea et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the 2012 STS task, 35 teams participated and submitted 88 runs. The two top-scoring systems were UKP and TakeLab. The former (B\u00e4r et al., 2012) uses a simple log-linear regression model to combine multiple text similarity measures (related to content, structure and style) of varying complexity, while the latter (\u0160ari\u0107 et al., 2012) uses a support vector regression model with multiple features measuring word-overlap similarity and syntactic similarity.", "cite_spans": [ { "start": 126, "end": 144, "text": "(B\u00e4r et al., 2012)", "ref_id": "BIBREF4" }, { "start": 329, "end": 349, "text": "(\u0160ari\u0107 et al., 2012)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Both systems score over 80%, far exceeding a simple lexical baseline, but they share one characteristic: they integrate lexical and syntactic information without semantic information, especially FrameNet parsing information. In addition, the complexity of these algorithms is very high. Therefore, we propose a different and simple model integrating FrameNet parsing information in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this paper, we propose an interpolation model that combines the results of three similarity models, based on words, WordNet and FrameNet, which are called sim_WD(\u00b7), sim_WN(\u00b7) and sim_FN(\u00b7) respectively. 
The overall similarity sim_LIM(S_1, S_2) between a pair of texts S_1, S_2 is computed with the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathrm{sim}_{LIM}(S_1, S_2) = \\omega_1 \\cdot \\mathrm{sim}_{WD}(S_1, S_2) + \\omega_2 \\cdot \\mathrm{sim}_{WN}(S_1, S_2) + \\omega_3 \\cdot \\mathrm{sim}_{FN}(S_1, S_2)", "eq_num": "(1)" } ], "section": "Linear Interpolation Model", "sec_num": "3" }, { "text": "In which, \u03c9_1, \u03c9_2 and \u03c9_3 are the weights of the respective similarity models, with \u03c9_1 + \u03c9_2 + \u03c9_3 = 1, and they are all positive hyperparameters. We now describe the three models used in this equation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Interpolation Model", "sec_num": "3" }, { "text": "This model is motivated by the Vector Space Model (Salton et al., 1975). We represent each sentence as a vector in the multidimensional token space. Let S_c denote the set of all words in the c-th text snippet (c = 1, 2); the bag of words is W = S_1 \u222a S_2. The similarity of a pair of sentences is then formally expressed as:", "cite_spans": [ { "start": 46, "end": 67, "text": "(Salton et al., 1975)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on Words", "sec_num": "3.1" }, { "text": "\\mathrm{sim}_{WD}(S_1, S_2) = \\frac{\\sum_{i=1}^{|W|} w_{1,i} \\cdot w_{2,i}}{\\sqrt{\\sum_{i=1}^{|W|} w_{1,i}^2} \\cdot \\sqrt{\\sum_{i=1}^{|W|} w_{2,i}^2}} \\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on Words", "sec_num": "3.1" }, { "text": "In which, we can find w_{c,i} (i = 1, 2, \u2026, |W|; c = 1, 2) by solving:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on Words", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w_{c,i} = \\begin{cases} 1, & w_i \\in S_c \\\\ 0, & \\text{otherwise} \\end{cases}", "eq_num": "(3)" } ], "section": "Similarity Based on Words", "sec_num": "3.1" }, { "text": "From these two equations, we can see that the more identical words a text pair shares, the more similar the two snippets are. By intuition, however, many high-frequency function words are not helpful to the estimation of the similarity given in Eq. (2). Therefore, in the preprocessing stage, we compute the word frequencies per dataset and then remove the high-frequency words (top 1% of the frequency list) from each segment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on Words", "sec_num": "3.1" },
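{ "text": "To make Eqs. (2) and (3) concrete, the following minimal Python sketch (our illustration; the function name sim_wd and the token-list interface are ours, not the authors') computes the binary bag-of-words cosine:

import math

def sim_wd(s1_tokens, s2_tokens):
    # Eq. (3): w_{c,i} = 1 iff word i of W = S1 u S2 occurs in S_c,
    # so the sums in Eq. (2) reduce to set sizes.
    s1, s2 = set(s1_tokens), set(s2_tokens)
    if not s1 or not s2:
        return 0.0
    overlap = len(s1 & s2)  # numerator of Eq. (2)
    return overlap / (math.sqrt(len(s1)) * math.sqrt(len(s2)))

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on Words", "sec_num": null },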
{ "text": "This model measures semantic similarity with the help of resources that specifically encode relations between words or concepts, such as WordNet (Fellbaum, 1998). We use the algorithm of Lin (1998) on WordNet to compute the similarity between two words a and b, which we call sim_Lin(a, b). Let S_1, S_2 be the two word sets of the two given text snippets; we then use the method below:", "cite_spans": [ { "start": 146, "end": 162, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF10" }, { "start": 190, "end": 200, "text": "Lin (1998)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on WordNet", "sec_num": "3.2" }, { "text": "\\mathrm{sim}_{WN}(S_1, S_2) = \\frac{\\sum_{i=1}^{\\min(|S_1|, |S_2|)} \\max_j \\mathrm{sim}_{Lin}(a_i, b_j)}{\\max(|S_1|, |S_2|)} \\quad (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on WordNet", "sec_num": "3.2" }, { "text": "In which, a_i \u2208 S_1 and b_j \u2208 S_2. For the aggregation in the numerator of Eq. (4), we tried max(\u00b7), avg(\u00b7) and mid(\u00b7) respectively, and found max(\u00b7) to be the best.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on WordNet", "sec_num": "3.2" },
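{ "text": "A minimal sketch of Eq. (4) in Python, assuming NLTK's WordNet interface (the helper names sim_lin and sim_wn, the noun-only restriction, and the Brown information-content file are our choices for illustration; the paper does not specify its implementation):

from nltk.corpus import wordnet as wn, wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')  # information content for Lin (1998)

def sim_lin(a, b):
    # Best Lin similarity over all noun synset pairs of words a and b.
    best = 0.0
    for sa in wn.synsets(a, pos=wn.NOUN):
        for sb in wn.synsets(b, pos=wn.NOUN):
            best = max(best, sa.lin_similarity(sb, brown_ic) or 0.0)
    return best

def sim_wn(s1_tokens, s2_tokens):
    # Eq. (4): each word of the shorter snippet keeps its best match in the
    # other snippet (max aggregation); normalize by the longer length.
    shorter, longer = sorted((set(s1_tokens), set(s2_tokens)), key=len)
    if not longer:
        return 0.0
    total = sum(max((sim_lin(a, b) for b in longer), default=0.0) for a in shorter)
    return total / len(longer)

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on WordNet", "sec_num": null },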
{ "text": "The FrameNet lexicon (Fillmore et al., 2003) is a rich linguistic resource containing expert knowledge about lexical and predicate-argument semantics in English. In a sentence, word or phrase tokens that evoke a frame are known as targets. Each frame definition also includes a set of frame elements, or roles, corresponding to different aspects of the concept represented by the frame, such as participants, props, and attributes. We use the term argument to refer to a sequence of word tokens annotated as filling a frame role.", "cite_spans": [ { "start": 21, "end": 44, "text": "(Fillmore et al., 2003)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "All the data are automatically parsed by SEMAFOR 1 (Das and Smith, 2012; Das and Smith, 2011). Figure 1 shows the parser output for a sentence pair from the Microsoft Research Video Description Corpus, with annotated targets, frames and role-argument pairs. It can be noticed that FrameNet parsing information gives some clues about the similarity of two given snippets, and we think that integrating this information can improve the accuracy of the STS task. For example, the sentences in Figure 1 both illustrate \"somebody is moving\". However, our model depends on the precision of that parser; if it were improved, the results on the STS task would be better. The words in bold correspond to targets, which evoke semantic frames that are denoted in capital letters. Every frame is shown in a distinct color; the arguments of each frame are annotated with the same color and marked below the sentence, at different levels; the spans marked in the dotted-line blocks fill a specific role.", "cite_spans": [ { "start": 51, "end": 72, "text": "(Das and Smith, 2012;", "ref_id": "BIBREF8" }, { "start": 73, "end": 93, "text": "Das and Smith, 2011)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 95, "end": 103, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 491, "end": 499, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "For a given sentence S_c (c = 1, 2), let F_c = \u27e8f_1, f_2, \u2026, f_n\u27e9 be its set of evoked frames (n is the number of evoked frames), T_c = \u27e8t_1, t_2, \u2026, t_n\u27e9 the set of target words, one per frame, and \u211b_c = {R_{c,1}, R_{c,2}, \u2026, R_{c,n}} the set of roles (namely, frame elements), where each frame contributes a set of arguments R_{c,i} = {r_j} (i = 1, 2, \u2026, n; j is an integer greater than or equal to zero). Take Figure 1 as an example: T_1 = \u27e8\u2026\u27e9, F_1 = \u27e8\u2026\u27e9, \u211b_1 = {R_{1,1}, R_{1,2}}, R_{1,1} = {girls}, R_{1,2} = {girls, on the stage}; T_2 = \u27e8\u2026\u27e9, F_2 = \u27e8\u2026\u27e9, \u211b_2 = {R_{2,1}, R_{2,2}, R_{2,3}, R_{2,4}}, R_{2,1} = {women}, R_{2,2} = {models}, R_{2,3} = {women models}, R_{2,4} = {down}.", "cite_spans": [], "ref_spans": [ { "start": 413, "end": 421, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "In order to compute sim_FN(\u00b7) simply, we again use an interpolation model to combine the similarities based on target words, sim_Tg(\u00b7), frames, sim_Fr(\u00b7), and frame relationships, sim_Re(\u00b7). They are estimated as follows. When computing the similarity on the target-word level, sim_Tg(S_1, S_2), we again consider each sentence as a vector of target words, as seen in Eq. (5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "T = T_1 \u222a T_2; \\mathrm{sim}_{Tg}(S_1, S_2) = \\frac{\\sum_{i=1}^{|T|} t_{1,i} \\cdot t_{2,i}}{\\sqrt{\\sum_{i=1}^{|T|} t_{1,i}^2} \\cdot \\sqrt{\\sum_{i=1}^{|T|} t_{2,i}^2}} \\quad (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "In which, we can find t_{c,i} (i = 1, 2, \u2026, |T|; c = 1, 2) by solving:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t_{c,i} = \\begin{cases} 1, & t_i \\in T_c \\\\ 0, & \\text{otherwise} \\end{cases}", "eq_num": "(6)" } ], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "Let sim_Fr(S_1, S_2) be the similarity on the frame level, as shown in Eq. (7), with each sentence as a vector of frames. We define f_{1,i}, f_{2,i} analogously to w_{c,i} in Eq. (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "F = F_1 \u222a F_2; \\mathrm{sim}_{Fr}(S_1, S_2) = \\frac{\\sum_{i=1}^{|F|} f_{1,i} \\cdot f_{2,i}}{\\sqrt{\\sum_{i=1}^{|F|} f_{1,i}^2} \\cdot \\sqrt{\\sum_{i=1}^{|F|} f_{2,i}^2}} \\quad (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "Before computing the role relationship between the pair of sentences, we should find the containment relationship of each pair of frames in one sentence. We use the following rule to define containment: given two frames f_{c,i}, f_{c,j} in a sentence S_c, if R_{c,i} \u2282 R_{c,j}, then f_{c,j} contains f_{c,i}; that is, f_{c,i} is a child of f_{c,j}. After that, we add the pair \u27e8f_{c,i}, f_{c,j}\u27e9 into the set of frame relationships Rlt_c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" },
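{ "text": "A minimal Python sketch of this containment rule (the (frame, argument-set) representation and the proper-subset test are our reading of the rule above, for illustration only):

def frame_relations(frames):
    # frames: list of (frame_name, argument_token_set) pairs for one sentence.
    rlt = set()
    for i, (f_i, r_i) in enumerate(frames):
        for j, (f_j, r_j) in enumerate(frames):
            if i != j and r_i < r_j:  # r_i proper subset of r_j: f_j contains f_i
                rlt.add((f_i, f_j))
    return rlt

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": null },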
{ "text": "We consider the relationship between two frames in a sentence as a 2-tuple, and again use Figure 1 as an example: Rlt_1 = {\u27e8\u2026\u27e9}; Rlt_2 = {\u27e8\u2026\u27e9, \u27e8\u2026\u27e9}.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 100, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "Besides, we treat these tuples exactly as we treated words in Eq. (3): over Rlt = Rlt_1 \u222a Rlt_2, the value of rlt_{c,i} (c = 1, 2) is 1 if the i-th tuple occurs in Rlt_c and 0 otherwise. The similarity on the frame-relationship level, sim_Re(S_1, S_2), presents each sentence as a vector of frame relationships, as shown in Eq. (8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Rlt = Rlt_1 \u222a Rlt_2; \\mathrm{sim}_{Re}(S_1, S_2) = \\frac{\\sum_{i=1}^{|Rlt|} rlt_{1,i} \\cdot rlt_{2,i}}{\\sqrt{\\sum_{i=1}^{|Rlt|} rlt_{1,i}^2} \\cdot \\sqrt{\\sum_{i=1}^{|Rlt|} rlt_{2,i}^2}}", "eq_num": "(8)" } ], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "Lastly, the shallow semantic similarity between the two given sentences is computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathrm{sim}_{FN}(S_1, S_2) = \\alpha \\cdot \\mathrm{sim}_{Tg}(S_1, S_2) + \\beta \\cdot \\mathrm{sim}_{Fr}(S_1, S_2) + \\gamma \\cdot \\mathrm{sim}_{Re}(S_1, S_2)", "eq_num": "(9)" } ], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "In which, \u03b1 + \u03b2 + \u03b3 = 1, and they are all positive hyperparameters. As shown in Figure 2, we plot the Pearson correlation (vertical axis) against the combinations of these parameters (horizontal axis) on all 2013 STS train data (the 2012 STS data). We notice that the Pearson correlation generally fluctuates, and its peak is found at ID = 32; Table 1 lists the combination of \u03b1, \u03b2, \u03b3 corresponding to each ID (the same IDs also index the combinations of \u03c9_1, \u03c9_2, \u03c9_3 on the horizontal axis of Figure 3). In Figure 2, the peak is marked by a vertical line at ID = 32.", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 87, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 343, "end": 350, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 494, "end": 502, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 509, "end": 517, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Similarity Based on FrameNet", "sec_num": "3.3" }, { "text": "Eq. (1) is a very simple linear interpolation model, and we tune its hyperparameters on the whole 2012 STS data. As shown in Figure 3, we plot the Pearson correlation (vertical axis) for the different combinations of the parameters \u03c9_1, \u03c9_2 and \u03c9_3 (horizontal axis). We notice that the Pearson correlation generally fluctuates with a dropping tendency in most cases, and the correlation peak appears at ID = 13, which in Table 1 is \u03c9_1 = 0.8, \u03c9_2 = 0.1, \u03c9_3 = 0.1. In Figure 3, the peak is marked by a vertical line at ID = 13.", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 133, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 422, "end": 429, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 469, "end": 477, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Tuning Hyperparameters", "sec_num": "4" },
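{ "text": "The paper reports only the peak of this sweep (ID = 13: \u03c9_1 = 0.8, \u03c9_2 = 0.1, \u03c9_3 = 0.1). A coarse grid search like the following Python sketch is one way to reproduce such a sweep (the function name, the 0.1 step size and the list-based interface are our assumptions; the authors may have enumerated the simplex differently):

from itertools import product
from scipy.stats import pearsonr

def tune_weights(wd, wn, fn, gold, step=0.1):
    # wd, wn, fn: per-pair scores of the three models; gold: human scores.
    # Enumerate w1 + w2 + w3 = 1 on a coarse grid and keep the combination
    # with the highest Pearson correlation against the gold scores.
    best_w, best_r = None, -1.0
    n = int(round(1 / step))
    for i, j in product(range(n + 1), repeat=2):
        w1, w2 = i * step, j * step
        w3 = 1.0 - w1 - w2
        if w3 < -1e-9:
            continue  # outside the simplex
        pred = [w1 * a + w2 * b + w3 * c for a, b, c in zip(wd, wn, fn)]
        r = pearsonr(pred, gold)[0]
        if r > best_r:
            best_w, best_r = (w1, w2, w3), r
    return best_w, best_r

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning Hyperparameters", "sec_num": null },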
ID = 13).", "cite_spans": [], "ref_spans": [ { "start": 121, "end": 129, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 410, "end": 418, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 452, "end": 459, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Tuning Hyperparameters", "sec_num": "4" }, { "text": "We submit four runs: the first one (Model_WD) is based on word similarity; the second one (Mod-el_WN) which is only using the similarity based on WordNet, is submitted with the team name of SXULLL; the third one (Model_FN) which uses FrameNet similarity defined in Section 3.3; and the last one in which we combine the three similarities described in Section 4 together with an interpolation model. In addition, we map our outputs multiply by five to the [0-5] range.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "It is worth notice that in the first model, we lowercase all words and remove all numbers and punctuations. And in the third model, we extract all frame-semantic roles with SEMFOR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In the experiment, we use eight datasets totallynamely MSRpar, MSRvid, SMTeuroparl, OnWN, SMTnews, headlines, FNWN and SMT -with their gold standard file to evaluate the performance of the submitted systems. Evaluation is carried out using the official scorer which computes Pearson correlation between the human rated similarity scores and the system's output. The final measure is the score that is weighted by the number of text pairs in each dataset (\"Mean\"). See Agirre et al. (2012) for a full description of the metrics.", "cite_spans": [ { "start": 468, "end": 488, "text": "Agirre et al. (2012)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "There is no new train data in 2013, so we use 2012 data as train data. From Table 2 , 3 we can see that the Model_LIM has better performance than the other three models. From Table 2 , we notice that all the models except Model_FN, are apt to handle the SMTeuroparl that involves long sentences. For Model_FN, it performs well in computing on short and similarly structured texts such as MSRvid (This will be confirmed in test data later). Although WordNet and FrameNet model has a mere weight of 20% in Model_LIM (i.e. \u03c9 1 +\u03c9 2 = 0.2), the run which integrate more semantic information displays a con-sistent performance across the three train sets (especially in SMTeuroparl, the Pearson correlation rises from 0.5178 to 0.66808), when compared to the other three. The 2012 STS test results obtained by first ranking UKP_run2 and baseline system are shown in Table 3 , it is interesting to notice that performance of Model_WD is similar with Model_LIM except on MSRvid, the text segments in which there are fewer identical words because of the semantic equivalence. For Model_FN, we can see it performs well on short and similarly structured texts (MSRvid and OnWN) as mentioned before. This is because the precision of FrameNet parser took effect on the FrameNet-based models performance. Compared to UKP_run2, the performance of Mod-el_LIM is obviously better on OnWN set, while on SMTeuroparl and SMTnews this model scores slightly lower than UKP_run2. Finally, Mod-el_LIM did not perform best on MSRpar and MSRvid compared with UKP_run2, but it has low time complexity and integrates semantic information. 
{ "text": "There is no new train data in 2013, so we use the 2012 data as train data. From Tables 2 and 3 we can see that Model_LIM performs better than the other three models. From Table 2, we notice that all the models except Model_FN are apt at handling SMTeuroparl, which involves long sentences. Model_FN performs well on short and similarly structured texts such as MSRvid (this is confirmed on the test data later). Although the WordNet and FrameNet models have a mere weight of 20% in Model_LIM (i.e., \u03c9_2 + \u03c9_3 = 0.2), this run, which integrates more semantic information, displays a consistent performance across the three train sets (especially on SMTeuroparl, where the Pearson correlation rises from 0.5178 to 0.66808) when compared to the other three. The 2012 STS test results of the first-ranking system UKP_run2 and the baseline system are shown in Table 3. It is interesting to notice that the performance of Model_WD is similar to that of Model_LIM except on MSRvid, whose text segments share fewer identical words while still being semantically equivalent. Model_FN, as mentioned before, performs well on short and similarly structured texts (MSRvid and OnWN); this is because the precision of the FrameNet parser directly affects the performance of the FrameNet-based model. Compared to UKP_run2, the performance of Model_LIM is obviously better on the OnWN set, while on SMTeuroparl and SMTnews it scores slightly lower than UKP_run2. Finally, Model_LIM did not perform best on MSRpar and MSRvid compared with UKP_run2, but it has low time complexity and integrates semantic information. Table 4 provides the official results of our submitted systems, along with their rank on each dataset. Generally, all results outperform the baseline, which is based on simple word overlap. However, the performance of Model_LIM is not always the best of the three runs on each dataset. From the table we can note that a particular model always performs well on the dataset built from the lexicon on which the model is based, e.g., Model_WN on OnWN and Model_FN on FNWN. Besides, Model_WD and Model_LIM have almost the same scores except on the OnWN set, because Model_LIM includes the WordNet resource. As seen from the system ranks in the table, the optimal runs among the three submitted systems remain with Model_LIM. Not only does Model_LIM perform best on two occasions, but Model_FN also ranks in the top ten twice, on FNWN and SMT respectively; we owe this result to the contribution of FrameNet parsing information.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 88, "text": "Tables 2", "ref_id": "TABREF3" }, { "start": 171, "end": 178, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 855, "end": 862, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1602, "end": 1609, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Experiments on STS 2012 Data", "sec_num": "5.1" }, { "text": "We have tested all the models on the published STS datasets. Compared with the official results, the Model_LIM system is apt at handling the SMT dataset, which involves long sentences. Moreover, this system integrates only word, WordNet and FrameNet semantic information, so it has low time complexity. There is still much room for improvement in our work. For example, we will attempt to use multivariate regression software to tune the hyperparameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "See http://www.ark.cs.cmu.edu/SEMAFOR/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 6th International Workshop on Semantic Evaluation, in conjunction with the 1st Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "385--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity. In Proceedings of the 6th International Workshop on Semantic Evaluation, in conjunction with the 1st Joint Conference on Lexical and Computational Semantics, 385-393.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Discovery of Inference Rules for Question Answering", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "4", "pages": "343--360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2001. Discovery of Inference Rules for Question Answering. 
Natural Language Engineering, 7(4):343-360.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Detecting Text Similarity over Short Passages: Exploring Linguistic Feature Combinations via Machine Learning", "authors": [ { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "Judith", "middle": [ "L" ], "last": "Klavans", "suffix": "" }, { "first": "Eleazar", "middle": [], "last": "Eskin", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "224--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasileios Hatzivassiloglou, Judith L. Klavans, and Eleazar Eskin. 1999. Detecting Text Similarity over Short Passages: Exploring Linguistic Feature Combinations via Machine Learning. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 224-231.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Corpus-based and Knowledge-based Measures of Text Semantic Similarity", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Corley", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the National Conference on Artificial Intelligence", "volume": "21", "issue": "", "pages": "775--780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and Knowledge-based Measures of Text Semantic Similarity. In Proceedings of the National Conference on Artificial Intelligence, 21(1):775-780.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "UKP: Computing Semantic Textual Similarity by Combining Multiple Content Similarity Measures", "authors": [ { "first": "Daniel", "middle": [], "last": "B\u00e4r", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 6th International Workshop on Semantic Evaluation, in conjunction with the 1st Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "435--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel B\u00e4r, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. UKP: Computing Semantic Textual Similarity by Combining Multiple Content Similarity Measures. 
In Proceedings of the 6th International Workshop on Semantic Evaluation, in conjunction with the 1st Joint Conference on Lexical and Computational Semantics, 435-440.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "TakeLab: Systems for Measuring Semantic Text Similarity", "authors": [ { "first": "Frane", "middle": [], "last": "\u0160ari\u0107", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Mladen", "middle": [], "last": "Karan", "suffix": "" }, { "first": "Jan", "middle": [], "last": "\u0160najder", "suffix": "" }, { "first": "Bojana Dalbelo", "middle": [], "last": "Ba\u0161i\u0107", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 6th International Workshop on Semantic Evaluation, in conjunction with the 1st Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "441--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frane \u0160ari\u0107, Goran Glava\u0161, Mladen Karan, Jan \u0160najder, and Bojana Dalbelo Ba\u0161i\u0107. 2012. TakeLab: Systems for Measuring Semantic Text Similarity. In Proceedings of the 6th International Workshop on Semantic Evaluation, in conjunction with the 1st Joint Conference on Lexical and Computational Semantics, 441-448.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Vector Space Model for Automatic Indexing", "authors": [ { "first": "G", "middle": [], "last": "Salton", "suffix": "" }, { "first": "A", "middle": [], "last": "Wong", "suffix": "" }, { "first": "C", "middle": [ "S" ], "last": "Yang", "suffix": "" } ], "year": 1975, "venue": "Communications of the ACM", "volume": "18", "issue": "11", "pages": "613--620", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Salton, A. Wong, and C. S. Yang. 1975. A Vector Space Model for Automatic Indexing. Communications of the ACM, 18(11):613-620.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Background to FrameNet", "authors": [ { "first": "C", "middle": [ "J" ], "last": "Fillmore", "suffix": "" }, { "first": "C", "middle": [ "R" ], "last": "Johnson", "suffix": "" }, { "first": "M", "middle": [ "R L" ], "last": "Petruck", "suffix": "" } ], "year": 2003, "venue": "International Journal of Lexicography", "volume": "16", "issue": "", "pages": "235--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. J. Fillmore, C. R. Johnson, and M. R. L. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16:235-250.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Graph-Based Lexicon Expansion with Sparsity-Inducing Penalties", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "677--687", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dipanjan Das and Noah A. Smith. 2012. Graph-Based Lexicon Expansion with Sparsity-Inducing Penalties. 
In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, 677-687.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semi-Supervised Frame-Semantic Parsing for Unknown Predicates", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1435--1444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dipanjan Das and Noah A. Smith. 2011. Semi-Supervised Frame-Semantic Parsing for Unknown Predicates. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 1435-1444.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An Information-Theoretic Definition of Similarity", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "296--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. An Information-Theoretic Definition of Similarity. In Proceedings of the International Conference on Machine Learning, 296-304.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "This is a pair of sentences in the 2013 STS training data: (a) Girls are walking on the stage; (b) Women models are walking down a catwalk.", "type_str": "figure" }, "FIGREF1": { "uris": null, "num": null, "text": "This graph shows the variation of the Pearson correlation (vertical axis) on all 2013 STS train data (the 2012 STS data), with numbers (horizontal axis) indicating the different combinations of \u03b1, \u03b2, \u03b3 in Eq. (9).", "type_str": "figure" }, "FIGREF2": { "uris": null, "num": null, "text": "This graph shows the variation of the Pearson correlation (vertical axis) on all 2013 STS train data (the 2012 STS data), with numbers (horizontal axis) indicating the different combinations of \u03c9_1, \u03c9_2, \u03c9_3 in Eq. (1).", "type_str": "figure" }, "TABREF1": { "content": "", "type_str": "table", "html": null, "text": "Different combinations of \u03b1, \u03b2, \u03b3 (\u03b1 + \u03b2 + \u03b3 = 1), indexed by the ID on the horizontal axis of Figure 2. This table also applies to the different combinations of \u03c9_1, \u03c9_2, \u03c9_3 (\u03c9_1 + \u03c9_2 + \u03c9_3 = 1), indexed by the ID on the horizontal axis of Figure 3.", "num": null }, "TABREF3": { "content": "
", "type_str": "table", "html": null, "text": "Performances of the four models on 2012 train data. The highest correlation in each column is given in bold.", "num": null }, "TABREF5": { "content": "
", "type_str": "table", "html": null, "text": "Performances of our three models as well as the baseline and UKP_run2 (that is ranked 1 in last STS task) results on 2012 test data. The highest correlation in each column is given in bold.", "num": null }, "TABREF6": { "content": "
System	headlines	OnWN	FNWN	SMT	Mean
Baseline0.5399 (66)0.2828 (80)0.2146 (66)0.2861 (65)0.3639 (73)
Model_WD0.6806 (24)0.5355 (44)0.3181 (48)0.3980 (4)0.5198 (27)
Model_WN0.4840 (78)0.7146 (12)0.0415 (83)0.1543 (86)0.3944 (69)
Model_FN0.4881 (76)0.6146 (27)0.4237 (9)0.3844 (6)0.4797 (46)
Model_LIM0.6761 (29)0.6481 (23)0.3025 (51)0.4003 (3)0.5458 (14)
", "type_str": "table", "html": null, "text": "Model_WN in OnWN, Model_FN in FNWN. Besides, Model_WD and Model_LIM almost have same scores except in OnWN set, because in Model_LIM is included with WordNet resource.", "num": null }, "TABREF7": { "content": "", "type_str": "table", "html": null, "text": "Performances of our systems as well as baseline on STS 2013 individual test data, accompanied by their rank (out of 90) shown in brackets. Scores in bold denote significant improvements over the baseline.", "num": null } } } }