{ "paper_id": "S13-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:42:51.666263Z" }, "title": "UMCC_DLSI: Textual Similarity based on Lexical-Semantic features", "authors": [ { "first": "Alexander", "middle": [], "last": "Ch\u00e1vez", "suffix": "", "affiliation": {}, "email": "alexander.chavez@umcc.cu" }, { "first": "Antonio", "middle": [ "Fern\u00e1ndez" ], "last": "Orqu\u00edn", "suffix": "", "affiliation": {}, "email": "" }, { "first": "H\u00e9ctor", "middle": [], "last": "D\u00e1vila", "suffix": "", "affiliation": {}, "email": "hector.davila@umcc.cu" }, { "first": "Yoan", "middle": [], "last": "Guti\u00e9rrez", "suffix": "", "affiliation": {}, "email": "yoan.gutierrez@umcc.cu" }, { "first": "Armando", "middle": [], "last": "Collazo", "suffix": "", "affiliation": {}, "email": "armando.collazo@umcc.cu" }, { "first": "Jos\u00e9", "middle": [ "I" ], "last": "Abreu", "suffix": "", "affiliation": {}, "email": "jose.abreu@umcc.cu" }, { "first": "Andr\u00e9s", "middle": [], "last": "Montoyo", "suffix": "", "affiliation": {}, "email": "montoyo@dlsi.ua.es" }, { "first": "Rafael", "middle": [], "last": "Mu\u00f1oz", "suffix": "", "affiliation": {}, "email": "rafael@dlsi.ua.es" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the specifications and results of UMCC_DLSI system, which participated in the Semantic Textual Similarity task (STS) of SemEval-2013. Our supervised system uses different types of lexical and semantic features to train a Bagging classifier used to decide the correct option. Related to the different features we can highlight the resource ISR-WN used to extract semantic relations among words and the use of different algorithms to establish semantic and lexical similarities. In order to establish which features are the most appropriate to improve STS results we participated with three runs using different set of features. Our best run reached the position 44 in the official ranking, obtaining a general correlation coefficient of 0.61.", "pdf_parse": { "paper_id": "S13-1015", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the specifications and results of UMCC_DLSI system, which participated in the Semantic Textual Similarity task (STS) of SemEval-2013. Our supervised system uses different types of lexical and semantic features to train a Bagging classifier used to decide the correct option. Related to the different features we can highlight the resource ISR-WN used to extract semantic relations among words and the use of different algorithms to establish semantic and lexical similarities. In order to establish which features are the most appropriate to improve STS results we participated with three runs using different set of features. Our best run reached the position 44 in the official ranking, obtaining a general correlation coefficient of 0.61.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "SemEval-2013 (Agirre et al., 2013) presents the task Semantic Textual Similarity (STS) again. In STS, the participating systems must examine the degree of semantic equivalence between two sentences. 
The goal of this task is to create a unified framework for the evaluation of semantic textual similarity modules and to characterize their impact on NLP applications.", "cite_spans": [ { "start": 13, "end": 34, "text": "(Agirre et al., 2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "STS is related to the Textual Entailment (TE) and Paraphrase tasks. The main difference is that STS assumes a bidirectional graded equivalence between the pair of textual snippets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the case of TE, the equivalence is directional (e.g. a student is a person, but a person is not necessarily a student). In addition, STS differs from TE and Paraphrase in that, rather than being a binary yes/no decision, STS is a graded notion of similarity (e.g. a student is more similar to a person than a dog is to a person).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This graded bidirectional equivalence is useful for NLP tasks such as Machine Translation (MT), Information Extraction (IE), Question Answering (QA), and Summarization. Several semantic tasks could be added as modules in the STS framework, \"such as Word Sense Disambiguation and Induction, Lexical Substitution, Semantic Role Labeling, Multiword Expression detection and handling, Anaphora and Co-reference resolution, Time and Date resolution and Named Entity, among others\" 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This edition of SemEval-2013 keeps the same scoring scheme as the first edition in 2012. The output of the different systems was compared to the reference scores provided by the SemEval-2013 gold standard file, which range from five to zero according to the following criteria 2 : (5) \"The two sentences are equivalent, as they mean the same thing\". (4) \"The two sentences are mostly equivalent, but some unimportant details differ\". (3) \"The two sentences are roughly equivalent, but some important information differs/missing\". (2) \"The two sentences are not equivalent, but share some details\". (1) \"The two sentences are not equivalent, but are on the same topic\". (0) \"The two sentences are on different topics\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of 2013 pilot task", "sec_num": "1.1" }, { "text": "After this introduction, the rest of the paper is organized as follows. Section 2 reviews related work. Section 3 presents our system architecture and describes the different runs. In Section 4 we describe the different features used in our system. Results and a discussion are provided in Section 5, and finally we conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of 2013 pilot task", "sec_num": "1.1" }, { "text": "There is more extensive literature on measuring the similarity between documents than between sentences. Perhaps the most recent scenario is the competition of SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity (Aguirre and Cerd, 2012) .
In SemEval-2012, different tools and resources were used, such as stop word lists, multilingual corpora, dictionaries, acronyms, and paraphrase tables, \"but WordNet was the most used resource, followed by monolingual corpora and Wikipedia\" (Aguirre and Cerd, 2012) .", "cite_spans": [ { "start": 241, "end": 265, "text": "(Aguirre and Cerd, 2012)", "ref_id": "BIBREF2" }, { "start": 513, "end": 537, "text": "(Aguirre and Cerd, 2012)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "According to Aguirre, generic NLP tools were widely used; among those that stand out are tools for lemmatization and POS-tagging (Aguirre and Cerd, 2012) . Word sense disambiguation, semantic role labeling, and time and date resolution were used on a smaller scale. In addition, knowledge-based and distributional methods were highly used. Aguirre and Cerd remarked in (Aguirre and Cerd, 2012 ) that alignment and/or statistical machine translation software, lexical substitution, string similarity, textual entailment and machine translation evaluation software were used to a lesser extent. It can be noted that machine learning was widely used to combine and tune components.", "cite_spans": [ { "start": 130, "end": 154, "text": "(Aguirre and Cerd, 2012)", "ref_id": "BIBREF2" }, { "start": 359, "end": 382, "text": "(Aguirre and Cerd, 2012", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Most of the knowledge-based methods \"obtain a measure of relatedness by utilizing lexical resources and ontologies such as WordNet (Miller et al., 1990b) to measure definitional overlap, term distance within a graphical taxonomy, or term depth in the taxonomy as a measure of specificity\" (Banea et al., 2012) .", "cite_spans": [ { "start": 131, "end": 153, "text": "(Miller et al., 1990b)", "ref_id": "BIBREF22" }, { "start": 289, "end": 309, "text": "(Banea et al., 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Some scholars (Corley and Mihalcea, June 2005) have argued \"the fact that a comprehensive metric of text semantic similarity should take into account the relations between words, as well as the role played by the various entities involved in the interactions described by each of the two sentences\". This idea is captured by the Principle of Compositionality, which posits that the meaning of a complex expression is determined by the meanings of its constituent expressions and the rules used to combine them (Werning et al., 2005) . In that article, Corley and Mihalcea combined metrics of word-to-word similarity and language models into a formula, and they argue that this is a potentially good indicator of the semantic similarity of the two input sentences. They modeled the semantic similarity of a sentence as a function of the semantic similarity of the component words (Corley and Mihalcea, June 2005) .", "cite_spans": [ { "start": 20, "end": 52, "text": "(Corley and Mihalcea, June 2005)", "ref_id": "BIBREF9" }, { "start": 523, "end": 545, "text": "(Werning et al., 2005)", "ref_id": null }, { "start": 896, "end": 928, "text": "(Corley and Mihalcea, June 2005)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "One of the top scoring systems at SemEval-2012 (\u0160ari\u0107 et al., 2012) tended to use most of the aforementioned resources and tools.
They predict the human ratings of sentence similarity using a support-vector regression model with multiple features measuring word-overlap similarity and syntax similarity. They also compute the similarity between sentences using the semantic alignment of lemmas. First, they compute the word similarity between all pairs of lemmas from the first to the second sentence, using either knowledge-based or corpus-based semantic similarity. They named this method Greedy Lemma Aligning Overlap.", "cite_spans": [ { "start": 47, "end": 67, "text": "(\u0160ari\u0107 et al., 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Daniel B\u00e4r presented the UKP system, which performed best in the Semantic Textual Similarity (STS) task at SemEval-2012 in two out of three metrics. It uses a simple log-linear regression model, trained on the training data, to combine multiple text similarity measures of varying complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "As we can see in Figure 1 , our three runs begin with the pre-processing of SemEval-2013's training set. Every sentence pair is tokenized, lemmatized and POS-tagged using the Freeling 2.2 tool (Atserias et al., 2006) . Afterwards, several methods and algorithms are applied in order to extract all the features for our Machine Learning System (MLS). Each run uses a particular group of features. Run 1 (named MultiSemLex) is our main run; it takes into account all the extracted features and trains a model with a Bagging classifier (Breiman, 1996 ) (using REPTree). The training corpus was provided by the SemEval-2013 competition, specifically by the Semantic Textual Similarity task.", "cite_spans": [ { "start": 189, "end": 212, "text": "(Atserias et al., 2006)", "ref_id": null }, { "start": 527, "end": 541, "text": "(Breiman, 1996", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 17, "end": 25, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "System architecture and description of the runs", "sec_num": "3" }, { "text": "Run 2 (named MultiLex) and Run 3 (named MultiSem) use the same classifier, but with different features. Run 2 uses (see Figure 1 ) features extracted from the Lexical-Semantic Metrics (LS-M) described in Section 4.1 and from the Lexical-Semantic Alignment (LS-A) described in Section 4.2.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 137, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "System architecture and description of the runs", "sec_num": "3" }, { "text": "On the other hand, Run 3 uses features extracted only from the Semantic Alignment (SA) described in Section 4.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System architecture and description of the runs", "sec_num": "3" }, { "text": "As a result, we obtain three trained models capable of estimating the similarity value between two phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System architecture and description of the runs", "sec_num": "3" }, { "text": "Finally, we test our system with the SemEval-2013 test set (see Table 14 with the results of our three runs).
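As a rough illustration of this training setup, the following minimal sketch uses scikit-learn's BaggingRegressor over a decision tree as a stand-in for Weka-style Bagging with REPTree (the paper does not specify the toolkit); the feature matrix is synthetic and merely stands in for the extracted lexical and semantic features.

```python
# Minimal sketch of the training setup: a Bagging ensemble of regression trees
# fit on per-pair feature vectors against gold 0-5 similarity scores.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 17)           # one row per sentence pair, one column per feature
y = rng.uniform(0.0, 5.0, 200)  # gold similarity scores in the 0-5 range

model = BaggingRegressor(DecisionTreeRegressor(max_depth=8),
                         n_estimators=10, random_state=0)
print(cross_val_score(model, X, y, cv=10).mean())  # ten-fold cross-validation
model.fit(X, y)                 # final model used to score unseen sentence pairs
```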
The following section describes the feature extraction process.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 72, "text": "Table 14", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "System architecture and description of the runs", "sec_num": "3" }, { "text": "When two phrases are very similar, one sentence is often lexically overlapped by the other to a high degree. Inspired by this fact, we developed various algorithms that measure the level of overlap by counting the matching words in a pair of phrases. In our system, we used lexical and semantic similarity measures as features for the MLS. Other features were extracted from a lexical-semantic sentence alignment and from a variant using only a semantic alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of the features used in the Machine Learning System", "sec_num": "4" }, { "text": "We have used well-known string-based similarity measures like: Needleman-Wunsch (sequence alignment), Smith-Waterman (sequence alignment), Smith-Waterman-Gotoh, Smith-Waterman-Gotoh-Windowed-Affine, Jaro, Jaro-Winkler, Chapman-Length-Deviation, Chapman-Mean-Length, QGram-Distance, Block-Distance, Cosine Similarity, Dice Similarity, Euclidean Distance, Jaccard Similarity, Matching Coefficient, Monge-Elkan and Overlap-Coefficient. These algorithms were obtained from an API (Application Program Interface) of the SimMetrics library v1.5 for .NET 2.0 3 . We obtained 17 features for our MLS from these similarity measures. Using Levenshtein's edit distance (LED), we also implemented two different algorithms in order to obtain the alignment of the phrases. In the first one, we took the value of the alignment to be the LED between the two sentences. Contrary to (Tatu et al., 2006) , we do not remove the punctuation or stop words from the sentences, nor do we consider different costs for the transformation operations, and we used all the operations (deletion, insertion and substitution).", "cite_spans": [ { "start": 856, "end": 875, "text": "(Tatu et al., 2006)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity measures", "sec_num": "4.1" }, { "text": "The second one is a variant that we named Double Levenshtein's Edit Distance (DLED) (see Table 9 for details). For this algorithm, we used LED to measure the distance between the phrases, but in order to compare the words, we used LED again (Fern\u00e1ndez et al., 2012; Fern\u00e1ndez Orqu\u00edn et al., 2009) .", "cite_spans": [ { "start": 240, "end": 264, "text": "(Fern\u00e1ndez et al., 2012;", "ref_id": null }, { "start": 265, "end": 295, "text": "Fern\u00e1ndez Orqu\u00edn et al., 2009)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Similarity measures", "sec_num": "4.1" }, { "text": "Another distance we used is an extension of LED named Extended Distance (in Spanish, distancia extendida, DEx) (see (Fern\u00e1ndez et al., 2012; Fern\u00e1ndez Orqu\u00edn et al., 2009) for details). This algorithm is an extension of Levenshtein's algorithm, in which penalties are applied by considering the kind of transformation (insertion, deletion, substitution, or no operation), the position at which it was carried out, and the character involved in the operation.
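Before completing the description of DEx, the two LED-based measures introduced above can be sketched as follows. This is a minimal reading of the description, assuming whitespace tokenization and reusing the ≤ 2 word-matching threshold of LevDoble (see Table 9); it is not the authors' exact implementation.

```python
def led(a, b):
    """Classic Levenshtein edit distance between two sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def dled(s1, s2, word_threshold=2):
    """Double LED: LED over words, where two words count as equal
    if their own character-level LED is at most word_threshold."""
    w1, w2 = s1.lower().split(), s2.lower().split()
    m, n = len(w1), len(w2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if led(w1[i - 1], w2[j - 1]) <= word_threshold else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

print(led("kitten", "sitting"))  # 3
print(dled("A polar bear is running towards a group of walruses",
           "A polar bear is chasing a group of walruses"))  # 2
```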
In addition to the cost matrices used by Levenshtein's algorithm, DEx also obtains the Longest Common Subsequence (LCS) (Hirschberg, 1977) and other helpful attributes for determining the similarity between strings in a single iteration. It is worth noting that the inclusion of all these penalizations makes the DEx algorithm a good candidate for our approach.", "cite_spans": [ { "start": 116, "end": 140, "text": "(Fern\u00e1ndez et al., 2012;", "ref_id": null }, { "start": 141, "end": 171, "text": "Fern\u00e1ndez Orqu\u00edn et al., 2009)", "ref_id": "BIBREF13" }, { "start": 588, "end": 606, "text": "(Hirschberg, 1977)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity measures", "sec_num": "4.1" }, { "text": "In our previous work (Fern\u00e1ndez Orqu\u00edn et al., 2009) , DEx demonstrated excellent results when it was compared with other distances such as (Levenshtein, 1965) , (Neeedleman and Wunsch, 1970) , (Winkler, 1999) . We also used as a feature the Minimal Semantic Distances (Breadth First Search (BFS)) obtained between the most relevant concepts of both sentences. The relevant concepts belong to the semantic resources integrated in ISR-WN (Guti\u00e9rrez et al., 2011; 2010a) , such as WordNet (Miller et al., 1990a) , WordNet Affect (Strapparava and Valitutti, 2004) , SUMO (Niles and Pease, 2001 ) and Semantic Classes (Izquierdo et al., 2007) . Those concepts were obtained after applying the Association Ratio (AR) measure between concepts and words over each sentence. (We refer the reader to (Guti\u00e9rrez et al., 2010b) for a further description.)", "cite_spans": [ { "start": 21, "end": 52, "text": "(Fern\u00e1ndez Orqu\u00edn et al., 2009)", "ref_id": "BIBREF13" }, { "start": 135, "end": 154, "text": "(Levenshtein, 1965)", "ref_id": "BIBREF20" }, { "start": 157, "end": 186, "text": "(Neeedleman and Wunsch, 1970)", "ref_id": "BIBREF24" }, { "start": 189, "end": 204, "text": "(Winkler, 1999)", "ref_id": "BIBREF31" }, { "start": 415, "end": 439, "text": "(Guti\u00e9rrez et al., 2011;", "ref_id": "BIBREF16" }, { "start": 440, "end": 446, "text": "2010a)", "ref_id": "BIBREF14" }, { "start": 460, "end": 482, "text": "(Miller et al., 1990a)", "ref_id": null }, { "start": 500, "end": 533, "text": "(Strapparava and Valitutti, 2004)", "ref_id": "BIBREF28" }, { "start": 541, "end": 563, "text": "(Niles and Pease, 2001", "ref_id": "BIBREF25" }, { "start": 587, "end": 611, "text": "(Izquierdo et al., 2007)", "ref_id": "BIBREF18" }, { "start": 766, "end": 791, "text": "(Guti\u00e9rrez et al., 2010b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity measures", "sec_num": "4.1" }, { "text": "Another attribute obtained by the system was the sum of the smallest distances (using QGram-Distance) between each word or lemma of the first phrase and the words of the second phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity measures", "sec_num": "4.1" }, { "text": "The system also extracted, as another attribute, the sum of the smallest distances (using Levenshtein) among the stems, chunks and entities of both phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity measures", "sec_num": "4.1" }, { "text": "Another algorithm that we created is the Lexical-Semantic Alignment. In this algorithm, we try to align the phrases by their lemmas. If the lemmas coincide, we look for coincidences among parts-of-speech 4 (POS), and the phrases are then realigned using both.
If the words do not share the same POS, they will not be aligned. Up to this point, we have only taken into account a lexical alignment. From now on, we apply a semantic variant. After this whole process, the non-aligned words are analyzed taking into account their WordNet relations (synonymy, hyponymy, hyperonymy, derivationally-related-form, similar-to, verbal group, entailment and cause-to relation), and a set of equivalences like abbreviations of months, countries, capitals, days and currencies. In the case of the hyperonymy and hyponymy relations, words are aligned if there is a word in the first sentence that is in the same relation (hyperonymy or hyponymy) with another one in the second sentence. For the relations \"cause-to\" and \"implication\", the words are aligned if there is a word in the first sentence that causes or implicates another one in the second sentence. All the other types of relations are handled in a bidirectional way; that is, there is an alignment if a word of the first sentence is a synonym of another one belonging to the second one, or vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical-Semantic alignment", "sec_num": "4.2" }, { "text": "Finally, we obtain a value we called the alignment relation. This value is calculated as AR = AW / SW, where AR is the final alignment value, AW is the number of aligned words, and SW is the number of words of the shorter phrase. The AR value is also another feature for our system. Other extracted attributes are the number of aligned words and the number of non-aligned words. The core of the alignment is carried out in different ways, from which several attributes are obtained. The words can be compared by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical-Semantic alignment", "sec_num": "4.2" }, { "text": "• the part-of-speech. • the morphology and the part-of-speech. • the lemma and the part-of-speech. • the morphology, part-of-speech, and relationships of WordNet. • the lemma, part-of-speech, and relationships of WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical-Semantic alignment", "sec_num": "4.2" }, { "text": "This alignment method depends on calculating the semantic similarity between sentences based on an analysis of the relations, in ISR-WN, of the words that form them. First, the two sentences are pre-processed with Freeling and the words are classified according to their POS, creating different groups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Alignment", "sec_num": "4.3" }, { "text": "The distance between two words is the WordNet-based distance between the most probable sense of each word in the pair, in contrast to our previous system in SemEval-2012. In that version, we selected the senses by applying a double Hungarian Algorithm (Kuhn, 1955) ; for more details, please refer to (Fern\u00e1ndez et al., 2012) .
The distance is computed according to equation 1:", "cite_spans": [ { "start": 270, "end": 282, "text": "(Kuhn, 1955)", "ref_id": "BIBREF19" }, { "start": 318, "end": 342, "text": "(Fern\u00e1ndez et al., 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Alignment", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Dist(X, Y) = \\sum_{i=0}^{n-1} P_i \\cdot R(M[i], M[i+1])", "eq_num": "(1)" } ], "section": "Semantic Alignment", "sec_num": "4.3" }, { "text": "where M is the collection of synsets corresponding to the minimum path between nodes X and Y, n is the length of M minus one, R is a function that searches for the relation connecting nodes M[i] and M[i+1], and P_i is a weight associated with the relation found by R (see Table 1 ). Let us see the following example: • We take pair 99 of the MSRvid corpus (from the SemEval-2013 training set) with a small transformation, in order to better explain our method. Original pair A: A polar bear is running towards a group of walruses. B: A polar bear is chasing a group of walruses. Transformed pair: A1: A polar bear runs towards a group of cats. B1: A whale chases a group of dogs.", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 247, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Semantic Alignment", "sec_num": "4.3" }, { "text": "Later on, using equation 1, a matrix with the distances between all groups of both phrases is created (see Table 2 ).", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Relation", "sec_num": null }, { "text": "Using the Hungarian Algorithm (Kuhn, 1955) for Minimum Cost Assignment, each group of the first sentence is checked against each element of the second sentence, and the rest are marked as words that were not aligned.", "cite_spans": [ { "start": 54, "end": 66, "text": "(Kuhn, 1955)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "GROUPS polar bear", "sec_num": null }, { "text": "In the previous example, the words \"towards\" and \"polar\" were not aligned, so the number of non-aligned words is two. There is only one perfect match: \"group-group\" (a match with cost=0). The length of the shortest sentence is four. Table 3 shows the results of this analysis. This process has to be repeated for nouns (see Table 4 ), verbs, adjectives, adverbs, prepositions, conjunctions, pronouns, determinants, modifiers, digits and date times. However, the tables are created only with the similar groups of the two sentences. Several attributes are extracted from the pair of sentences (see Table 3 and Table 5 ): three attributes considering only verbs, only nouns, only adjectives, only adverbs, only prepositions, only conjunctions, only pronouns, only determinants, only modifiers, only digits, and only date times.
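Before enumerating these attributes, the assignment step just described can be made concrete with a minimal sketch; SciPy's linear_sum_assignment implements the Hungarian method, and the distance matrix below is illustrative (echoing Table 4) rather than computed with equation 1.

```python
# Minimum-cost assignment between the noun groups of the transformed pair.
import numpy as np
from scipy.optimize import linear_sum_assignment

rows = ["whale", "group", "dogs"]   # nouns of sentence B1
cols = ["bear", "group", "cats"]    # nouns of sentence A1
dist = np.array([[2.0, 9.0, 2.0],   # whale vs bear / group / cats
                 [9.0, 0.0, 9.0],   # group (exact match, cost 0)
                 [1.0, 9.0, 1.0]])  # dogs (9.0 marks dissimilar groups)

r, c = linear_sum_assignment(dist)  # Hungarian method, minimum total cost
for i, j in zip(r, c):
    print(rows[i], "->", cols[j], "cost", dist[i, j])
print("total distance of matching:", dist[r, c].sum())
print("exact coincidences:", int((dist[r, c] == 0).sum()))  # the group-group pair
```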
These attributes are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GROUPS polar bear", "sec_num": null }, { "text": "• Number of exact coincidences • Total distance of matching • Number of words that do not match Many groups have particular features according to their parts-of-speech. The group of nouns has one more feature that indicates whether the two phrases have the same grammatical number (plural or singular). For this feature, we take the average of the number of each noun in the phrase as the number of the phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of exact coincidence", "sec_num": null }, { "text": "For the group of adjectives, we added a feature indicating the distance between the nouns modified by each pair of aligned adjectives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of exact coincidence", "sec_num": null }, { "text": "For the verbs, we search for the nouns that precede the verb and the nouns that follow it, and we define two groups. We calculated the distance needed to align each group with that of every pair of aligned verbs. The verbs have another feature that specifies whether all verbs are in the same tense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of exact coincidence", "sec_num": null }, { "text": "With the adverbs, we search for the verb modified by each adverb, and we calculate their distance for all aligned pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of exact coincidence", "sec_num": null }, { "text": "With the determinants and the adverbs, we detect whether the members of an aligned pair both express negations (like don't or do not) or not. Finally, we determine whether the two phrases have the same principal action. For all these new features, we rely on the Freeling tool.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number of exact coincidence", "sec_num": null }, { "text": "As a result, we finally obtain 42 attributes from this alignment method. It is important to remark that this alignment process tries to find, for each word from the rows (see Table 4 ), a corresponding word from the columns.", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 186, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Number of exact coincidence", "sec_num": null }, { "text": "From the alignment process, we extract different features that help our MLS achieve better results. Table 6 shows the group of features with lexical and semantic support, based on WordNet relations (named F1). Each of them is named with a prefix, a hyphen and a suffix. Table 7 describes the meaning of every prefix, and Table 8 shows the meaning of the suffixes. ", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 106, "text": "Table 6", "ref_id": null }, { "start": 270, "end": 277, "text": "Table 7", "ref_id": null }, { "start": 321, "end": 329, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Description of the alignment feature", "sec_num": "4.4" }, { "text": "NWunch, SWaterman, SWGotoh, SWGAffine, Jaro, JaroW, CLDeviation, CMLength, QGramD, BlockD, CosineS, DiceS, EuclideanD, JaccardS, MaCoef, MongeElkan, OverlapCoef.
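To make a few of these measures concrete, three of the token-overlap coefficients named above (JaccardS, DiceS, OverlapCoef) can be sketched as follows, assuming simple lower-cased whitespace tokenization rather than the SimMetrics library's exact tokenizers.

```python
# Token-overlap similarity measures over two phrases.
def tokens(s):
    return set(s.lower().split())

def jaccard(s1, s2):
    a, b = tokens(s1), tokens(s2)
    return len(a & b) / len(a | b) if a | b else 1.0

def dice(s1, s2):
    a, b = tokens(s1), tokens(s2)
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

def overlap_coefficient(s1, s2):
    a, b = tokens(s1), tokens(s2)
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

pair = ("A polar bear is running towards a group of walruses.",
        "A polar bear is chasing a group of walruses.")
print(jaccard(*pair), dice(*pair), overlap_coefficient(*pair))
```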
Other features we extracted were obtained from the following similarity measures (named F2) (see Table 9 for details). We used another group, named F3, with lexical measures extracted from the SimMetrics library (see Table 10 for details).", "cite_spans": [], "ref_spans": [ { "start": 259, "end": 266, "text": "Table 9", "ref_id": null }, { "start": 371, "end": 379, "text": "Table 10", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Features", "sec_num": null }, { "text": "Finally, we used a group of five features (named F4), extracted from an all-against-all alignment (see Table 11 for details).", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 106, "text": "Table 11", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Features", "sec_num": null }, { "text": "For the training process, we used a supervised learning framework, including the whole training set as a training corpus and using ten-fold cross-validation with the classifier mentioned in Section 3 (experimentally selected).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of the training phase", "sec_num": "4.5" }, { "text": "As we can see in Table 12 , the attributes corresponding to Test 1 (only lexical attributes) obtain a correlation of 0.7534. On the other side, the attributes of Test 2 (lexical features with semantic support) obtain a correlation of 0.7549, and all features together obtain 0.7987. This demonstrates the necessity of tackling the similarity problem from a multidimensional point of view (see Test 3 in Table 12 ). ", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 25, "text": "Table 12", "ref_id": "TABREF2" }, { "start": 410, "end": 418, "text": "Table 12", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Description of the training phase", "sec_num": "4.5" }, { "text": "The Semantic Textual Similarity task of SemEval-2013 offered two official measures to rank the systems 5 : Mean, the main evaluation value, and Rank, which gives the rank of the submission as ordered by the \"mean\" result. The SMT dataset comes from DARPA GALE HTER and HyTER; one sentence is an MT output and the other is a reference translation generated by human post-editing. Table 13 . Test Core Datasets.", "cite_spans": [], "ref_spans": [ { "start": 385, "end": 393, "text": "Table 13", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Result and discussion", "sec_num": "5" }, { "text": "Using these measures, our second run (Run 2) obtained the best results (see Table 14 ). As we can see in Table 14 , our lexical run obtained our best result, while at the same time our other runs obtained worse results. This demonstrates that tackling this problem by combining multiple lexical similarity measures produces better results on this specific test corpus.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 84, "text": "Table 14", "ref_id": "TABREF2" }, { "start": 105, "end": 113, "text": "Table 14", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Result and discussion", "sec_num": "5" }, { "text": "To explain Table 14 , note that the column captions in the top row mean: 1-Headlines, 2-OnWN, 3-FNWN, 4-SMT and 5-Mean. Run 1 is our main run, which combines all attributes (lexical and semantic). Table 14 shows the results of all the runs for the different test corpora.
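For reference, the correlation values reported throughout this section are Pearson coefficients between the system scores and the gold-standard scores (the official STS evaluation measure); a minimal computation sketch with illustrative numbers, not actual task data, follows.

```python
# Pearson correlation between gold scores and system outputs.
from scipy.stats import pearsonr

gold = [4.2, 0.8, 3.0, 5.0, 1.5, 2.2]   # illustrative gold similarity scores
pred = [3.9, 1.2, 2.7, 4.6, 1.9, 2.8]   # illustrative system outputs
r, _ = pearsonr(gold, pred)
print(round(r, 4))
```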
As we can see, Run 1 did not obtain the best results among our runs.", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 19, "text": "Table 14", "ref_id": "TABREF2" }, { "start": 234, "end": 242, "text": "Table 14", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Result and discussion", "sec_num": "5" }, { "text": "On the other hand, Run 3 uses more semantic analysis than Run 2; for this reason, Run 3 should have obtained better results than it actually reached over the FNWN corpus, because this corpus is extracted from the FrameNet corpus (Baker et al., 1998 ) (a semantic network). FNWN provides examples with more semantic than lexical content.", "cite_spans": [ { "start": 192, "end": 211, "text": "(Baker et al., 1998", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Result and discussion", "sec_num": "5" }, { "text": "Run 3 obtained a correlation coefficient of 0.8137 over the whole SemEval-2013 training corpus, while Run 2 and Run 1 obtained 0.7976 and 0.8345 respectively with the same classifier (Bagging using REPTree, with ten-fold cross-validation). These results present a contradiction between the test and training evaluations. We think this is a consequence of some obstacles present in the test corpora, for example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result and discussion", "sec_num": "5" }, { "text": "In the headlines corpus there is a great quantity of entities, acronyms and demonyms that our system does not take into account.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result and discussion", "sec_num": "5" }, { "text": "The FNWN corpus is unbalanced with respect to the length of the phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result and discussion", "sec_num": "5" }, { "text": "In the OnWN test corpus, we believe that some evaluations are not adequate in correspondence with the training corpus. For example, in line 7 the proposed gold score was 0.6, although both phrases are semantically similar. The phrases are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result and discussion", "sec_num": "5" }, { "text": "• the act of lifting something • the act of climbing something. We think that 0.6 is not a correct evaluation for this example. Our system's results for this particular case were 4.794 for Run 3, 3.814 for Run 2, and finally 3.695 for Run 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result and discussion", "sec_num": "5" }, { "text": "This paper has introduced a new framework for recognizing Semantic Textual Similarity, which depends on the extraction of several features that can be inferred from a conventional interpretation of a text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future works", "sec_num": "6" }, { "text": "As mentioned in Section 3, we conducted three different runs; these runs differ only in the type of attributes used. We can see in Table 14 that all runs obtained encouraging results. Our best run was placed at the 44th position out of the 90 runs in the SemEval-2013 ranking.
Table 12 and Table 14 show the positions reached by the three different runs and their ranking with respect to the rest of the teams.", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 143, "text": "Table 14", "ref_id": "TABREF2" }, { "start": 275, "end": 297, "text": "Table 12 and Table 14", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Conclusion and future works", "sec_num": "6" }, { "text": "In our participation, we used a MLS that works with features extracted from different strategies: String-Based Similarity Measures, Semantic Similarity Measures, Lexical-Semantic Alignment and Semantic Alignment. We conducted the semantic feature extraction in a multidimensional context using the resource ISR-WN, which allowed us to navigate across several semantic resources (WordNet, WordNet Domains, WordNet Affect, SUMO, SentiWordNet and Semantic Classes).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future works", "sec_num": "6" }, { "text": "Finally, we can conclude that our system performs quite well. In our current work, we show that this approach can be used to correctly classify several examples from the STS task of SemEval-2013. Compared with the best run of the ranking (UMBC_EBIQUITY-ParingWords) (see Table 15 ), our main run has very close results on headlines (1) and SMT (4). As future work, we are planning to enrich our semantic alignment method with Extended WordNet (Moldovan and Rus, 2001) ; we think that with this improvement we can increase the results obtained on texts like those in the OnWN test set.", "cite_spans": [ { "start": 439, "end": 463, "text": "(Moldovan and Rus, 2001)", "ref_id": null } ], "ref_spans": [ { "start": 271, "end": 279, "text": "Table 15", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Conclusion and future works", "sec_num": "6" }, { "text": "It is important to remark that our team has been working in collaboration with the INAOE (Instituto Nacional de Astrof\u00edsica, \u00d3ptica y Electr\u00f3nica) and LIPN (Laboratoire d'Informatique de Paris-Nord, Universit\u00e9 Paris 13) universities, in order to encourage knowledge interchange and open shared technology.
Supporting this collaboration, the INAOE-UPV (Instituto Nacional de Astrof\u00edsica, \u00d3ptica y Electr\u00f3nica and Universitat Polit\u00e8cnica de Val\u00e8ncia) team, specifically in INAOE-UPV run 3, used our semantic distances for nouns, adjectives, verbs and adverbs, as well as lexical attributes like LevDoble, NormLevF, NormLevL and Ext (see the influence of these attributes in Table 12 ).", "cite_spans": [], "ref_spans": [ { "start": 665, "end": 673, "text": "Table 12", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Team Collaboration", "sec_num": "6.1" }, { "text": "http://www.cs.york.ac.uk/semeval-2012/task6/ 2 http://www.cs.york.ac.uk/semeval-2012/task6/data/uploads/datasets/train-readme.txt", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Copyright (c) 2006 by Chris Parkinson, available in http://sourceforge.net/projects/simmetrics/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "(noun, verb, adjective, adverbs, prepositions, conjunctions, pronouns, determinants, modifiers, etc.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://ixa2.si.ehu.es/sts/index.php?option=com_content&vi ew=article&id=53&Itemid=61", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research work has been partially funded by the Spanish Government through the project TEXT-MESS 2.0 (TIN2009-13391-C04), \"An\u00e1lisis de Tendencias Mediante T\u00e9cnicas de Opini\u00f3n Sem\u00e1ntica\" (TIN2012-38536-C03-03) and \"T\u00e9cnicas de Deconstrucci\u00f3n en la Tecnolog\u00edas del Lenguaje Humano\" (TIN2012-31224); and by the Valencian Government through the project PROMETEO (PROMETEO/2009/199).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Shared Task: Semantic Textual Similarity including a Pilot on Typed-Similarity. *SEM 2013: The Second Joint Conference on Lexical and Computational Semantics", "authors": [ { "first": "M", "middle": [], "last": "Diab", "suffix": "" }, { "first": "W", "middle": [], "last": "Guo", "suffix": "" }, { "first": "", "middle": [], "last": "Sem", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Diab and W. Guo. *SEM 2013 Shared Task: Semantic Textual Similarity including a Pilot on Typed-Similarity. *SEM 2013: The Second Joint Conference on Lexical and Computational Semantics, Association for Computational Linguistics, 2013.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Task 6:A Pilot on Semantic Textual Similarity. First Join Conference on Lexical and Computational Semantic (*SEM)", "authors": [ { "first": "E", "middle": [], "last": "Aguirre", "suffix": "" }, { "first": "D", "middle": [], "last": "Cerd", "suffix": "" }, { "first": "", "middle": [], "last": "Semeval", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aguirre, E. and D. Cerd. SemEval 2012 Task 6:A Pilot on Semantic Textual Similarity. First Join Conference on Lexical and Computational Semantic (*SEM), Montr\u00e9al, Canada, Association for Computational Linguistics., 2012.
385-393 p.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "FreeLing 1.3: Syntactic and semantic services in an opensource NLP library", "authors": [ { "first": "M", "middle": [], "last": "Gonz\u00e1lez", "suffix": "" }, { "first": "; L", "middle": [], "last": "Padr\u00f3", "suffix": "" }, { "first": "M", "middle": [], "last": "Padr\u00f3", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC'06", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Gonz\u00e1lez; L. Padr\u00f3 and M. Padr\u00f3. FreeLing 1.3: Syntactic and semantic services in an opensource NLP library. Proceedings of LREC'06, Genoa, Italy, 2006.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The berkeley framenet project", "authors": [ { "first": "C", "middle": [ "F.; C J" ], "last": "Baker", "suffix": "" }, { "first": "J", "middle": [ "B" ], "last": "Fillmore", "suffix": "" }, { "first": "", "middle": [], "last": "Lowe", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th international conference on Computational linguistics", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baker, C. F.; C. J. Fillmore and J. B. Lowe. The berkeley framenet project. Proceedings of the 17th international conference on Computational linguistics-Volume 1, Association for Computational Linguistics, 1998. 86-90 p.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "UNT:A Supervised Synergistic Approach to SemanticText Similarity", "authors": [ { "first": "M", "middle": [], "last": "Mohler", "suffix": "" }, { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2012, "venue": "First Joint Conference on Lexical and Computational Semantics (*SEM)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Mohler and R. Mihalcea. UNT:A Supervised Synergistic Approach to SemanticText Similarity. First Joint Conference on Lexical and Computational Semantics (*SEM), Montr\u00e9al. Canada, Association for Computational Linguistics, 2012. 635-642 p.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bagging predictors Machine learning", "authors": [ { "first": "L", "middle": [], "last": "Breiman", "suffix": "" } ], "year": 1996, "venue": "", "volume": "24", "issue": "", "pages": "123--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Breiman, L. Bagging predictors Machine learning, 1996, 24(2): 123-140.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Measuring the Semantic Similarity of Texts, Association for Computational Linguistic", "authors": [ { "first": "C", "middle": [], "last": "Corley", "suffix": "" }, { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Work shop on Empirical Modeling of Semantic Equivalence and Entailment", "volume": "", "issue": "", "pages": "13--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Corley, C. and R. Mihalcea. Measuring the Semantic Similarity of Texts, Association for Computational Linguistic. Proceedings of the ACL Work shop on Empirical Modeling of Semantic Equivalence and Entailment, pages 13-18, June 2005.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "UMCC_DLSI: Multidimensional Lexical-Semantic Textual Similarity. {*SEM 2012}: The First Joint Conference on Lexical and Computational Semantics", "authors": [ { "first": "A", "middle": [], "last": "Gonz\u00e1lez; R. 
Estrada", "suffix": "" }, { "first": "; Y", "middle": [], "last": "Casta\u00f1eda", "suffix": "" }, { "first": ";", "middle": [ "S" ], "last": "V\u00e1zquez", "suffix": "" }, { "first": "; A", "middle": [], "last": "Montoyo", "suffix": "" }, { "first": "R", "middle": [], "last": "Mu\u00f1oz", "suffix": "" } ], "year": 2012, "venue": "", "volume": "1", "issue": "", "pages": "608--616", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Gonz\u00e1lez; R. Estrada; Y. Casta\u00f1eda; S. V\u00e1zquez; A. Montoyo and R. Mu\u00f1oz. UMCC_DLSI: Multidimensional Lexical- Semantic Textual Similarity. {*SEM 2012}: The First Joint Conference on Lexical and Computational Semantics --Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation {(SemEval 2012)}, Montreal, Canada, Association for Computational Linguistics, 2012. 608--616 p.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Un algoritmo para la extracci\u00f3n de caracter\u00edsticas lexicogr\u00e1ficas en la comparaci\u00f3n de palabras", "authors": [ { "first": "A", "middle": [ "C.; J" ], "last": "Fern\u00e1ndez Orqu\u00edn", "suffix": "" }, { "first": ";", "middle": [ "A" ], "last": "Blanco", "suffix": "" }, { "first": "R. Mu\u00f1oz", "middle": [], "last": "Fundora Rolo", "suffix": "" }, { "first": "", "middle": [], "last": "Guillena", "suffix": "" } ], "year": 2009, "venue": "IV Convenci\u00f3n Cient\u00edfica Internacional CIUM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fern\u00e1ndez Orqu\u00edn, A. C.; J. D\u00edaz Blanco; A. Fundora Rolo and R. Mu\u00f1oz Guillena. Un algoritmo para la extracci\u00f3n de caracter\u00edsticas lexicogr\u00e1ficas en la comparaci\u00f3n de palabras. IV Convenci\u00f3n Cient\u00edfica Internacional CIUM, Matanzas, Cuba, 2009.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Integration of semantic resources based on WordNet. XXVI Congreso de la Sociedad Espa\u00f1ola para el Procesamiento del Lenguaje Natural", "authors": [ { "first": "Y", "middle": [ "; A" ], "last": "Guti\u00e9rrez", "suffix": "" }, { "first": "; A", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "S", "middle": [], "last": "Montoyo", "suffix": "" }, { "first": "", "middle": [], "last": "V\u00e1zquez", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "1135--5948", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guti\u00e9rrez, Y.; A. Fern\u00e1ndez; A. Montoyo and S. V\u00e1zquez. Integration of semantic resources based on WordNet. XXVI Congreso de la Sociedad Espa\u00f1ola para el Procesamiento del Lenguaje Natural, Universidad Polit\u00e9cnica de Valencia, Valencia, SEPLN 2010, 2010a. 161-168 p. 1135- 5948.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "UMCC-DLSI: Integrative resource for disambiguation task", "authors": [ { "first": "Y", "middle": [ "; A" ], "last": "Guti\u00e9rrez", "suffix": "" }, { "first": "; A", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "S", "middle": [], "last": "Montoyo", "suffix": "" }, { "first": "", "middle": [], "last": "V\u00e1zquez", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guti\u00e9rrez, Y.; A. Fern\u00e1ndez; A. Montoyo and S. V\u00e1zquez. 
UMCC-DLSI: Integrative resource for disambiguation task. Proceedings of the 5th International Workshop on Semantic Evaluation, Uppsala, Sweden, Association for Computational Linguistics, 2010b. 427-432 p.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Enriching the Integration of Semantic Resources based on WordNet Procesamiento del Lenguaje Natural", "authors": [ { "first": "Y", "middle": [ "; A" ], "last": "Guti\u00e9rrez", "suffix": "" }, { "first": "; A", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "S", "middle": [], "last": "Montoyo", "suffix": "" }, { "first": "", "middle": [], "last": "V\u00e1zquez", "suffix": "" } ], "year": 2011, "venue": "", "volume": "47", "issue": "", "pages": "249--257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guti\u00e9rrez, Y.; A. Fern\u00e1ndez; A. Montoyo and S. V\u00e1zquez Enriching the Integration of Semantic Resources based on WordNet Procesamiento del Lenguaje Natural, 2011, 47: 249-257.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Algorithms for the longest common subsequence problem", "authors": [ { "first": "D", "middle": [ "S" ], "last": "Hirschberg", "suffix": "" } ], "year": 1977, "venue": "J. ACM", "volume": "24", "issue": "", "pages": "664--675", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hirschberg, D. S. Algorithms for the longest common subsequence problem J. ACM, 1977, 24: 664-675.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A Proposal of Automatic Selection of Coarse-grained Semantic Classes for WSD Procesamiento del Lenguaje Natural", "authors": [ { "first": "R", "middle": [ "; A" ], "last": "Izquierdo", "suffix": "" }, { "first": "G", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 2007, "venue": "", "volume": "39", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Izquierdo, R.; A. Su\u00e1rez and G. Rigau A Proposal of Automatic Selection of Coarse-grained Semantic Classes for WSD Procesamiento del Lenguaje Natural, 2007, 39: 189-196.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The Hungarian Method for the assignment problem Naval Research Logistics Quarterly", "authors": [ { "first": "H", "middle": [ "W" ], "last": "Kuhn", "suffix": "" } ], "year": 1955, "venue": "", "volume": "2", "issue": "", "pages": "83--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuhn, H. W. The Hungarian Method for the assignment problem Naval Research Logistics Quarterly, 1955, 2: 83-97.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Binary codes capable of correcting spurious insertions and deletions of ones. Problems of information Transmission", "authors": [ { "first": "V", "middle": [ "I" ], "last": "Levenshtein", "suffix": "" } ], "year": 1965, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levenshtein, V. I. Binary codes capable of correcting spurious insertions and deletions of ones. Problems of information Transmission. 1965. pp. 
8-17 p.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Introduction to WordNet: An Online", "authors": [ { "first": "G", "middle": [ "A.; R" ], "last": "Miller", "suffix": "" }, { "first": "; C", "middle": [], "last": "Beckwith", "suffix": "" }, { "first": ";", "middle": [ "D" ], "last": "Fellbaum", "suffix": "" }, { "first": "K", "middle": [], "last": "Gross", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "Lexical Database International Journal of Lexicography", "volume": "3", "issue": "4", "pages": "235--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, G. A.; R. Beckwith; C. Fellbaum; D. Gross and K. Miller Introduction to WordNet: An On- line Lexical Database International Journal of Lexicography, 3(4):235-244., 1990b.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Rus Explaining Answers with Extended WordNet ACL", "authors": [ { "first": "D", "middle": [ "I" ], "last": "Moldovan", "suffix": "" }, { "first": "V", "middle": [], "last": "", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moldovan, D. I. and V. Rus Explaining Answers with Extended WordNet ACL, 2001.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A general method applicable to the search for similarities in the amino acid sequence of two proteins", "authors": [ { "first": "S", "middle": [], "last": "Neeedleman", "suffix": "" }, { "first": "C", "middle": [], "last": "Wunsch", "suffix": "" } ], "year": 1970, "venue": "Mol. Biol", "volume": "48", "issue": "443", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neeedleman, S. and C. Wunsch A general method applicable to the search for similarities in the amino acid sequence of two proteins Mol. Biol, 1970, 48(443): 453.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Origins of the IEEE Standard Upper Ontology. Working Notes of the IJCAI-2001 Workshop on the IEEE Standard Upper Ontology", "authors": [ { "first": "I", "middle": [], "last": "Niles", "suffix": "" }, { "first": "A", "middle": [], "last": "Pease", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niles, I. and A. Pease. Origins of the IEEE Standard Upper Ontology. Working Notes of the IJCAI- 2001 Workshop on the IEEE Standard Upper Ontology, Seattle, Washington, USA., 2001.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "TakeLab: Systems for Measuring Semantic Text Similarity", "authors": [ { "first": "J", "middle": [], "last": "\u0160najder", "suffix": "" }, { "first": "B", "middle": [ "D" ], "last": "Basi\u0107", "suffix": "" } ], "year": 2012, "venue": "First Join Conference on Lexical and Computational Semantic (*SEM)", "volume": "", "issue": "", "pages": "385--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. \u0160najder and B. D. Basi\u0107. TakeLab: Systems for Measuring Semantic Text Similarity. Montr\u00e9al, Canada, First Join Conference on Lexical and Computational Semantic (*SEM), pages 385-393. 
Association for Computational Linguistics., 2012.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "WordNet-Affect: an affective extension of WordNet", "authors": [ { "first": "C", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "A", "middle": [], "last": "Valitutti", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strapparava, C. and A. Valitutti. WordNet-Affect: an affective extension of WordNet. Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004), Lisbon, 2004. 1083-1086 p.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "COGEX at the Second Recognizing Textual Entailment Challenge", "authors": [ { "first": "M", "middle": [ "; B" ], "last": "Tatu", "suffix": "" }, { "first": "; J", "middle": [], "last": "Iles", "suffix": "" }, { "first": ";", "middle": [ "N" ], "last": "Slavick", "suffix": "" }, { "first": "D", "middle": [], "last": "Adrian", "suffix": "" }, { "first": "", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Second PASCAL Recognising Textual Entailment Challenge Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tatu, M.; B. Iles; J. Slavick; N. Adrian and D. Moldovan. COGEX at the Second Recognizing Textual Entailment Challenge. Proceedings of the Second PASCAL Recognising Textual Entailment Challenge Workshop, Venice, Italy, 2006. 104-109 p.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The Compositionality of Meaning and Content", "authors": [ { "first": "M", "middle": [], "last": "Werning", "suffix": "" }, { "first": ";", "middle": [ "E" ], "last": "Machery", "suffix": "" }, { "first": "G", "middle": [], "last": "Schurz", "suffix": "" } ], "year": null, "venue": "North and South America by Transaction Books, 2005. p. Linguistics & philosophy", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Werning, M.; E. Machery and G. Schurz. The Compositionality of Meaning and Content, Volume 1: Foundational issues. ontos verlag [Distributed in] North and South America by Transaction Books, 2005. p. Linguistics & philosophy, Bd. 1. 3-937202-52-8.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The state of record linkage and current research problems", "authors": [ { "first": "W", "middle": [], "last": "Winkler", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Winkler, W. The state of record linkage and current research problems. Technical Report, Statistical Research Division, U.S, Census Bureau, 1999.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "System Architecture." }, "TABREF2": { "type_str": "table", "num": null, "text": "shows the weights associated to WordNet relations between two synsets.", "content": "", "html": null }, "TABREF4": { "type_str": "table", "num": null, "text": "Features from the analyzed sentences.", "content": "
Total Distance of optimal Matching | Number of non-aligned Words
15 | 2
", "html": null }, "TABREF5": { "type_str": "table", "num": null, "text": "shows features extracted from the analysis of nouns. Feature extracted from analysis of nouns.", "content": "
GROUPS | bear | group | cats
whale | Dist := 2 | - | Dist := 2
group | - | Dist := 0 | -
dogs | Dist := 1 | - | Dist := 1
Table 4. Distances between groups of nouns.
", "html": null }, "TABREF6": { "type_str": "table", "num": null, "text": "Suffixes for describe each type of alignment.", "content": "
Features
CPA_FCG, CPNA_FCG, SIM_FCG, CPA_LCG, CPNA_LCG, SIM_LCG, CPA_FCGR, CPNA_FCGR, SIM_FCGR, CPA_LCGR, CPNA_LCGR, SIM_LCGR
Table 6. F1. Semantic feature group.
Prefix | Description
CPA | Number of aligned words.
CPNA | Number of non-aligned words.
SIM | Similarity
Table 7. Meaning of each prefix.
Suffix | Compared words by\u2026
FCG | Morphology and POS
LCG | Lemma and POS
FCGR | Morphology, POS and WordNet relation.
LCGR | Lemma, POS and WordNet relation.
Table 8. Suffixes describing each type of alignment.
Feature | Description
LevForma | Levenshtein Distance between two phrases, comparing words by morphology.
LevLema | The same as above, but now comparing by lemma.
LevDoble | Idem, but comparing the words again by Levenshtein and accepting that two words match if their distance is \u2264 2.
DEx | Extended Distance
NormLevF, NormLevL | Normalized forms of LevForma and LevLema.
Table 9. F2. Lexical alignment measures.
", "html": null }, "TABREF7": { "type_str": "table", "num": null, "text": "Lexical Measure from SimMetrics library. Aligning all against all.", "content": "
Feature | Description
AxAQGD_L | All against all, applying QGramD and comparing by the lemmas of the words.
AxAQGD_F | Same as above, but applying QGramD and comparing by morphology.
AxAQGD_LF | Idem, comparing not only by lemma but also by morphology.
AxALev_LF | All against all, applying Levenshtein
", "html": null }, "TABREF11": { "type_str": "table", "num": null, "text": "core test datasets.", "content": "
Run | 1 (Headlines) | 2 (OnWN) | 3 (FNWN) | 4 (SMT) | 5 (Mean) | Rank
UMBC_EBIQUITY-ParingWords (First) | 0.7642 | 0.7529 | 0.5818 | 0.3804 | 0.6181 | 1
(Our) Run 2 | 0.6168 | 0.5557 | 0.3045 | 0.3407 | 0.4833 | 44
Table 15. Comparison with the best run (SemEval-2013).
", "html": null } } } }