{ "paper_id": "S15-2006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:38:20.893021Z" }, "title": "ECNU: Leveraging Word Embeddings to Boost Performance for Paraphrase in Twitter", "authors": [ { "first": "Jiang", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "Shanghai Key Laboratory of Multidimensional Information Processing", "institution": "East China Normal University", "location": { "postCode": "200241", "settlement": "Shanghai", "country": "P. R. China" } }, "email": "" }, { "first": "Man", "middle": [], "last": "Lan", "suffix": "", "affiliation": { "laboratory": "Shanghai Key Laboratory of Multidimensional Information Processing", "institution": "East China Normal University", "location": { "postCode": "200241", "settlement": "Shanghai", "country": "P. R. China" } }, "email": "mlan@cs.ecnu.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes our approaches to paraphrase recognition in Twitter organized as task 1 in Semantic Evaluation 2015. Lots of approaches have been proposed to address the paraphrasing task on conventional texts (surveyed in (Madnani and Dorr, 2010)). In this work we examined the effectiveness of various linguistic features proposed in traditional paraphrasing task on informal texts, (i.e., Twitter), for example, string based, corpus based, and syntactic features, which served as input of a classification algorithm. Besides, we also proposed novel features based on distributed word representations, which were learned using deep learning paradigms. Results on test dataset show that our proposed features improve the performance by a margin of 1.9% in terms of F1-score and our team ranks third among 10 teams with 38 systems.", "pdf_parse": { "paper_id": "S15-2006", "_pdf_hash": "", "abstract": [ { "text": "This paper describes our approaches to paraphrase recognition in Twitter organized as task 1 in Semantic Evaluation 2015. 
Lots of approaches have been proposed to address the paraphrasing task on conventional texts (surveyed in (Madnani and Dorr, 2010)). In this work we examined the effectiveness of various linguistic features proposed in traditional paraphrasing task on informal texts, (i.e., Twitter), for example, string based, corpus based, and syntactic features, which served as input of a classification algorithm. Besides, we also proposed novel features based on distributed word representations, which were learned using deep learning paradigms. Results on test dataset show that our proposed features improve the performance by a margin of 1.9% in terms of F1-score and our team ranks third among 10 teams with 38 systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Generally, a paraphrase is an alternative surface form in the same language expressing the same semantic content as the original form and it can appear at different levels, e.g., lexical, phrasal, sentential (Madnani and Dorr, 2010) . Identifying paraphrase can improve the performance of several natural language processing (NLP) applications, such as query and pattern expansion (Metzler et al., 2007) , machine translation (Mirkin et al., 2009) , question answering (Duboue and Chu-Carroll, 2006) , see survey (Androutsopoulos and Malakasiotis, 2010) for completion. Most of previous work of paraphrase are on formal text. Recently with the rapidly growth of microblogs and social media services, the computational linguistic community is moving its attention to informal genre of text (Java et al., 2007; Ritter et al., 2010) . 
For example, (Zanzotto et al., 2011) defined the problem of redundancy detection in Twitter and proposed SVM models based on bag-of-words and syntactic content features to detect paraphrases.", "cite_spans": [ { "start": 208, "end": 232, "text": "(Madnani and Dorr, 2010)", "ref_id": "BIBREF6" }, { "start": 381, "end": 403, "text": "(Metzler et al., 2007)", "ref_id": "BIBREF9" }, { "start": 426, "end": 447, "text": "(Mirkin et al., 2009)", "ref_id": "BIBREF12" }, { "start": 469, "end": 499, "text": "(Duboue and Chu-Carroll, 2006)", "ref_id": "BIBREF2" }, { "start": 789, "end": 808, "text": "(Java et al., 2007;", "ref_id": "BIBREF4" }, { "start": 809, "end": 829, "text": "Ritter et al., 2010)", "ref_id": "BIBREF15" }, { "start": 845, "end": 868, "text": "(Zanzotto et al., 2011)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To provide a benchmark for comparing and developing paraphrasing techniques in Twitter, the paraphrase and semantic similarity task in SemEval 2015 (Xu et al., 2015) requires participants to determine whether two tweets express the same meaning and, optionally, to output a similarity score between 0 and 1; the task can thus be regarded as a binary classification problem. The paraphrasing task is closely related to the semantic textual similarity and textual entailment tasks (Marelli et al., 2014), since all these tasks essentially concentrate on modeling the underlying similarity between two sentences. The features commonly used in these tasks can be categorized into the following groups: (1) string based features, which measure the sequence similarities of the original strings, e.g., n-gram overlap and cosine similarity; (2) corpus based features, which measure word or sentence similarities using word distributional vectors learned from large corpora with distributional models such as Latent Semantic Analysis (LSA); 
(3) knowledge based features, which estimate similarities with the aid of external resources such as WordNet; (4) syntactic features, which utilize syntax information to measure similarities; (5) other features, such as named entity similarity.", "cite_spans": [ { "start": 159, "end": 176, "text": "(Xu et al., 2015)", "ref_id": "BIBREF19" }, { "start": 463, "end": 485, "text": "(Marelli et al., 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we built a supervised binary classifier for paraphrase judgment and adopted multiple features used on conventional texts to recognize paraphrases in Twitter, including string based features, corpus based features, etc. In addition, we proposed novel features based on distributed word representations (i.e., word embeddings) learned over a large raw corpus using neural language models. The results on the test dataset demonstrate that linguistic features are effective for the paraphrase-in-Twitter task and that the proposed word embedding features further improve performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. Section 2 describes the features used in our systems. System setups and experimental results on the training and test datasets are presented in Section 3. 
Finally, conclusions and future work are given in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we describe our preprocessing step and the traditional NLP linguistic features, as well as the word embedding features used in our systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Engineering", "sec_num": "2" }, { "text": "We conducted the following text preprocessing operations before extracting features: (1) we recovered elongated words to their normal forms, e.g., \"goooooood\" to \"good\"; (2) about 5,000 slangs or abbreviations collected from the Internet were used to convert informal tokens into their complete forms, e.g., \"1dering\" to \"wondering\" and \"2g2b4g\" to \"too good to be forgotten\"; (3) the WordNet-based lemmatizer implemented in the Natural Language Toolkit (NLTK, http://nltk.org/) was used to lemmatize all words to their nearest base forms in WordNet, for example, was is lemmatized to be; (4) we replaced a word in one sentence with a word from the other sentence if the two words share the same meaning, where WordNet was used to look up synonyms. 
No word sense disambiguation was performed, and all synsets for a particular lemma were considered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "2.1" }, { "text": "We first recorded length information of the given sentence pairs using the following eight measure functions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Based Features", "sec_num": "2.2" }, { "text": "|A|, |B|, |A\u2212B|, |B\u2212A|, |A\u222aB|, |A\u2229B|, (|A|\u2212|B|)/|B|, (|B|\u2212|A|)/|A|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Based Features", "sec_num": "2.2" }, { "text": "where |A| stands for the number of non-repeated words in sentence A, |A \u2212 B| means the number of words found in A but not in B, |A \u222a B| stands for the set size of non-repeated words found in either A or B, and |A \u2229 B| means the set size of shared words found in both A and B. Motivated by the hypothesis that two texts are more similar if they share more strings, we adopted the following five types of measurements: (1) longest common sequence similarity on the original and lemmatized sentences; (2) the Jaccard, Dice, and Overlap coefficients on the original word sequences; (3) Jaccard similarity using n-grams, where n-grams were obtained at three different levels, i.e., the original word level (n=1,2,3), the lemmatized word level (n=1,2,3), and the character level (n=2,3,4); (4) the weighted word overlap feature (\u0160ari\u0107 et al., 2012) that takes the importance of words into consideration, where the Web 1T 5-gram Corpus 2 was used to estimate word importance; (5) sentences were represented as tf*idf vectors based on their lemmatized forms, and these vectors were then used to calculate the cosine, Manhattan, and Euclidean distances and the Pearson, Spearman, and Kendall tau correlation coefficients. 
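As an illustration of the set-overlap measures above (Jaccard, Dice, and Overlap coefficients, plus n-gram Jaccard), a minimal sketch might look like the following; the two sentences are toy examples, not the task data or the authors' actual feature code:

```python
def ngrams(tokens, n):
    """Return the set of n-grams in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    """Jaccard coefficient: |A n B| / |A u B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dice(a, b):
    """Dice coefficient: 2|A n B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

def overlap(a, b):
    """Overlap coefficient: |A n B| / min(|A|, |B|)."""
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

s1 = "the cat sat on the mat".split()
s2 = "the cat lay on the mat".split()
print(round(jaccard(set(s1), set(s2)), 3))        # word level -> 0.667
print(round(dice(set(s1), set(s2)), 3))           # -> 0.8
print(round(overlap(set(s1), set(s2)), 3))        # -> 0.8
print(round(jaccard(ngrams(s1, 2), ngrams(s2, 2)), 3))  # bigram level -> 0.429
```

The same `jaccard` function covers all three n-gram levels in the paper; only the unit (word, lemma, or character n-gram) changes.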
In total, we obtained thirty-one string based features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Based Features", "sec_num": "2.2" }, { "text": "Corpus based features aim to capture semantic similarities using the distributional meanings of words, and Latent Semantic Analysis (LSA) (Landauer and Dumais, 1997) is widely used to estimate the distributional vectors of words. Hence, we adopted the two distributional word-vector sets released by TakeLab (\u0160ari\u0107 et al., 2012), where LSA is performed over the New York Times Annotated Corpus (NYT) 3 and Wikipedia. Two strategies were then used to lift the distributional meanings of words to the sentence level: (i) simply summing up the distributional vectors of the words in the sentence; (ii) weighting the LSA vector of each word w by its information content (\u0160ari\u0107 et al., 2012) and summing the weighted vectors. Finally, we used cosine similarity to measure the similarity of two sentences based on these vectors. Besides, we used the Co-occurrence Retrieval Model (CRM) (Weeds, 2003) as another type of corpus based feature. The CRM is based on a notion of substitutability: the more appropriate it is to substitute word w1 in place of word w2 in a suitable natural language task, the more semantically similar the two words are.", "cite_spans": [ { "start": 283, "end": 311, "text": "TakeLab (\u0160ari\u0107 et al., 2012)", "ref_id": null }, { "start": 608, "end": 628, "text": "(\u0160ari\u0107 et al., 2012)", "ref_id": null }, { "start": 847, "end": 860, "text": "(Weeds, 2003)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Based Features", "sec_num": "2.3" }, { "text": "Moreover, the extraction of the aforementioned features relies on large external corpora. In contrast, (Guo and Diab, 2012) proposed a latent model, weighted textual matrix factorization (WTMF), to capture the contextual meanings of words in sentences based on an internal term-sentence matrix. 
WTMF factorizes the term-sentence matrix X into two matrices such that X_{i,j} \u2248 P^T_{*,i} Q_{*,j}, where P_{*,i} is the latent semantic vector profile of word w_i and Q_{*,j} is the vector profile that represents sentence s_j. A weight matrix W is introduced into the optimization process to model the missing words at the right level of emphasis. We then used the cosine, Manhattan, and Euclidean functions and the Pearson, Spearman, and Kendall tau correlation coefficients to calculate similarities based on these sentence representations. In total, we obtained twelve corpus based features.", "cite_spans": [ { "start": 89, "end": 109, "text": "(Guo and Diab, 2012)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Based Features", "sec_num": "2.3" }, { "text": "We estimated the similarities of sentence pairs at the syntactic level. The Stanford CoreNLP toolkit (Manning and Surdeanu, 2014) was used to obtain POS tag sequences. Afterwards, we applied the eight measure functions described in Section 2.2 over these sequences, which resulted in eight syntactic based features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Features", "sec_num": "2.4" }, { "text": "We built a binary feature to indicate whether the two sentences in a pair have the same polarity (affirmative or negative) by looking up a manually collected negation list with 29 negation words (e.g., scarcely, no, little). Also, we checked whether one sentence entails the other using only the named entity information provided in the dataset. In total, we obtained nineteen other features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Features", "sec_num": "2.5" }, { "text": "Recently, deep learning has achieved great success in the fields of computer vision, automatic speech recognition, and natural language processing. 
As a consequence of its application in NLP, word embeddings have become building blocks in many tasks, e.g., named entity recognition and chunking (Turian et al., 2010), semantic word similarities (Mikolov et al., 2013a), etc. As distributed representations of words, word embeddings are usually learned using neural networks over a large raw corpus and have outperformed LSA in preserving linear regularities among words (Mikolov et al., 2013a). Due to their superior performance, we adopted word embeddings to estimate the similarities of sentence pairs. In our experiments, we used seven different word embeddings with different dimensions: word2vec (Mikolov et al., 2013b), Collobert and Weston embeddings (Collobert and Weston, 2008), and HLBL embeddings (Mnih and Hinton, 2007). The word2vec embeddings are distributed with the word2vec toolkit 4; they are 300-dimensional vectors learned from the Google News corpus, which consists of over 100 billion words. The Collobert and Weston and HLBL embeddings are learned over a part of the RCV1 corpus, which consists of 63 million words, with 25, 50, 100, or 200 dimensions and with 50 or 100 dimensions over 5-gram windows, respectively. To obtain sentence representations, we simply summed up the embedding vectors of the non-stopword tokens in the bag of words (BOW) of each sentence. After that, we used the cosine, Manhattan, and Euclidean functions and the Pearson, Spearman, and Kendall tau correlation coefficients to calculate similarities based on these synthetic sentence representations. 
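The composition step (summing the embedding vectors of non-stopword tokens and comparing the resulting sentence vectors with cosine similarity) can be sketched as follows; the 3-dimensional vectors and stopword list below are illustrative stand-ins, not real trained embeddings:

```python
import numpy as np

# Toy 3-dimensional vectors standing in for word2vec / C&W / HLBL embeddings
# (illustrative values only, not real trained embeddings).
EMB = {
    "dog":   np.array([0.9, 0.1, 0.0]),
    "barks": np.array([0.2, 0.8, 0.1]),
    "puppy": np.array([0.8, 0.2, 0.1]),
    "yelps": np.array([0.1, 0.9, 0.2]),
}
STOPWORDS = {"the", "a", "an"}

def sentence_vector(tokens):
    """Sum the embeddings of in-vocabulary, non-stopword tokens (BOW composition)."""
    vecs = [EMB[t] for t in tokens if t in EMB and t not in STOPWORDS]
    return np.sum(vecs, axis=0)

def cosine(u, v):
    """Cosine similarity between two sentence vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

v1 = sentence_vector("the dog barks".split())
v2 = sentence_vector("a puppy yelps".split())
print(round(cosine(v1, v2), 3))  # high similarity despite no word overlap
```

The point of the example is that the two toy sentences share no surface words, yet their summed embedding vectors are close, which is exactly what the string based features cannot capture.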
In total, we obtained ninety word embedding features.", "cite_spans": [ { "start": 294, "end": 315, "text": "(Turian et al., 2010)", "ref_id": "BIBREF16" }, { "start": 345, "end": 368, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF10" }, { "start": 573, "end": 596, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF10" }, { "start": 803, "end": 826, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF11" }, { "start": 861, "end": 889, "text": "(Collobert and Weston, 2008)", "ref_id": "BIBREF1" }, { "start": 910, "end": 933, "text": "(Mnih and Hinton, 2007)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Word Embedding Features", "sec_num": "2.6" }, { "text": "The organizers provided 13,063 training pairs together with 4,727 development pairs in the development phase and 972 test pairs in the test phase. We removed the debatable instances (i.e., those where two annotators voted yes and the other three voted no) from the dataset, which resulted in 11,530 training pairs and 4,142 development pairs. We built two supervised classification systems over these datasets. One is mlfeats, which uses only the traditional linguistic features (i.e., the features described in Sections 2.2-2.5, 64 features in total), and the other is nnfeats, which combines the traditional linguistic features with the word embedding features (148 features in total). Several classification algorithms were explored on the development dataset, including Support Vector Classification (SVC, linear), Random Forest (RF), and Gradient Boosting (GB) implemented in the scikit-learn toolkit (Pedregosa et al., 2011 ), and a wide range of parameter values was tuned for these algorithms, i.e., the trade-off parameter c in SVC, the number of trees n in RF, and the number of boosting stages n in GB. F1-score was used to evaluate the performance of the systems. Table 1 presents the best four F1 results achieved by different algorithms, together with their parameters, for systems mlfeats and nnfeats on the development dataset. 
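When several tuned classifiers are combined, a simple way to merge their outputs is a per-instance majority vote over the predicted labels. A minimal sketch (with hypothetical label lists, not the actual system predictions) might look like:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists by per-instance majority vote.

    predictions: list of equal-length label lists, one per classifier.
    """
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical binary predictions from three tuned classifiers
# (1 = paraphrase, 0 = non-paraphrase); illustrative values only.
svc_preds = [1, 0, 1, 0]
gb_a_preds = [1, 1, 0, 0]
gb_b_preds = [0, 0, 1, 1]
print(majority_vote([svc_preds, gb_a_preds, gb_b_preds]))  # -> [1, 0, 1, 0]
```

With an odd number of voters every instance gets a strict majority, which is why three classifiers is a convenient choice for binary labels.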
The results show that the two systems yield comparable performance, which suggests that our proposed word embedding features contribute little to paraphrase detection on the development set. We also find that SVC performs slightly better than the GB and RF algorithms. Therefore, we adopted a majority voting scheme based on SVC (c=0.1) and GB (n=140,150) in the test period. Table 2 summarizes the performance and ranks of our systems on the test dataset, along with the baseline systems provided by the organizers and the top three systems. From this table, we observe the following findings. Firstly, nnfeats, which uses word embedding features, outperforms mlfeats, which only uses traditional linguistic features, by 1.9%, which is inconsistent with the findings on the development set. A possible reason is that the test data was collected from a different time period than the training and development data, and the word embedding features may capture these differences to some extent. Secondly, our results are significantly better than the three baseline systems, since our systems incorporate the features used in the baseline systems as well as other effective features. 
Thirdly, the top-ranked system (i.e., ASOBEK svckernel) yields 3.1% and 1.2% improvements over our systems mlfeats and nnfeats, respectively, which indicates that word embedding features and traditional linguistic features are effective in resolving the Twitter paraphrase problem.", "cite_spans": [ { "start": 877, "end": 900, "text": "(Pedregosa et al., 2011", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 1139, "end": 1146, "text": "Table 1", "ref_id": null }, { "start": 1682, "end": 1689, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "System Setups", "sec_num": "3.1" }, { "text": "To explore the influence of different feature types, we conducted feature ablation experiments in which we removed one feature group from the full feature set each time and then executed the same classification procedure. Table 3 shows the results of the feature ablation experiments. From this table, we can see that the most influential feature group for recognizing tweet paraphrases is the corpus based features, and the second most important group is the word embedding features, which is within our expectation since both kinds of features exploit the semantic meanings of words. ", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 223, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "3.2" }, { "text": "In this paper we addressed the paraphrase in Twitter task by building a supervised classification model. Many linguistic features used in the traditional paraphrasing task, together with newly proposed features based on word embeddings, were extracted. 
The results on the test dataset demonstrate that (1) our proposed word embedding features improve performance by 1.9%; (2) the linguistic features used for paraphrase recognition on conventional texts are also useful and effective in the Twitter domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "https://catalog.ldc.upenn.edu/LDC2006T13 3 https://catalog.ldc.upenn.edu/LDC2008T19", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://code.google.com/p/word2vec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research is supported by grants from the Science and Technology Commission of Shanghai Municipality (research grant nos. 14DZ2260800 and 15ZR1410700) and the Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things (ZF1213).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A survey of paraphrasing and textual entailment methods", "authors": [], "year": 2010, "venue": "J. Artif. Int. Res", "volume": "", "issue": "", "pages": "135--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ion Androutsopoulos and Prodromos Malakasiotis. 2010. A survey of paraphrasing and textual entailment methods. J. Artif. Int. 
Res., pages 135-187.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th international conference on Machine learning", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160-167.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Answering the question you wish they had asked: The impact of paraphrasing for question answering", "authors": [ { "first": "Pablo", "middle": [], "last": "Ariel Duboue", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "" } ], "year": 2006, "venue": "NAACL, Companion Volume: Short Papers", "volume": "", "issue": "", "pages": "33--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pablo Ariel Duboue and Jennifer Chu-Carroll. 2006. Answering the question you wish they had asked: The impact of paraphrasing for question answering. In NAACL, Companion Volume: Short Papers, pages 33-36.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Modeling sentences in the latent space", "authors": [ { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2012, "venue": "ACL", "volume": "", "issue": "", "pages": "864--872", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Guo and Mona Diab. 2012. Modeling sentences in the latent space. 
In ACL, pages 864-872.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Why we Twitter: understanding microblogging usage and communities", "authors": [ { "first": "Akshay", "middle": [], "last": "Java", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Finin", "suffix": "" }, { "first": "Belle", "middle": [], "last": "Tseng", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 9th WebKDD and 1st SNA-KDD 2007 workshop on Web mining and social network analysis", "volume": "", "issue": "", "pages": "56--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akshay Java, Xiaodan Song, Tim Finin, and Belle Tseng. 2007. Why we Twitter: understanding microblogging usage and communities. In Proceedings of the 9th WebKDD and 1st SNA-KDD 2007 workshop on Web mining and social network analysis, pages 56-65.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge", "authors": [ { "first": "K", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Susan", "middle": [ "T" ], "last": "Landauer", "suffix": "" }, { "first": "", "middle": [], "last": "Dumais", "suffix": "" } ], "year": 1997, "venue": "Psychological review", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas K Landauer and Susan T Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. 
Psychological review, page 211.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Generating phrasal and sentential paraphrases: A survey of data-driven methods", "authors": [ { "first": "Nitin", "middle": [], "last": "Madnani", "suffix": "" }, { "first": "Bonnie", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 2010, "venue": "Computational Linguistics", "volume": "36", "issue": "3", "pages": "341--387", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitin Madnani and Bonnie J Dorr. 2010. Generating phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics, 36(3):341-387.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "52nd ACL : System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning and Mihai et al. Surdeanu. 2014. The Stanford CoreNLP natural language processing toolkit. 
In 52nd ACL: System Demonstrations.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "SemEval", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In SemEval, pages 1-8.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Similarity measures for short segments of text", "authors": [ { "first": "Donald", "middle": [], "last": "Metzler", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Dumais", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Meek", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Donald Metzler, Susan Dumais, and Christopher Meek. 2007. 
Similarity measures for short segments of text.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. 
In Advances in Neural Information Processing Systems, pages 3111-3119.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Source-language entailment modeling for translating unknown terms", "authors": [ { "first": "Lucia", "middle": [], "last": "Shachar Mirkin", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Cancedda", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Idan", "middle": [], "last": "Dymetman", "suffix": "" }, { "first": "", "middle": [], "last": "Szpektor", "suffix": "" } ], "year": 2009, "venue": "ACL", "volume": "", "issue": "", "pages": "791--799", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shachar Mirkin, Lucia Specia, Nicola Cancedda, Ido Dagan, Marc Dymetman, and Idan Szpektor. 2009. Source-language entailment modeling for translating unknown terms. In ACL, pages 791-799.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Three new graphical models for statistical language modelling", "authors": [ { "first": "Andriy", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 24th international conference on Machine learning", "volume": "", "issue": "", "pages": "641--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. 
In Proceedings of the 24th international conference on Machine learning, pages 641-648.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "Fabian", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "Ga\u00ebl", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Gramfort", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, et al. 2011. Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised modeling of Twitter conversations", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "172--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. pages 172-180.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Word representations: a simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "the 48th ACL", "volume": "", "issue": "", "pages": "384--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. 
In the 48th ACL, pages 384-394.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "TakeLab: Systems for measuring semantic text similarity", "authors": [ { "first": "Frane", "middle": [], "last": "\u0160ari\u0107", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Mladen", "middle": [], "last": "Karan", "suffix": "" }, { "first": "Jan", "middle": [], "last": "\u0160najder", "suffix": "" }, { "first": "Bojana Dalbelo", "middle": [], "last": "Ba\u0161i\u0107", "suffix": "" } ], "year": 2012, "venue": "*SEM 2012 and", "volume": "", "issue": "", "pages": "441--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frane \u0160ari\u0107, Goran Glava\u0161, Mladen Karan, Jan \u0160najder, and Bojana Dalbelo Ba\u0161i\u0107. 2012. TakeLab: Systems for measuring semantic text similarity. In *SEM 2012 and (SemEval 2012), pages 441-448, Montr\u00e9al, Canada.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Measures and applications of lexical distributional similarity", "authors": [ { "first": "Julie", "middle": [ "Elizabeth" ], "last": "Weeds", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julie Elizabeth Weeds. 2003. Measures and applications of lexical distributional similarity. Ph.D. thesis, University of Sussex.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "SemEval-2015 Task 1: Paraphrase and semantic similarity in Twitter (PIT)", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "William", "middle": [ "B" ], "last": "Dolan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Chris Callison-Burch, and William B. Dolan. 2015.
SemEval-2015 Task 1: Paraphrase and semantic similarity in Twitter (PIT). In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval), Denver, CO.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Linguistic redundancy in Twitter", "authors": [ { "first": "Fabio", "middle": [ "Massimo" ], "last": "Zanzotto", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Kostas", "middle": [], "last": "Tsioutsiouliklis", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "659--669", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabio Massimo Zanzotto, Marco Pennacchiotti, and Kostas Tsioutsiouliklis. 2011. Linguistic redundancy in Twitter. In EMNLP, pages 659-669.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "num": null, "content": "
          |        mlfeats         |        nnfeats
Algorithm | Precision Recall  F1   | Precision Recall  F1
SVC(0.1)  | 0.756     0.942   0.839 | 0.756     0.942   0.839
GB(140)   | 0.756     0.939   0.838 | 0.754     0.940   0.837
GB(150)   | 0.755     0.939   0.837 | 0.753     0.939   0.836
RF(45)    | 0.754     0.937   0.835 | 0.749     0.936   0.832
", "type_str": "table", "text": "Top results of different classification algorithms in systems mlfeats and nnfeats on development dataset together with parameter values in brackets." }, "TABREF1": { "html": null, "num": null, "content": "
System            | F1-Rank | Precision | Recall | F1
ECNU nnfeats      |    4    |   0.767   | 0.583  | 0.662
ECNU mlfeats      |   10    |   0.754   | 0.560  | 0.643
BASELINE logistic |   21    |   0.679   | 0.520  | 0.589
BASELINE WTMF     |   28    |   0.450   | 0.663  | 0.536
BASELINE random   |   38    |   0.192   | 0.434  | 0.266
ASOBEK svckernel  |    1    |   0.680   | 0.669  | 0.674
ASOBEK linearsvm  |    2    |   0.682   | 0.663  | 0.672
MITRE ikr         |    3    |   0.569   | 0.806  | 0.667
", "type_str": "table", "text": "Performance and rankings of systems mlfeats, nnfeats and baseline systems on test dataset officially released by the organizers, as well as top ranking systems." }, "TABREF3": { "html": null, "num": null, "content": "
", "type_str": "table", "text": "The results of feature ablation experiments." } } } }
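Table 1 above (TABREF0) compares three classifier families from scikit-learn, which the paper cites, with the tuned parameter in brackets. The sketch below is illustrative only, not the authors' code: it assumes SVC(0.1) means the SVM regularization constant C=0.1 and that GB(140)/RF(45) give the number of boosting stages/trees, and it uses synthetic data in place of the paper's string-based, corpus-based, syntactic, and word-embedding features.

```python
# Hypothetical reconstruction of the Table 1 comparison with scikit-learn.
# X, y are synthetic stand-ins for the paper's feature vectors and labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_dev, y_train, y_dev = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Bracketed values mirror Table 1: SVC(0.1) -> C=0.1,
# GB(140) -> 140 boosting stages, RF(45) -> 45 trees (assumed meanings).
classifiers = {
    "SVC(0.1)": SVC(C=0.1, kernel="linear"),
    "GB(140)": GradientBoostingClassifier(n_estimators=140, random_state=0),
    "RF(45)": RandomForestClassifier(n_estimators=45, random_state=0),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    p, r, f1, _ = precision_recall_fscore_support(
        y_dev, clf.predict(X_dev), average="binary")
    print(f"{name}: P={p:.3f} R={r:.3f} F1={f1:.3f}")
```

Each row reports the same precision/recall/F1 triple as the table; on the real task the authors trained one such model per feature set (mlfeats vs. nnfeats) and selected the best configuration on the development data.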