{ "paper_id": "S15-2001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:37:15.733702Z" }, "title": "SemEval-2015 Task 1: Paraphrase and Semantic Similarity in Twitter (PIT)", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania Philadelphia", "location": { "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania Philadelphia", "location": { "region": "PA", "country": "USA" } }, "email": "" }, { "first": "William", "middle": [ "B" ], "last": "Dolan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research Redmond", "location": { "region": "WA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this shared task, we present evaluations on two related tasks Paraphrase Identification (PI) and Semantic Textual Similarity (SS) systems for the Twitter data. Given a pair of sentences, participants are asked to produce a binary yes/no judgement or a graded score to measure their semantic equivalence. The task features a newly constructed Twitter Paraphrase Corpus that contains 18,762 sentence pairs. A total of 19 teams participated, submitting 36 runs to the PI task and 26 runs to the SS task. The evaluation shows encouraging results and open challenges for future research. The best systems scored a F1-measure of 0.674 for the PI task and a Pearson correlation of 0.619 for the SS task respectively, comparing to a strong baseline using logistic regression model of 0.589 F1 and 0.511 Pearson; while the best SS systems can often reach >0.80 Pearson on well-formed text. This shared task also provides insights into the relation between the PI and SS tasks and suggests the importance to bringing these two research areas together. We make all the data, baseline systems and evaluation scripts publicly available. 1", "pdf_parse": { "paper_id": "S15-2001", "_pdf_hash": "", "abstract": [ { "text": "In this shared task, we present evaluations on two related tasks Paraphrase Identification (PI) and Semantic Textual Similarity (SS) systems for the Twitter data. Given a pair of sentences, participants are asked to produce a binary yes/no judgement or a graded score to measure their semantic equivalence. The task features a newly constructed Twitter Paraphrase Corpus that contains 18,762 sentence pairs. A total of 19 teams participated, submitting 36 runs to the PI task and 26 runs to the SS task. The evaluation shows encouraging results and open challenges for future research. The best systems scored a F1-measure of 0.674 for the PI task and a Pearson correlation of 0.619 for the SS task respectively, comparing to a strong baseline using logistic regression model of 0.589 F1 and 0.511 Pearson; while the best SS systems can often reach >0.80 Pearson on well-formed text. This shared task also provides insights into the relation between the PI and SS tasks and suggests the importance to bringing these two research areas together. We make all the data, baseline systems and evaluation scripts publicly available. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The ability to identify paraphrases, i.e. 
alternative expressions of the same (or similar) meaning, and the degree of their semantic similarity has proven useful for a wide variety of natural language processing applications (Madnani and Dorr, 2010) . It is particularly useful for overcoming the challenge of high redundancy in Twitter and the sparsity inherent in its short texts (e.g. oscar nom'd doc \u2194 Oscar-nominated documentary; some1 shot a cop \u2194 someone shot a police). Emerging research shows that paraphrasing techniques applied to Twitter data can improve tasks like first story detection (Petrovi\u0107 et al., 2012) , information retrieval (Zanzotto et al., 2011) and text normalization (Xu et al., 2013; Wang et al., 2013) .", "cite_spans": [ { "start": 225, "end": 249, "text": "(Madnani and Dorr, 2010)", "ref_id": "BIBREF27" }, { "start": 593, "end": 616, "text": "(Petrovi\u0107 et al., 2012)", "ref_id": "BIBREF33" }, { "start": 641, "end": 664, "text": "(Zanzotto et al., 2011)", "ref_id": "BIBREF48" }, { "start": 688, "end": 705, "text": "(Xu et al., 2013;", "ref_id": "BIBREF47" }, { "start": 706, "end": 724, "text": "Wang et al., 2013)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previously, many researchers have investigated ways of automatically detecting paraphrases in more formal texts, like newswire text. The ACL Wiki 2 gives an excellent summary of state-of-the-art paraphrase identification techniques. These can be categorized into supervised methods (Qiu et al., 2006; Wan et al., 2006; Das and Smith, 2009; Socher et al., 2011; Blacoe and Lapata, 2012; Madnani et al., 2012; Ji and Eisenstein, 2013) and unsupervised methods (Mihalcea et al., 2006; Rus et al., 2008; Fernando and Stevenson, 2008; Islam and Inkpen, 2007; Hassan and Mihalcea, 2011) . A few recent studies have highlighted the potential and importance of developing paraphrase identification (Zanzotto et al., 2011; Xu et al., 2013) and semantic similarity techniques (Guo and Diab, 2012) specifically for tweets. They also indicated that the very informal language used in social media, especially its high degree of lexical variation, has posed serious challenges to both tasks. The SemEval-2015 shared task on Paraphrase and Semantic Similarity In Twitter (PIT) uses a training and development set of 17,790 sentence pairs and a test set of 972 sentence pairs with paraphrase annotations (see examples in Table 1 ), the same as the Twitter Paraphrase Corpus we developed earlier in (Xu, 2014) and (Xu et al., 2014) . This PIT-2015 paraphrase dataset is distinct from the data used in previous studies in many aspects: (i) it contains sentences that are opinionated and colloquial, representing realistic informal language usage; (ii) it contains paraphrases that are lexically diverse; and (iii) it contains sentences that are lexically similar but semantically dissimilar. It raises many interesting research questions and could lead to a better understanding of the language we use every day and how semantics can be captured in such language.
We believe that such a common testbed will facilitate side-by-side comparison of different approaches, lead to a better understanding of how semantics are conveyed in natural language, and help advance other NLP techniques for noisy user-generated text in the long run.", "cite_spans": [ { "start": 285, "end": 303, "text": "(Qiu et al., 2006;", "ref_id": "BIBREF34" }, { "start": 304, "end": 321, "text": "Wan et al., 2006;", "ref_id": "BIBREF43" }, { "start": 322, "end": 342, "text": "Das and Smith, 2009;", "ref_id": "BIBREF9" }, { "start": 343, "end": 363, "text": "Socher et al., 2011;", "ref_id": "BIBREF39" }, { "start": 364, "end": 388, "text": "Blacoe and Lapata, 2012;", "ref_id": "BIBREF7" }, { "start": 389, "end": 410, "text": "Madnani et al., 2012;", "ref_id": "BIBREF28" }, { "start": 411, "end": 435, "text": "Ji and Eisenstein, 2013)", "ref_id": "BIBREF24" }, { "start": 461, "end": 484, "text": "(Mihalcea et al., 2006;", "ref_id": "BIBREF29" }, { "start": 485, "end": 502, "text": "Rus et al., 2008;", "ref_id": "BIBREF37" }, { "start": 503, "end": 532, "text": "Fernando and Stevenson, 2008;", "ref_id": "BIBREF14" }, { "start": 533, "end": 556, "text": "Islam and Inkpen, 2007;", "ref_id": "BIBREF23" }, { "start": 557, "end": 583, "text": "Hassan and Mihalcea, 2011)", "ref_id": "BIBREF21" }, { "start": 693, "end": 716, "text": "(Zanzotto et al., 2011;", "ref_id": "BIBREF48" }, { "start": 717, "end": 733, "text": "Xu et al., 2013)", "ref_id": "BIBREF47" }, { "start": 769, "end": 789, "text": "(Guo and Diab, 2012)", "ref_id": "BIBREF18" }, { "start": 1293, "end": 1303, "text": "(Xu, 2014)", "ref_id": "BIBREF45" }, { "start": 1308, "end": 1325, "text": "(Xu et al., 2014)", "ref_id": "BIBREF46" } ], "ref_spans": [ { "start": 1209, "end": 1216, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task has two sentence-level sub-tasks: a paraphrase identification task and an optional semantic textual similarity task. The two sub-tasks share the same data but differ in annotation and evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description and Evaluation Metrics", "sec_num": "2" }, { "text": "Given two sentences, determine whether they express the same or very similar meaning. Following the literature on paraphrase identification, we evaluate system performance by the F1 score (harmonic mean of precision and recall) against human judgements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A -Paraphrase Identification (PI)", "sec_num": null }, { "text": "Task B -Semantic Textual Similarity (SS) Given two sentences, determine a numerical score between 0 (no relation) and 1 (semantic equivalence) to indicate their semantic similarity. Following the literature, the system outputs are compared by Pearson correlation with human scores. We also compute the maximum F1 score over the precision-recall curve as an additional data point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A -Paraphrase Identification (PI)", "sec_num": null },
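To make the official metrics concrete, the following is a minimal sketch (ours, not the released evaluation script) of the three quantities defined above, assuming scikit-learn and SciPy; binarizing the gold scores at 0.5 for the max-F1 computation is an illustrative assumption:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import f1_score, precision_recall_curve

def evaluate_pi(gold_labels, pred_labels):
    # Task A: F1 (harmonic mean of precision and recall) against binary human judgements.
    return f1_score(gold_labels, pred_labels)

def evaluate_ss(gold_scores, pred_scores):
    # Task B: Pearson correlation between graded system scores and human scores.
    r, _ = pearsonr(gold_scores, pred_scores)
    # Additional data point: maximum F1 over the precision-recall curve,
    # obtained by sweeping a decision threshold over the graded system scores.
    gold_binary = np.asarray(gold_scores) >= 0.5  # assumed binarization, illustration only
    precision, recall, _ = precision_recall_curve(gold_binary, pred_scores)
    with np.errstate(divide="ignore", invalid="ignore"):
        f1 = 2 * precision * recall / (precision + recall)
    return r, np.nanmax(f1)
```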
{ "text": "In this shared task, we use the Twitter Paraphrase Corpus that we first presented in (Xu, 2014) and (Xu et al., 2014) . It differs from the widely used MSR Paraphrase Corpus, which was constructed from newswire text by Dolan et al. (2004) . As noted in (Das and Smith, 2009) , the lack of natural non-paraphrases in the MSR corpus creates a bias towards certain models.", "cite_spans": [ { "start": 85, "end": 95, "text": "(Xu, 2014)", "ref_id": "BIBREF45" }, { "start": 100, "end": 117, "text": "(Xu et al., 2014)", "ref_id": "BIBREF46" }, { "start": 120, "end": 139, "text": "Dolan et al. (2004)", "ref_id": "BIBREF11" }, { "start": 154, "end": 175, "text": "(Das and Smith, 2009)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": "3" }, { "text": "In this section, we describe our data collection and annotation methodology. Since Twitter users are free to talk about anything regarding any topic, a random pair of sentences about the same topic has a low chance of expressing the same meaning (empirically, less than 8%). This causes two problems: a) it is expensive to obtain paraphrases via manual annotation; b) non-expert annotators tend to loosen the criteria and are more likely to make false positive errors. To address these challenges, we design a simple annotation task and introduce two selection mechanisms to select sentences that are more likely to be paraphrases, while preserving diversity and representativeness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4" }, { "text": "Figure 1: A heat-map showing the overlap between expert and crowdsourcing annotation. The intensity along the diagonal indicates good reliability of crowdsourcing workers for this particular task; the shift above the diagonal reflects the difference between the two annotation schemas. For crowdsourcing (turk), the numbers indicate how many annotators out of 5 picked the sentence pair as paraphrases; 0,1 are considered non-paraphrases; 3,4,5 are paraphrases. For expert annotation, all 0,1,2 are non-paraphrases; 4,5 are paraphrases. Medium-scored cases (2 for crowdsourcing; 3 for expert annotation) are discarded in the system evaluation of the PI sub-task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4" }, { "text": "We crawl Twitter's trending topics and their associated tweets using public APIs. 5 According to Twitter, trends are determined by an algorithm which identifies topics that are immediately popular, rather than those that have been popular for longer periods of time or which trend on a daily basis. We tokenize, remove emoticons 6 and split tweets into sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Raw Data from Twitter", "sec_num": "4.1" }, { "text": "We show the annotator an original sentence, then ask them to pick sentences with the same meaning from 10 candidate sentences. The original and candidate sentences are randomly sampled from the same topic. For each such 1 vs. 10 question, we obtain binary judgements from 5 different annotators, paying each annotator $0.02 per question. On average, each question takes one annotator about 30 \u223c 45 seconds to answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Design on Mechanical Turk", "sec_num": "4.2" }, { "text": "We remove problematic annotators by checking their Cohen's Kappa agreement (Artstein and Poesio, 2008 ) with the other annotators. We also compute inter-annotator agreement with an expert annotator on the test dataset of 972 sentence pairs. In the expert annotation, we adopt a 5-point Likert scale to measure the degree of semantic similarity between sentences, which is defined by Agirre et al.
(2012) as follows:", "cite_spans": [ { "start": 75, "end": 101, "text": "(Artstein and Poesio, 2008", "ref_id": "BIBREF1" }, { "start": 379, "end": 399, "text": "Agirre et al. (2012)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation Quality", "sec_num": "4.3" }, { "text": "5: Completely equivalent, as they mean the same thing; 4: Mostly equivalent, but some unimportant details differ; 3: Roughly equivalent, but some important information differs/is missing; 2: Not equivalent, but share some details; 1: Not equivalent, but are on the same topic; 0: On different topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Quality", "sec_num": "4.3" }, { "text": "Although the two scales of expert and crowdsourcing annotation are defined differently, their Pearson correlation coefficient reaches 0.735 (two-tailed significance 0.001). Figure 1 shows a heat-map representing the detailed overlap between the two annotations. It suggests that the graded similarity annotation task could be reduced to a binary choice in a crowdsourcing setup. As for the binary paraphrase judgements, the integrated judgement of five crowdsourcing workers achieves an F1-score of 0.823, precision of 0.752 and recall of 0.908 against the expert annotations.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 180, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Annotation Quality", "sec_num": "4.3" }, { "text": "We filter the sentences within each topic to select more probable paraphrases for annotation. Our method is inspired by a typical problem in extractive summarization: salient sentences are likely redundant (paraphrases) and need to be removed from the output summaries. We employ the scoring method used in SumBasic (Nenkova and Vanderwende, 2005; Vanderwende et al., 2007) , a simple but powerful summarization system, to find salient sentences. For each topic, we compute the probability of each word P(w_i) by simply dividing its frequency by the total number of words in all sentences. Each sentence s is then scored as the average of the probabilities of the words in it, i.e.", "cite_spans": [ { "start": 323, "end": 354, "text": "(Nenkova and Vanderwende, 2005;", "ref_id": "BIBREF30" }, { "start": 355, "end": 380, "text": "Vanderwende et al., 2007)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Summarization Inspired Sentence Filtering", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Salience(s) = \\frac{\\sum_{w_i \\in s} P(w_i)}{|\\{w_i | w_i \\in s\\}|}", "eq_num": "(1)" } ], "section": "Salience(s)", "sec_num": null }, { "text": "We then rank the sentences, picking the original sentence randomly from the top 10% most salient sentences and the candidate sentences from the top 50%, to present to the annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Salience(s)", "sec_num": null }, { "text": "In a trial experiment on 20 topics, the filtering technique doubles the yield of paraphrases, from 152 to 329 out of 2000 sentence pairs, over na\u00efve random sampling (Figure 2 and Figure 3 ). We also use PINC (Chen and Dolan, 2011) to measure the quality of the paraphrases collected (Figure 4 ). PINC was designed to measure n-gram dissimilarity between two sentences, and in essence it is the inverse of BLEU. In general, the cases with high PINC scores include more complex and interesting rephrasings.", "cite_spans": [ { "start": 205, "end": 227, "text": "(Chen and Dolan, 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 162, "end": 171, "text": "(Figure 2", "ref_id": "FIGREF1" }, { "start": 176, "end": 184, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 276, "end": 285, "text": "(Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Salience(s)", "sec_num": null },
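Both corpus-construction metrics are easy to restate in code. The following is a minimal sketch (our reimplementation from the definitions above; tokenization is deliberately simplified) of the salience score of Equation (1) and of PINC:

```python
from collections import Counter

def salience_scores(sentences):
    # P(w): word frequency divided by the total number of words in the topic.
    tokenized = [s.lower().split() for s in sentences]
    counts = Counter(w for toks in tokenized for w in toks)
    total = sum(counts.values())
    # Salience(s): average word probability over the distinct words of s (Eq. 1).
    return [sum(counts[w] / total for w in set(toks)) / max(len(set(toks)), 1)
            for toks in tokenized]

def pinc(source, candidate, max_n=4):
    # PINC: average over n-gram orders of (1 - n-gram precision against the source),
    # in essence the inverse of BLEU; higher means more significant rewording.
    src, cand = source.lower().split(), candidate.lower().split()
    scores = []
    for n in range(1, max_n + 1):
        cand_ngrams = {tuple(cand[i:i + n]) for i in range(len(cand) - n + 1)}
        src_ngrams = {tuple(src[i:i + n]) for i in range(len(src) - n + 1)}
        if cand_ngrams:
            scores.append(1.0 - len(cand_ngrams & src_ngrams) / len(cand_ngrams))
    return sum(scores) / len(scores) if scores else 0.0
```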
{ "text": "Another approach to increasing the paraphrase yield is to choose more appropriate topics. This is particularly important because the number of paraphrases varies greatly from topic to topic, and with it the chance of encountering paraphrases during annotation (Figure 2) . We treat this topic selection problem as a variation of the Multi-Armed Bandit (MAB) problem (Robbins, 1985) and adapt a greedy algorithm, the bounded \u03b5-first algorithm of Tran-Thanh et al.", "cite_spans": [ { "start": 355, "end": 370, "text": "(Robbins, 1985)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 249, "end": 259, "text": "(Figure 2)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Topic Selection using Multi-Armed Bandits (MAB) Algorithm", "sec_num": "4.5" }, { "text": "(2012) to accelerate our corpus construction. Our strategy consists of two phases. In the first, exploration phase, we dedicate a fraction \u03b5 of the total budget B to explore randomly chosen arms of each slot machine (trending topic on Twitter), each m times. In the second, exploitation phase, we sort all topics according to their estimated proportion of paraphrases, and sequentially annotate the (1\u2212\u03b5)B/(l\u2212m) arms that have the highest estimated reward, up to a maximum of l = 10 annotations for any topic to ensure data diversity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Selection using Multi-Armed Bandits (MAB) Algorithm", "sec_num": "4.5" }, { "text": "We tune the parameter m to be 1 and \u03b5 to be between 0.35 \u223c 0.55 through simulation experiments, by artificially duplicating a small amount of real annotation data. We then apply this MAB algorithm in the real world, exploring 500 random topics and then exploiting 100 of them. The yield of paraphrases rises to 688 out of 2000 sentence pairs by using MAB and sentence filtering, a 4-fold increase compared to using random selection alone (Figure 3) .", "cite_spans": [], "ref_spans": [ { "start": 436, "end": 446, "text": "(Figure 3)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Topic Selection using Multi-Armed Bandits (MAB) Algorithm", "sec_num": "4.5" },
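The two-phase strategy can be simulated in a few lines. The following is a minimal sketch under stated assumptions: the per-topic paraphrase rates and the annotate() simulator are hypothetical, and only the \u03b5/m/l budget split follows the description above:

```python
import random

def bounded_epsilon_first(topic_rates, budget, epsilon=0.45, m=1, l=10):
    # topic_rates: hypothetical true paraphrase rate per trending topic (one "arm" each).
    # One annotation round asks a 1-vs-10 question and returns the number of paraphrases found.
    def annotate(t):
        return sum(random.random() < topic_rates[t] for _ in range(10))
    total_yield, spent = 0, 0
    # Exploration phase: spend epsilon * budget, trying randomly chosen topics m times each
    # (assumes there are enough topics to sample from).
    explored = random.sample(range(len(topic_rates)), int(epsilon * budget) // m)
    estimates = {}
    for t in explored:
        estimates[t] = sum(annotate(t) for _ in range(m))
        spent += m
        total_yield += estimates[t]
    # Exploitation phase: annotate the topics with the highest estimated reward,
    # up to l = 10 annotation rounds per topic (m already used) to ensure diversity.
    for t in sorted(estimates, key=estimates.get, reverse=True):
        for _ in range(l - m):
            if spent >= budget:
                return total_yield
            total_yield += annotate(t)
            spent += 1
    return total_yield
```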
{ "text": "We provide three baselines: a random baseline, a strong supervised baseline and a state-of-the-art unsupervised system:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5" }, { "text": "This baseline provides a random real number between [0, 1] for each test sentence pair as the semantic similarity score, and uses 0.5 as the cutoff for the binary paraphrase identification output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random:", "sec_num": null }, { "text": "This is a supervised logistic regression (LR) baseline used by Das and Smith (2009) . It uses simple n-gram overlap features (also in stemmed form) but shows very competitive performance on the MSR news paraphrase corpus. It uses 0.5 as the cutoff to create binary outputs for the paraphrase identification task.", "cite_spans": [ { "start": 63, "end": 83, "text": "Das and Smith (2009)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Logistic Regression:", "sec_num": null },
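A minimal sketch of this style of n-gram-overlap logistic regression baseline (a simplified reimplementation, assuming scikit-learn; the released baseline's exact feature set, e.g. the stemmed n-grams, is omitted here):

```python
from sklearn.linear_model import LogisticRegression

def overlap_features(s1, s2, max_n=3):
    # Precision, recall and F1 of n-gram overlap between the two sentences, for n = 1..3.
    feats = []
    t1, t2 = s1.lower().split(), s2.lower().split()
    for n in range(1, max_n + 1):
        g1 = {tuple(t1[i:i + n]) for i in range(len(t1) - n + 1)}
        g2 = {tuple(t2[i:i + n]) for i in range(len(t2) - n + 1)}
        inter = len(g1 & g2)
        p = inter / len(g2) if g2 else 0.0
        r = inter / len(g1) if g1 else 0.0
        feats += [p, r, 2 * p * r / (p + r) if p + r else 0.0]
    return feats

def train_lr_baseline(train_pairs, train_labels):
    # Fit on (sentence1, sentence2) pairs; predict a paraphrase when P(positive) >= 0.5.
    X = [overlap_features(a, b) for a, b in train_pairs]
    model = LogisticRegression().fit(X, train_labels)
    return lambda a, b: model.predict_proba([overlap_features(a, b)])[0, 1] >= 0.5
```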
Pearson\" show a strong correlation. It implies that (i) high-performance PI systems can be developed focusing on the binary classification problem without focusing on the degree of similarity; (ii) it is crucial to select the threshold to balance precision and recall for the PI binary classification problem; (iii) it is important for SS system to handle the debatable cases proporiately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.2" }, { "text": "There are in total 19 teams participated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "AJ: This team utilizes TERp and BLEU -automatic evaluation metrics for Machine Translation. The system uses a logistic regression model and performs threshold selection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "AMRITACEN: This team uses Recursive Auto Encoders (RAEs). The matrix generated for the given input sentences is of variable size, then converted to equal sized matrix using repeat matrix concept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "ASOBEK (Eyecioglu and Keller, 2015) : This team uses SVM classifier with simple lexical word overlap and character n-grams features.", "cite_spans": [ { "start": 7, "end": 35, "text": "(Eyecioglu and Keller, 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "CDTDS (Karampatsis, 2015) : This team uses support vector regression trained only on the training set using the numbers of positive votes out of the 5 crowdsourcing annotations.", "cite_spans": [ { "start": 6, "end": 25, "text": "(Karampatsis, 2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "Columbia: This system maps each original sentence to a low dimensional vector as Orthogonal Matrix Factorization (Guo et al., 2014) , and then computes similarity score based on the low dimensional vectors.", "cite_spans": [ { "start": 113, "end": 131, "text": "(Guo et al., 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "Depth: This team uses neural network that learns representation of sentences, then compute similarity scores based on hidden vector representations between two sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "EBIQUITY (Satyapanich et al., 2015) : This team trains supervised SVM and logistic re- Table 3 : Evaluation results. The first column presents the rank of each team in the two tasks based on each team's best system. The superscripts are the ranks of systems, ordered by F1 for Paraphrase Identification (PI) task and Pearson for Semantic Similarity (SS) task. indicates unsupervised or semi-supervised system. In total, 19 teams participated in the PI task, of which 14 teams also participated in the SS task. Note that although the two sub-tasks share the same test set of 972 sentence pairs, the PI task ignores 134 debatable cases (received a medium-score from expert annotator) and uses only 838 pairs (663 paraphrases and 175 non-paraphrases) in evaluation, while SS task uses all 972 pairs. 
{ "text": "In total, 19 teams participated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "AJ: This team utilizes TERp and BLEU, two automatic evaluation metrics for machine translation. The system uses a logistic regression model and performs threshold selection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "AMRITACEN: This team uses Recursive Auto-Encoders (RAEs). The matrix generated for the given input sentences is of variable size, and is converted to an equal-sized matrix using a repeat-matrix concept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "ASOBEK (Eyecioglu and Keller, 2015) : This team uses an SVM classifier with simple lexical word overlap and character n-gram features.", "cite_spans": [ { "start": 7, "end": 35, "text": "(Eyecioglu and Keller, 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "CDTDS (Karampatsis, 2015) : This team uses support vector regression trained only on the training set, using the number of positive votes out of the 5 crowdsourcing annotations.", "cite_spans": [ { "start": 6, "end": 25, "text": "(Karampatsis, 2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "Columbia: This system maps each original sentence to a low-dimensional vector via Orthogonal Matrix Factorization (Guo et al., 2014) , and then computes a similarity score based on the low-dimensional vectors.", "cite_spans": [ { "start": 113, "end": 131, "text": "(Guo et al., 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "Depth: This team uses a neural network that learns representations of sentences, then computes similarity scores based on the hidden vector representations of the two sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "EBIQUITY (Satyapanich et al., 2015) : This team trains supervised SVM and logistic regression models using features of semantic similarities between sentence pairs.", "cite_spans": [ { "start": 9, "end": 35, "text": "(Satyapanich et al., 2015)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "Table 3 : Evaluation results. The first column presents the rank of each team in the two tasks based on each team's best system. The superscripts are the ranks of individual systems, ordered by F1 for the Paraphrase Identification (PI) task and by Pearson for the Semantic Similarity (SS) task. A superscript marker indicates an unsupervised or semi-supervised system. In total, 19 teams participated in the PI task, of which 14 teams also participated in the SS task. Note that although the two sub-tasks share the same test set of 972 sentence pairs, the PI task ignores 134 debatable cases (which received a medium score from the expert annotator) and uses only 838 pairs (663 paraphrases and 175 non-paraphrases) in evaluation, while the SS task uses all 972 pairs. As a result, the F1-score in the PI task can be higher than the maximum F1-score in the SS task. Also note that the F1-scores of the baselines in the PI task are higher than those reported in Table 2 of (Xu et al., 2014) , because the latter reported maximum F1-scores on the PI task, ignoring the debatable cases.", "cite_spans": [ { "start": 1002, "end": 1019, "text": "(Xu et al., 2014)", "ref_id": "BIBREF46" } ], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 3", "ref_id": null }, { "start": 991, "end": 998, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "ECNU (Zhao and Lan, 2015) : This team adopts typical machine learning classifiers and uses a variety of features, such as surface text, semantic level, textual entailment, and word distributional representations learned by deep learning methods. FBK-HLT (Ngoc Phuoc An Vo and Popescu, 2015) : This team uses a supervised learning model with different features for the 2 runs, such as n-gram overlap, word alignment and edit distance.", "cite_spans": [ { "start": 5, "end": 25, "text": "(Zhao and Lan, 2015)", "ref_id": "BIBREF50" }, { "start": 234, "end": 255, "text": "Vo and Popescu, 2015)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "Hassy: This team uses a bag-of-embeddings approach via supervised learning. Two sentences are first embedded into a vector space, and then the system computes the dot-product of the two sentence embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "HLTC-HKUST (Bertero and Fung, 2015) : This team uses supervised classification with a standard two-layer neural network classifier. The features used include translation metrics as well as lexical, syntactic and semantic similarity scores, the latter with an emphasis on comparing aligned semantic roles.", "cite_spans": [ { "start": 11, "end": 35, "text": "(Bertero and Fung, 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "MathLingBp: This team implements the align-and-penalize architecture described by Han et al. (2013) with slight modifications and makes use of several word similarity metrics. One metric relies on a mapping of words to vectors built from the Rovereto Twitter N-Gram corpus, another on a synonym list built from Wiktionary's translations, while a third approach derives word similarity from concept graphs built using the 4lang lexicon and the Longman Dictionary of Contemporary English (Kornai et al., 2015) .", "cite_spans": [ { "start": 81, "end": 98, "text": "Han et al. (2013)", "ref_id": "BIBREF20" }, { "start": 485, "end": 506, "text": "(Kornai et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "MITRE (Zarrella et al., 2015) : A recurrent neural network models semantic similarity between sentences using the sequence of symmetric word alignments that maximize cosine similarity between word embeddings. The system also includes features from local similarity of characters, random projection, matching word sequences, pooling of word embeddings, and alignment quality metrics. The resulting ensemble uses both semantic and string matching at many levels of granularity.", "cite_spans": [ { "start": 6, "end": 29, "text": "(Zarrella et al., 2015)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" },
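To make the alignment step of the MITRE entry concrete, here is a minimal sketch of symmetric word alignment by cosine similarity of word embeddings (our reconstruction of the general idea, not MITRE's actual code; the embed lookup table is assumed given and OOV handling is omitted):

```python
import numpy as np

def symmetric_alignments(tokens1, tokens2, embed):
    # embed: dict mapping a token to its embedding vector (assumed available).
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    sim = np.array([[cos(embed[a], embed[b]) for b in tokens2] for a in tokens1])
    # Keep only mutual best matches: each token is the other's highest-cosine counterpart.
    return [(i, j)
            for i in range(len(tokens1)) for j in range(len(tokens2))
            if sim[i].argmax() == j and sim[:, j].argmax() == i]
```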
{ "text": "RTM-DCU (Bicici, 2015) : This team uses referential translation machines (RTM) and a machine translation performance prediction system (MTPP) to predict semantic similarity, where indicators of translatability are used as features (Bi\u00e7ici and Way, 2014) and instance selection for RTM is performed with FDA5 (Bi\u00e7ici and Yuret, 2014) . RTM works as follows: FDA5 \u2192 MTPP \u2192 ML training \u2192 predict.", "cite_spans": [ { "start": 8, "end": 22, "text": "(Bicici, 2015)", "ref_id": "BIBREF5" }, { "start": 232, "end": 254, "text": "(Bi\u00e7ici and Way, 2014)", "ref_id": "BIBREF3" }, { "start": 309, "end": 333, "text": "(Bi\u00e7ici and Yuret, 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "Rob (van der Goot and van Noord, 2015): This system is inspired by a state-of-the-art semantic relatedness prediction system by Bjerva et al. (2014) . It combines features from different parses with lexical and compositional distributional features using a logistic regression model.", "cite_spans": [ { "start": 128, "end": 148, "text": "Bjerva et al. (2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "STANFORD: This team uses a supervised system with sentiment, phrase similarity matrix, and alignment features. The similarity metrics are based on a vector space representation of phrases trained on a large corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "TkLbLiiR (Glava\u0161 et al., 2015) : This team uses a supervised model with about 15 comparison-based numeric features. The most important features are the distributional features weighted by topic-specific information.", "cite_spans": [ { "start": 9, "end": 30, "text": "(Glava\u0161 et al., 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "WHUHJP: This team uses the word2vec tool to train a vector model on the training data, then computes distributed representations of the sentences in the test set and their cosine similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "Yamraj: This team uses word and phrase vectors pre-trained on the Google News data set (about 100 billion words) and Wikipedia articles.
The system relies on the cosine distance between the vectors representing the sentences, computed using the open-source toolkit Gensim.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants' Systems", "sec_num": "6.3" }, { "text": "We have presented the task definition, data annotation and evaluation results of the first Paraphrase and Semantic Similarity In Twitter (PIT) shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "7" }, { "text": "Our analysis provides some initial insights into the relation and the differences between the paraphrase identification and semantic similarity problems. We make all the data, baseline systems and evaluation scripts publicly available. 8 In the future, we plan to extend the task to allow leveraging more information from social networks, for example, by providing the full tweets (and their ids) associated with each sentence and with each topic.", "cite_spans": [ { "start": 231, "end": 232, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "7" }, { "text": "http://www.cis.upenn.edu/~xwe/semeval2015pit/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://aclweb.org/aclwiki/index.php?title=Paraphrase_Identification_(State_of_the_art)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The tokenizer was developed by O'Connor et al. (2010): https://github.com/brendano/tweetmotif 4 The POS tagger was developed by Derczynski et al. (2013) and the NER tagger was developed by Ritter et al. (2011): https://github.com/aritter/twitter_nlp", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "More information about Twitter's APIs: https://dev.twitter.com/docs/api/1.1/overview 6 We use the toolkit developed by O'Connor et al. (2010): https://github.com/brendano/tweetmotif", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The source code and data for WTMF is available at: http://www.cs.columbia.edu/~weiwei/code.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank all participants, reviewers and the SemEval organizers Preslav Nakov, Torsten Zesch, Daniel Cer and David Jurgens. This material is based in part on research sponsored by the NSF under grant IIS-1430651, by DARPA under agreement number FA8750-13-2-0017 (the DEFT program) and through a Google Faculty Research Award to Chris Callison-Burch. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S.
Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Semeval-2012 task 6: A pilot on semantic textual similarity", "authors": [ { "first": "E", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "M", "middle": [], "last": "Diab", "suffix": "" }, { "first": "D", "middle": [], "last": "Cer", "suffix": "" }, { "first": "A", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" } ], "year": 2012, "venue": "Proceedings of Se-mEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agirre, E., Diab, M., Cer, D., and Gonzalez-Agirre, A. (2012). Semeval-2012 task 6: A pilot on se- mantic textual similarity. In Proceedings of Se- mEval.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Inter-coder agreement for computational linguistics", "authors": [ { "first": "R", "middle": [], "last": "Artstein", "suffix": "" }, { "first": "M", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artstein, R. and Poesio, M. (2008). Inter-coder agreement for computational linguistics. Compu- tational Linguistics, 34(4).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "HLTC-HKUST: A neural network paraphrase classifier using translation metrics, semantic roles and lexical similarity features", "authors": [ { "first": "D", "middle": [], "last": "Bertero", "suffix": "" }, { "first": "P", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2015, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bertero, D. and Fung, P. (2015). HLTC-HKUST: A neural network paraphrase classifier using transla- tion metrics, semantic roles and lexical similarity features. In Proceedings of SemEval.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "RTM-DCU: Referential translation machines for semantic similarity", "authors": [ { "first": "E", "middle": [], "last": "Bi\u00e7ici", "suffix": "" }, { "first": "A", "middle": [], "last": "Way", "suffix": "" } ], "year": 2014, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bi\u00e7ici, E. and Way, A. (2014). RTM-DCU: Referen- tial translation machines for semantic similarity. In Proceedings of SemEval.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Optimizing instance selection for statistical machine translation with 8", "authors": [ { "first": "E", "middle": [], "last": "Bi\u00e7ici", "suffix": "" }, { "first": "D", "middle": [], "last": "Yuret", "suffix": "" } ], "year": 2014, "venue": "Speech, and Language Processing (TASLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bi\u00e7ici, E. and Yuret, D. (2014). Optimizing instance selection for statistical machine translation with 8 https://github.com/cocoxu/ SemEval-PIT2015 feature decay algorithms. 
IEEE/ACM Transac- tions On Audio, Speech, and Language Process- ing (TASLP).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "RTM-DCU: Predicting semantic similarity with referential translation machines", "authors": [ { "first": "E", "middle": [], "last": "Bicici", "suffix": "" } ], "year": 2015, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bicici, E. (2015). RTM-DCU: Predicting semantic similarity with referential translation machines. In Proceedings of SemEval.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The meaning factory: Formal semantics for recognizing textual entailment and determining semantic similarity", "authors": [ { "first": "J", "middle": [], "last": "Bjerva", "suffix": "" }, { "first": "J", "middle": [], "last": "Bos", "suffix": "" }, { "first": "R", "middle": [], "last": "Van Der Goot", "suffix": "" }, { "first": "M", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of Se-mEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bjerva, J., Bos, J., van der Goot, R., and Nissim, M. (2014). The meaning factory: Formal seman- tics for recognizing textual entailment and deter- mining semantic similarity. In Proceedings of Se- mEval.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A comparison of vector-based representations for semantic composition", "authors": [ { "first": "W", "middle": [], "last": "Blacoe", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2012, "venue": "Proceedings of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blacoe, W. and Lapata, M. (2012). A comparison of vector-based representations for semantic compo- sition. In Proceedings of EMNLP-CoNLL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Collecting highly parallel data for paraphrase evaluation", "authors": [ { "first": "D", "middle": [ "L" ], "last": "Chen", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Dolan", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, D. L. and Dolan, W. B. (2011). Collecting highly parallel data for paraphrase evaluation. In Proceedings of ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Paraphrase identification as probabilistic quasi-synchronous recognition", "authors": [ { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL-IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Das, D. and Smith, N. A. (2009). Paraphrase identi- fication as probabilistic quasi-synchronous recog- nition. 
In Proceedings of ACL-IJCNLP.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Twitter part-of-speech tagging for all: Overcoming sparse and noisy data", "authors": [ { "first": "L", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "A", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "S", "middle": [], "last": "Clark", "suffix": "" }, { "first": "K", "middle": [], "last": "Bontcheva", "suffix": "" } ], "year": 2013, "venue": "Proceedings of RANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Derczynski, L., Ritter, A., Clark, S., and Bontcheva, K. (2013). Twitter part-of-speech tagging for all: Overcoming sparse and noisy data. In Proceed- ings of RANLP.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources", "authors": [ { "first": "B", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "C", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dolan, B., Quirk, C., and Brockett, C. (2004). Un- supervised construction of large paraphrase cor- pora: Exploiting massively parallel news sources. In Proceedings of COLING.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "ASOBEK: Twitter paraphrase identification with simple overlap features and SVMs", "authors": [ { "first": "A", "middle": [], "last": "Eyecioglu", "suffix": "" }, { "first": "B", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Se-mEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eyecioglu, A. and Keller, B. (2015). ASOBEK: Twitter paraphrase identification with simple overlap features and SVMs. In Proceedings of Se- mEval.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "WordNet. In Theory and Applications of Ontology: Computer Applications", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, C. (2010). WordNet. In Theory and Ap- plications of Ontology: Computer Applications. Springer.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A semantic similarity approach to paraphrase detection", "authors": [ { "first": "S", "middle": [], "last": "Fernando", "suffix": "" }, { "first": "M", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics UK (CLUK) 11th Annual Research Colloquium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando, S. and Stevenson, M. (2008). A semantic similarity approach to paraphrase detection. Com- putational Linguistics UK (CLUK) 11th Annual Research Colloquium.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Brown corpus manual", "authors": [ { "first": "W", "middle": [ "N" ], "last": "Francis", "suffix": "" }, { "first": "H", "middle": [], "last": "Kucera", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francis, W. N. and Kucera, H. (1979). Brown cor- pus manual. Technical report, Brown University. 
Department of Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Detecting Twitter paraphrases with TweetingJay", "authors": [], "year": null, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Detecting Twitter paraphrases with TweetingJay. In Proceedings of SemEval.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Modeling sentences in the latent space", "authors": [ { "first": "W", "middle": [], "last": "Guo", "suffix": "" }, { "first": "M", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2012, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guo, W. and Diab, M. (2012). Modeling sentences in the latent space. In Proceedings of ACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Fast tweet retrieval with compact binary codes", "authors": [ { "first": "W", "middle": [], "last": "Guo", "suffix": "" }, { "first": "W", "middle": [], "last": "Liu", "suffix": "" }, { "first": "M", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guo, W., Liu, W., and Diab, M. (2014). Fast tweet retrieval with compact binary codes. In Proceed- ings of COLING.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "UMBC EBIQUITY-CORE: Semantic textual similarity systems", "authors": [ { "first": "L", "middle": [], "last": "Han", "suffix": "" }, { "first": "A", "middle": [], "last": "Kashyap", "suffix": "" }, { "first": "T", "middle": [], "last": "Finin", "suffix": "" }, { "first": "J", "middle": [], "last": "Mayfield", "suffix": "" }, { "first": "J", "middle": [], "last": "Weese", "suffix": "" } ], "year": 2013, "venue": "Proceedings of *SEM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han, L., Kashyap, A., Finin, T., Mayfield, J., and Weese, J. (2013). UMBC EBIQUITY-CORE: Se- mantic textual similarity systems. In Proceedings of *SEM.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Semantic relatedness using salient semantic analysis", "authors": [ { "first": "S", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2011, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hassan, S. and Mihalcea, R. (2011). Semantic re- latedness using salient semantic analysis. In Pro- ceedings of AAAI.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "OntoNotes: the 90% solution", "authors": [ { "first": "E", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "L", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2006, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hovy, E., Marcus, M., Palmer, M., Ramshaw, L., and Weischedel, R. (2006). OntoNotes: the 90% solution. 
In Proceedings of HLT-NAACL.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semantic similarity of short texts", "authors": [ { "first": "A", "middle": [], "last": "Islam", "suffix": "" }, { "first": "D", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2007, "venue": "Proceedings of RANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Islam, A. and Inkpen, D. (2007). Semantic similarity of short texts. In Proceedings of RANLP.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Discriminative improvements to distributional sentence similarity", "authors": [ { "first": "Y", "middle": [], "last": "Ji", "suffix": "" }, { "first": "J", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji, Y. and Eisenstein, J. (2013). Discriminative im- provements to distributional sentence similarity. In Proceedings of EMNLP.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "CDTDS: Predicting paraphrases in Twitter via support vector regression", "authors": [ { "first": "R.-M", "middle": [], "last": "Karampatsis", "suffix": "" } ], "year": 2015, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karampatsis, R.-M. (2015). CDTDS: Predicting paraphrases in Twitter via support vector regres- sion. In Proceedings of SemEval.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Extending 4lang using monolingual dictionaries", "authors": [ { "first": "A", "middle": [], "last": "Kornai", "suffix": "" }, { "first": "M", "middle": [], "last": "Makrai", "suffix": "" }, { "first": "D", "middle": [], "last": "Nemeskey", "suffix": "" }, { "first": "G", "middle": [], "last": "Recski", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kornai, A., Makrai, M., Nemeskey, D., and Recski, G. (2015). Extending 4lang using monolingual dictionaries. Unpublished manuscript.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Generating phrasal and sentential paraphrases: A survey of data-driven methods", "authors": [ { "first": "N", "middle": [], "last": "Madnani", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 2010, "venue": "Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Madnani, N. and Dorr, B. J. (2010). Generating phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Re-examining machine translation metrics for paraphrase identification", "authors": [ { "first": "N", "middle": [], "last": "Madnani", "suffix": "" }, { "first": "J", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "M", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 2012, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Madnani, N., Tetreault, J., and Chodorow, M. (2012). Re-examining machine translation met- rics for paraphrase identification. 
In Proceedings of NAACL-HLT.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Corpus-based and knowledge-based measures of text semantic similarity", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "C", "middle": [], "last": "Corley", "suffix": "" }, { "first": "C", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2006, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihalcea, R., Corley, C., and Strapparava, C. (2006). Corpus-based and knowledge-based mea- sures of text semantic similarity. In Proceedings of AAAI.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The impact of frequency on summarization", "authors": [ { "first": "A", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "L", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nenkova, A. and Vanderwende, L. (2005). The im- pact of frequency on summarization. Technical report, Microsoft Research. MSR-TR-2005-101.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "FBK-HLT: An application of semantic textual similarity for paraphrase and semantic similarity in Twitter", "authors": [ { "first": "Ngoc Phuoc An", "middle": [], "last": "Vo", "suffix": "" }, { "first": "S", "middle": [ "M" ], "last": "Popescu", "suffix": "" }, { "first": "O", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ngoc Phuoc An Vo, S. M. and Popescu, O. (2015). FBK-HLT: An application of semantic textual similarity for paraphrase and semantic similarity in Twitter. In Proceedings of SemEval.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Tweetmotif: Exploratory search and topic summarization for Twitter", "authors": [ { "first": "B", "middle": [], "last": "O'connor", "suffix": "" }, { "first": "M", "middle": [], "last": "Krieger", "suffix": "" }, { "first": "Ahn", "middle": [], "last": "", "suffix": "" }, { "first": "D", "middle": [], "last": "", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ICWSM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "O'Connor, B., Krieger, M., and Ahn, D. (2010). Tweetmotif: Exploratory search and topic sum- marization for Twitter. In Proceedings of ICWSM.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Using paraphrases for improving first story detection in news and twitter", "authors": [ { "first": "S", "middle": [], "last": "Petrovi\u0107", "suffix": "" }, { "first": "M", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "V", "middle": [], "last": "Lavrenko", "suffix": "" } ], "year": 2012, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petrovi\u0107, S., Osborne, M., and Lavrenko, V. (2012). Using paraphrases for improving first story de- tection in news and twitter. 
In Proceedings of NAACL-HLT.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Paraphrase recognition via dissimilarity significance classification", "authors": [ { "first": "L", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "M.-Y", "middle": [], "last": "Kan", "suffix": "" }, { "first": "T.-S", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiu, L., Kan, M.-Y., and Chua, T.-S. (2006). Para- phrase recognition via dissimilarity significance classification. In Proceedings of EMNLP.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Named entity recognition in tweets: an experimental study", "authors": [ { "first": "A", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "S", "middle": [], "last": "Clark", "suffix": "" }, { "first": "O", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ritter, A., Clark, S., and Etzioni, O. (2011). Named entity recognition in tweets: an experimental study. In Proceedings of EMNLP.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Some aspects of the sequential design of experiments", "authors": [ { "first": "H", "middle": [], "last": "Robbins", "suffix": "" } ], "year": 1985, "venue": "Herbert Robbins Selected Papers", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robbins, H. (1985). Some aspects of the sequen- tial design of experiments. In Herbert Robbins Selected Papers. Springer.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Paraphrase identification with lexico-syntactic graph subsumption", "authors": [ { "first": "V", "middle": [], "last": "Rus", "suffix": "" }, { "first": "P", "middle": [ "M" ], "last": "Mccarthy", "suffix": "" }, { "first": "M", "middle": [ "C" ], "last": "Lintean", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Mcnamara", "suffix": "" }, { "first": "A", "middle": [ "C" ], "last": "Graesser", "suffix": "" } ], "year": 2008, "venue": "Proceedings of FLAIRS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rus, V., McCarthy, P. M., Lintean, M. C., McNa- mara, D. S., and Graesser, A. C. (2008). Para- phrase identification with lexico-syntactic graph subsumption. In Proceedings of FLAIRS.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Ebiquity: Paraphrase and semantic similarity in Twitter using skipgrams", "authors": [ { "first": "T", "middle": [], "last": "Satyapanich", "suffix": "" }, { "first": "H", "middle": [], "last": "Gao", "suffix": "" }, { "first": "T", "middle": [], "last": "Finin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satyapanich, T., Gao, H., and Finin, T. (2015). Ebiq- uity: Paraphrase and semantic similarity in Twit- ter using skipgrams. 
In Proceedings of SemEval.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection", "authors": [ { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "E", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "J", "middle": [], "last": "Pennin", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "A", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2011, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Socher, R., Huang, E. H., Pennin, J., Manning, C. D., and Ng, A. (2011). Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proceedings of NIPS.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Efficient crowdsourcing of unknown experts using multi-armed bandits", "authors": [ { "first": "L", "middle": [], "last": "Tran-Thanh", "suffix": "" }, { "first": "S", "middle": [], "last": "Stein", "suffix": "" }, { "first": "A", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "N", "middle": [ "R" ], "last": "Jennings", "suffix": "" } ], "year": 2012, "venue": "Proceedings of ECAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tran-Thanh, L., Stein, S., Rogers, A., and Jennings, N. R. (2012). Efficient crowdsourcing of unknown experts using multi-armed bandits. In Proceedings of ECAI.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "ROB: Using semantic meaning to recognize paraphrases", "authors": [ { "first": "R", "middle": [], "last": "Van Der Goot", "suffix": "" }, { "first": "G", "middle": [], "last": "Van Noord", "suffix": "" } ], "year": 2015, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "van der Goot, R. and van Noord, G. (2015). ROB: Using semantic meaning to recognize paraphrases. In Proceedings of SemEval.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Beyond SumBasic: Task-focused summarization with sentence simplification and lexical expansion", "authors": [ { "first": "L", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "H", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "A", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2007, "venue": "Information Processing & Management", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanderwende, L., Suzuki, H., Brockett, C., and Nenkova, A. (2007). Beyond SumBasic: Task-focused summarization with sentence simplification and lexical expansion.
Information Processing & Management, 43.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Using dependency-based features to take the parafarce out of paraphrase", "authors": [ { "first": "S", "middle": [], "last": "Wan", "suffix": "" }, { "first": "M", "middle": [], "last": "Dras", "suffix": "" }, { "first": "R", "middle": [], "last": "Dale", "suffix": "" }, { "first": "C", "middle": [], "last": "Paris", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Australasian Language Technology Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wan, S., Dras, M., Dale, R., and Paris, C. (2006). Using dependency-based features to take the parafarce out of paraphrase. In Proceedings of the Australasian Language Technology Workshop.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Paraphrasing 4 microblog normalization", "authors": [ { "first": "L", "middle": [], "last": "Wang", "suffix": "" }, { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "A", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "I", "middle": [], "last": "Trancoso", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, L., Dyer, C., Black, A. W., and Trancoso, I. (2013). Paraphrasing 4 microblog normalization. In Proceedings of EMNLP.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Data-Driven Approaches for Paraphrasing Across Language Variations. PhD thesis", "authors": [ { "first": "W", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, W. (2014). Data-Driven Approaches for Paraphrasing Across Language Variations. PhD thesis, Department of Computer Science, New York University.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Extracting lexically divergent paraphrases from Twitter", "authors": [ { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "A", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Dolan", "suffix": "" }, { "first": "Y", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, W., Ritter, A., Callison-Burch, C., Dolan, W. B., and Ji, Y. (2014). Extracting lexically divergent paraphrases from Twitter. Transactions of the Association for Computational Linguistics (TACL), 2(1).", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Gathering and generating paraphrases from twitter with application to normalization", "authors": [ { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "A", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth Workshop on Building and Using Comparable Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, W., Ritter, A., and Grishman, R. (2013).
Gathering and generating paraphrases from twitter with application to normalization. In Proceedings of the Sixth Workshop on Building and Using Comparable Corpora.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Linguistic redundancy in twitter", "authors": [ { "first": "F", "middle": [ "M" ], "last": "Zanzotto", "suffix": "" }, { "first": "M", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "K", "middle": [], "last": "Tsioutsiouliklis", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zanzotto, F. M., Pennacchiotti, M., and Tsioutsiouliklis, K. (2011). Linguistic redundancy in twitter. In Proceedings of EMNLP.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "MITRE: Seven systems for semantic similarity in tweets", "authors": [ { "first": "G", "middle": [], "last": "Zarrella", "suffix": "" }, { "first": "J", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "E", "middle": [ "M" ], "last": "Merkhofer", "suffix": "" }, { "first": "L", "middle": [], "last": "Strickhart", "suffix": "" } ], "year": 2015, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zarrella, G., Henderson, J., Merkhofer, E. M., and Strickhart, L. (2015). MITRE: Seven systems for semantic similarity in tweets. In Proceedings of SemEval.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "ECNU: Boosting performance for paraphrase and semantic similarity in Twitter by leveraging word embeddings", "authors": [ { "first": "J", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "M", "middle": [], "last": "Lan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, J. and Lan, M. (2015). ECNU: Boosting performance for paraphrase and semantic similarity in Twitter by leveraging word embeddings. In Proceedings of SemEval.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "text": "The proportion of paraphrases (percentage of positive votes from annotators) varies greatly across different topics. Automatic filtering in Section 4.4 roughly doubles the paraphrase yield.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Numbers of paraphrases collected by different methods. The annotation efficiency (3, 4, or 5 positive votes are regarded as paraphrases) is significantly improved by the sentence filtering and Multi-Armed Bandit (MAB) based topic selection.", "uris": null, "num": null }, "TABREF1": { "type_str": "table", "text": "
Table 1: Representative examples from PIT-2015 Twitter Paraphrase Corpus
# Unique Sentences # Sentence Pairs # Paraphrase # Non-Paraphrase # Debatable
Train13231130633996 (30.6%)7534 (57.7%)1533 (11.7%)
Dev477247271470 (31.1%)2672 (56.5%)585 (12.4%)
Test1295972175 (18.0%)663 (68.2%)134 (13.8%)
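As a quick arithmetic check on the table above: the paraphrase, non-paraphrase, and debatable counts partition the sentence pairs of each split, and the bracketed percentages are their shares of the pair total. A minimal Python sketch, with counts copied directly from the table, reproduces both:

```python
# Corpus statistics copied from the table above: sentence-pair totals
# and the three-way label counts for each split.
splits = {
    "Train": {"pairs": 13063, "paraphrase": 3996, "non-paraphrase": 7534, "debatable": 1533},
    "Dev":   {"pairs": 4727,  "paraphrase": 1470, "non-paraphrase": 2672, "debatable": 585},
    "Test":  {"pairs": 972,   "paraphrase": 175,  "non-paraphrase": 663,  "debatable": 134},
}

labels = ("paraphrase", "non-paraphrase", "debatable")
for name, s in splits.items():
    # The three classes partition the sentence pairs of the split.
    assert sum(s[label] for label in labels) == s["pairs"], name
    # Reproduce the bracketed percentages, e.g. Train paraphrase -> 30.6%.
    shares = ", ".join(f"{label} {100 * s[label] / s['pairs']:.1f}%" for label in labels)
    print(f"{name}: {shares}")
```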
", "html": null, "num": null }, "TABREF2": { "type_str": "table", "text": "", "content": "", "html": null, "num": null }, "TABREF3": { "type_str": "table", "text": "", "content": "
Table 2 shows the basic statistics of the corpus. The sentences are preprocessed with tokenization, POS tags, and named entity tags. The training and development data consist of 17,790 sentence pairs posted between April 24th and May 3rd, 2013 from 500+ trending topics featured on Twitter (excluding hashtags); the division into training and development sets is random. Each sentence pair is annotated by 5 different crowdsourcing workers. For the test set, we obtain both crowdsourced and expert labels on 972 sentence pairs from 20 randomly sampled Twitter trending topics between May 13th and June 10th, 2013; we use the expert labels in this SemEval evaluation. Our dataset is more realistic and balanced: it contains about 70% non-paraphrases, compared with only 34% non-paraphrases in the benchmark Microsoft Paraphrase Corpus derived from news articles.
", "html": null, "num": null } } } }