|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:31:46.895672Z" |
|
}, |
|
"title": "Finnish Paraphrase Corpus", |
|
"authors": [ |
|
{ |
|
"first": "Jenna", |
|
"middle": [], |
|
"last": "Kanerva", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Turku", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Turku", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Li-Hsin", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Turku", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Iiro", |
|
"middle": [], |
|
"last": "Rastas", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Turku", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Valtteri", |
|
"middle": [], |
|
"last": "Skantsi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Turku", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jemina", |
|
"middle": [], |
|
"last": "Kilpel\u00e4inen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Turku", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hanna-Mari", |
|
"middle": [], |
|
"last": "Kupari", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Turku", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jenna", |
|
"middle": [], |
|
"last": "Saarni", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Turku", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Maija", |
|
"middle": [], |
|
"last": "Sev\u00f3n", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Turku", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Otto", |
|
"middle": [], |
|
"last": "Tarkka", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Turku", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we introduce the first fully manually annotated paraphrase corpus for Finnish containing 53,572 paraphrase pairs harvested from alternative subtitles and news headings. Out of all paraphrase pairs in our corpus 98% are manually classified to be paraphrases at least in their given context, if not in all contexts. Additionally, we establish a manual candidate selection method and demonstrate its feasibility in high quality paraphrase selection in terms of both cost and quality.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we introduce the first fully manually annotated paraphrase corpus for Finnish containing 53,572 paraphrase pairs harvested from alternative subtitles and news headings. Out of all paraphrase pairs in our corpus 98% are manually classified to be paraphrases at least in their given context, if not in all contexts. Additionally, we establish a manual candidate selection method and demonstrate its feasibility in high quality paraphrase selection in terms of both cost and quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The powerful language models that have recently become available in NLP have also resulted in a distinct shift towards more meaning-oriented tasks for model fine-tuning and evaluation. The most typical example is entailment detection, with the paraphrase task raising in interest recently. Paraphrases, texts that express the same meaning with differing words (Bhagat and Hovy, 2013) , arealready by their very definition -a suitable target to induce and evaluate models' ability to represent meaning. Paraphrase detection and generation has numerous direct applications in NLP (Madnani and Dorr, 2010) , among others in question answering (Soni and Roberts, 2019) , plagiarism detection (Altheneyan and Menai, 2019), and machine translation (Mehdizadeh Seraj et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 360, |
|
"end": 383, |
|
"text": "(Bhagat and Hovy, 2013)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 578, |
|
"end": 602, |
|
"text": "(Madnani and Dorr, 2010)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 640, |
|
"end": 664, |
|
"text": "(Soni and Roberts, 2019)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 742, |
|
"end": 773, |
|
"text": "(Mehdizadeh Seraj et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Research in paraphrase naturally depends on the availability of datasets for the task. We will review these in more detail in Section 2, nevertheless, barring few exceptions, paraphrase corpora are typically large and gathered automatically using one of several possible heuristics. Typically a comparatively small section of the corpus is manually classified to serve as a test set for method development. The heuristics used to gather and filter the corpora naturally introduce a bias to the corpora which, as we will show later in this paper, demonstrates itself as a tendency towards short examples with a relatively high lexical overlap. Addressing this bias to the extent possible, and providing a corpus with longer, lexically more diverse paraphrases is one of the motivations for our work. The other motivation is to cater for the needs of Finnish NLP, and improve the availability of high-quality, manually annotated paraphrase data specifically for the Finnish language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we therefore aim for the following contributions: Firstly, we establish and test a fully manual procedure for paraphrase candidate selection with the aim of avoiding a selection bias towards short, lexically overlapping candidates. Secondly, we release the first fully manually annotated paraphrase corpus of Finnish, sufficiently large for model training. The number of manually annotated examples makes the released dataset one of the largest, if not the largest manually annotated paraphrase corpus for any language. And thirdly, we report the experiences, tools, and baseline results on this new dataset, hopefully allowing other language NLP communities to assess the potential of developing a similar corpus for other languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Statistics of the different paraphrase corpora most relevant to our work are summarized in Table 1 . For English, the Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005) is extracted from an online news collection by applying heuristics to recognize candidate document pairs and candidate sentences from the documents. Paraphrase candidates are subsequently filtered using a classifier, before the final manual binary annotation (paraphrase or not). In the Twitter URL Corpus (TUC) (Lan et al., 2017) Table 1 : Summary of available paraphrase corpora of naturally occurring sentential paraphrases. The corpora sizes include the total amount of pairs in the corpus (i.e. also those labeled as non-paraphrases), thus the actual number of good paraphrases depend on the class distribution of each corpus. *The highest quality cutpoint estimated by the authors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 501, |
|
"end": 519, |
|
"text": "(Lan et al., 2017)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 98, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 527, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "shared URLs in news related tweets. All candidates are manually binary-labeled. ParaSCI (Dong et al., 2021) is created by collecting paraphrase candidates from ACL and arXiv papers using heuristics based on term definitions, citation information as well as sentence embedding similarity. The extracted candidates are automatically filtered, but no manually annotated data is available. PARADE (He et al., 2020) is created by collecting online user-generated flashcards for computer science related concepts. All definitions for the same term are first clustered, and paraphrase candidates are extracted only among a cluster to reduce noise in candidate selection. All extracted candidates are manually annotated using a scheme with four labels. Quora Question Pairs (QQP) 1 contains question headings from the forum with binary labels into duplicate-or-not questions. The QQP dataset is larger than other datasets, however, although including human-produced labels, the labeling is not originally designed for paraphrasing and the dataset providers warn about labeling not guaranteed to be perfect. Another common approach for automatic paraphrase identification is through language pivoting using multilingual parallel datasets. Here sentence alignments are used to recognize whether two different surface realizations share an identical or near-identical translation, assuming that the identical translation likely implies a paraphrase. There are two different multilingual paraphrase datasets automatically extracted using language pivoting, Opusparcus (Creutz, 2018) and TaPaCo (Scherrer, 1 data.quora.com/First-Quora-Dataset-\\ Release-Question-Pairs 2020), both including a Finnish subsection. Opusparcus consists of candidate paraphrases automatically extracted from the alternative translations of movie and TV show subtitles after automatic sentence alignment. While the candidate paraphrases are automatically extracted, a small subset of a few thousand paraphrase pairs for each language is manually annotated. TaPaCo contains candidate paraphrases automatically extracted from the Tatoeba dataset 2 , which is a multilingual crowdsourced database of sentences and their translations. Like Opusparcus, TaPaCo is based on language pivoting, where all alternative translations for the same statement are collected. However, unlike most other corpora, the candidate paraphrases are grouped into 'sets' instead of pairs, and all sentences in a set are considered equivalent in meaning. TaPaCo does not include any manual validation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 107, |
|
"text": "(Dong et al., 2021)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 393, |
|
"end": 410, |
|
"text": "(He et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1556, |
|
"end": 1570, |
|
"text": "(Creutz, 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1582, |
|
"end": 1592, |
|
"text": "(Scherrer,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1593, |
|
"end": 1594, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As discussed previously, we elect to rely on fully manual candidate extraction as a measure against any bias introduced through heuristic candidate selection methods. In order to obtain sufficiently many paraphrases for the person-months spent, the text sources need to be paraphrase-rich, i.e. have a high probability for naturally occurring paraphrases. Such text sources include for example news headings and articles reporting on the same news, alternative translations of the same source material, different student essays and exam answers for the same assignment, and related questions with their replies in discussion fora, where one can assume different writers using distinct words to state similar meaning. For this first version of the corpus, we use two different text sources: alternative Finnish subtitles for the same movies or TV episodes, and headings from news articles discussing the same event in two different Finnish news sites.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Selection", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "OpenSubtitles 3 distributes an extensive collection of user generated subtitles for different movies and TV episodes. These subtitles are available in multiple languages, but surprisingly often the same movie or episode have versions in a single language, originating from different sources. This gives an opportunity to exploit the natural variation produced by independent translators, and by comparing two different subtitles for a single movie or episode, there is a high likelihood of finding naturally occurring paraphrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alternative Subtitles", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "From the database dump of OpenSubtitles2018 obtained through OPUS (Tiedemann, 2012) , we selected all movies and TV episodes with at least two Finnish subtitle versions. In case more versions are available, the two most lexically differing are selected for paraphrase extraction. We measure lexical similarity by TF-IDF weighted document vectors. Specifically, we create TF-IDF vectors with TfidfVectorizer from the sklearn package. We limit the number of features to 200K, apply sublinear scaling, use character 4-grams created out of text inside word boundaries, and otherwise use the default settings. To filter out subtitle pairs with low density of interesting paraphrase candidates, pairs with too high or too low cosine similarity of TF-IDF vectors are discarded. High similarity usually reflects identical subtitles with minor formatting differences, while low similarity is typically caused by incorrect identifiers in the source data. The two selected subtitle versions are then roughly aligned using the timestamps, and divided into segments of 15 minutes. For every movie/episode, the annotators are assigned one random such segment, the two versions presented side-by-side in a custom tool, allowing for fast selection of paraphrase candidates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 83, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alternative Subtitles", |
|
"sec_num": "3.1" |
|
}, |
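{

"text": "To make the version selection procedure above concrete, the following minimal Python sketch (an illustration under the stated settings, not the exact corpus-construction code; the low/high similarity cut-offs are placeholder values) selects the two most lexically differing subtitle versions using TF-IDF vectors and cosine similarity:\n\nfrom itertools import combinations\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_similarity\n\ndef select_version_pair(versions, low=0.1, high=0.9):\n    # versions: alternative Finnish subtitle texts for one movie/episode\n    vectorizer = TfidfVectorizer(analyzer='char_wb', ngram_range=(4, 4), sublinear_tf=True, max_features=200000)\n    tfidf = vectorizer.fit_transform(versions)\n    sims = cosine_similarity(tfidf)\n    best = None\n    for i, j in combinations(range(len(versions)), 2):\n        s = sims[i, j]\n        # discard near-identical and clearly unrelated pairs, keep the most dissimilar of the rest\n        if low <= s <= high and (best is None or s < best[0]):\n            best = (s, i, j)\n    return best  # (similarity, index_a, index_b), or None if every pair was filtered out",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Alternative Subtitles",

"sec_num": "3.1"

},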
|
{ |
|
"text": "In total, we were able to obtain at least one pair of aligned subtitle versions for 1,700 unique movies and TV series. While for each unique movie only one pair of aligned subtitles is se-lected for annotation, TV series comprise different episodes, dealing with the same plot and characters, and therefore overlapping in language. After an initial annotation period, we noticed a topic bias towards a limited number of TV series with a large number of episodes, and decided to limit the number of annotated episodes to 10 per each TV series in all subsequent annotation. In total, close to 3,000 different movies/episodes are used for manual paraphrase candidate extraction, each including exactly one pair of aligned subtitles.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alternative Subtitles", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We have downloaded news articles through open RSS feeds of different Finnish news sites during 2017-2021, resulting in a substantial collection of news from numerous complementary sources. For this present work, we narrow the data down to two sources: the Finnish Broadcasting Company (YLE) and Helsingin Sanomat (HS, English translation: Helsinki News). We align the news using a 7-day sliding window on time of publication, combined with cosine similarity of TF-IDF-weighted document vectors induced on the article body, obtaining article pairs likely reporting on the same event. The settings of the TF-IDF vectors is the same as in Section 3.1. We use the article headings as paraphrase candidates, striving to select maximally dissimilar headings of maximally similar articles as the most promising candidates for nontrivial paraphrases. In practice, we used a simple grid search and human judgement to establish the most promising region of article body and heading similarity values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "News Headings", |
|
"sec_num": "3.2" |
|
}, |
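{

"text": "A rough sketch of this automatic heading candidate selection, assuming each article is represented as a dictionary with 'published', 'heading' and 'body' fields; the similarity thresholds below are placeholders for the values actually chosen via grid search and human judgement:\n\nfrom datetime import timedelta\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_similarity\n\ndef heading_candidates(yle, hs, min_body_sim=0.5, max_head_sim=0.5, window_days=7):\n    vec = TfidfVectorizer(analyzer='char_wb', ngram_range=(4, 4), sublinear_tf=True)\n    vec.fit([a['body'] for a in yle + hs] + [a['heading'] for a in yle + hs])\n    candidates = []\n    for a in yle:\n        for b in hs:\n            # only align articles published within the sliding time window\n            if abs(a['published'] - b['published']) > timedelta(days=window_days):\n                continue\n            body_sim = cosine_similarity(vec.transform([a['body']]), vec.transform([b['body']]))[0, 0]\n            head_sim = cosine_similarity(vec.transform([a['heading']]), vec.transform([b['heading']]))[0, 0]\n            # maximally similar article bodies with maximally dissimilar headings\n            if body_sim >= min_body_sim and head_sim <= max_head_sim:\n                candidates.append((a['heading'], b['heading']))\n    return candidates",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "News Headings",

"sec_num": "3.2"

},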
|
{ |
|
"text": "The paraphrase annotation is comprised of multiple annotation steps, including candidate selection as described above, manual classification of candidates based on an annotation scheme, as well as the possibility of rewriting partial paraphrases into full paraphrases. Next, we will discuss the different paraphrase types represented in our annotation scheme, and afterwards the annotation workflow is discussed in a more detailed fashion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase Annotation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Instead of a simple yes/no (equivalent or not equivalent) as in MRPC (Dolan and Brockett, 2005) or 1-4 scale (bad, mostly bad, mostly good and good) as in Opusparcus (Creutz, 2018) , our annotation scheme is adapted to capture the level of paraphrasability in a more detailed fashion. Our annotation scheme uses the base scale 1-4 similar to other paraphrase corpora, enriched with additional subcategories (flags) for distinguishing different types of paraphrases which would otherwise fall from the label 4 (good) into label 3 (mostly good).", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 95, |
|
"text": "(Dolan and Brockett, 2005)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 166, |
|
"end": 180, |
|
"text": "(Creutz, 2018)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "An example for each of the categories discussed below is shown in Table 2 (English translations available in Appendix A). Each candidate pair is first evaluated in terms of the base scale numbered from 1 to 4, where:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 73, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Label 4 is a full (perfect) paraphrase in all reasonably imaginable contexts, meaning one can always be replaced with the other without changing the meaning. This ability to substitute one for the other in any context is the primary test for label 4 used in the annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Label 3 is a context dependent paraphrase, where the meaning of the two statements is the same in the present context, but not necessarily in other contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Label 2 is related but not a paraphrase, where there is a clear relation between the two statements, yet they cannot be considered paraphrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Label 1 is unrelated, there being no reasonable relation between the two statements, most likely a false positive in candidate selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "If labeling a candidate pair is not possible for a reason, or giving a label would not serve the desired purpose (e.g. wrong language or identical statements), the example can be skipped with the label x.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "With the base labels alone, a great number of candidate paraphrases would fail the substitution test for label 4 and be classified label 3. This is especially so for longer text segments which are less likely to express strictly the same meaning. In order to avoid populating the label 3 category with a very diverse set of paraphrases, we opt to introduce flags for finer sub-categorization and thus support a broader range of downstream applications of the corpus. These flags are always attached to label 4 (subcategories of full paraphrases), meaning the paraphrases are not fully interchangeable due to the specified reason, but, crucially, are context-independent, unlike label 3. The possible flags are: Subsumption (> or <) where one of the statements is more detailed and the other more general. The relation of the pair is therefore directional, where the more detailed statement can be replaced with the more general one in all contexts, but not the other way around. The two common cases are one statement having additional minor details the other omits, and one statement being ambiguous while the other not. If there is a justification for crossing directionality (one statement being more detailed in one aspect while the other in another aspect), the pair falls into label 3 as the directional replacement test does not hold anymore.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Style (s) for tone or register difference in cases where the meaning of the two statements is the same, but the statements differ in tone or register such that in certain situations, they would not be interchangeable. For example, if one statement uses pejorative language or profanities, while the other is neutral, or one is clearly colloquial language while the other is formal. The style flag also includes differences in the level of politeness, uncertainty, and strength of the statements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Minor deviation (i) marks in most cases minimal differences in meaning (typically \"this\" vs. \"that\") as well as easily traceable differences in grammatical number, person, tense or such. Some applications might consider these as label 4 for all practical purposes (e.g. information retrieval), while others should regard these as label 2 (e.g. automatic rephrasing).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The flags are independent of each other and can be combined in the annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Scheme", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Given two aligned documents as described in Section 3, an annotator first extracts all candidate paraphrases. These can be anything between a short phrase and several sentences long, typically being about a sentence long. The annotators are encouraged to select as long continuous statements as possible, nevertheless at the same time avoiding a bias towards subsumption flag by over-extending one of the candidates. The candidate paraphrases are subsequently transferred into a classification annotation tool. In case of news headings, where the candidates are extracted automatically, the candidates are introduced directly in the classification tool without any manual extraction step. In the classification tool, the annotator assigns a label for each candidate. The candidate paraphrases are shown one pair at a time, and for each pair the document context is available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Workflow", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In addition to assigning a label and optional flags for a candidate pair, the classification tool provides an option to rewrite the statements if the classification is anything else than label 4 without any flags. The annotators are instructed to rewrite the candidates in cases, where a simple fix, for example word or phrase deletion, addition or replacement with a synonym or changing an inflection, can be easily constructed. Rewrites must be such that the annotated label for the rewritten example is 4. In cases where the rewrite would require more complicated changes or would take too much time, the annotators are instructed to move on to the next candidate pair. One rewrite done during the data annotation is illustrated in Table 2. The annotators can mark unsure, difficult or otherwise interesting cases for later discussion in daily annotation meetings. The annotators also communicate online, for instance seeking a quick validation for a particular decision. The work is further supported by a jointly produced 17-page annotation manual, which is revised and extended regularly.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 735, |
|
"end": 743, |
|
"text": "Table 2.", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Workflow", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The annotation work is carried out by 5 annotators each working full-time or part-time throughout the 4 month period used to construct the first release version of the corpus. Each annotator has a strong background in language studies by having an academic degree or ongoing studies in a field related to languages or linguistics. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Workflow", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The released corpus includes 45,663 naturally occurring paraphrases with additional 7,909 rewrites, resulting in the total size of 53,572 paraphrase pairs. The data is randomly divided into training, development and test sections using 80/10/10 split, however, with a restriction of all paraphrases from the same movie or TV episode being in the same section. Basic data statistics are summarized in Table 3 , and label distribution in Figure 1 . Notably, the amount of candidate pairs labeled as not paraphrases (labels 1 or 2 in our scheme) is almost non-existent, owing to the manual candidate selection step in subtitles data from which the vast majority of the corpus data originates. Only 5.6% of paraphrase pairs in the corpus originate from the automated candidate selection from news data. The amount of candidates labeled with label 1 or label x is insignificantly small, therefore we decided to discard these from the final corpus. In Figure 2 we measure the density of different label combinations in the training set conditioned on cosine similarity of paraphrase pairs based on TF-IDF weighted character n-grams of lengths 2-4. Up to cosine similarity of 0.5 the most common labels are evenly represented, while the prevalence of label 4 increases throughout the range and dominates the sparsely populated range of similarities over 0.8.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 400, |
|
"end": 407, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 444, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 946, |
|
"end": 954, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus Statistics and Evaluation", |
|
"sec_num": "5" |
|
}, |
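{

"text": "The document-level restriction on the split can be expressed with a group-aware splitter; the sketch below illustrates one possible way to do this with scikit-learn's GroupShuffleSplit (an assumption for illustration, not necessarily the exact procedure used), where every paraphrase pair carries an identifier of its source movie, episode or article pair:\n\nfrom sklearn.model_selection import GroupShuffleSplit\n\ndef split_by_document(pairs, doc_ids, seed=0):\n    # 80% of the data goes to training, keeping all pairs from one document together\n    outer = GroupShuffleSplit(n_splits=1, train_size=0.8, random_state=seed)\n    train_idx, rest_idx = next(outer.split(pairs, groups=doc_ids))\n    # the remaining 20% is split in half into development and test sections\n    rest_groups = [doc_ids[i] for i in rest_idx]\n    inner = GroupShuffleSplit(n_splits=1, train_size=0.5, random_state=seed)\n    dev_part, test_part = next(inner.split(rest_idx, groups=rest_groups))\n    train = [pairs[i] for i in train_idx]\n    dev = [pairs[rest_idx[i]] for i in dev_part]\n    test = [pairs[rest_idx[i]] for i in test_part]\n    return train, dev, test",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Corpus Statistics and Evaluation",

"sec_num": "5"

},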
|
{ |
|
"text": "After the initial annotator training phase most of the annotation work is carried out as single annotation. In order to monitor annotation consistency, double annotation batches are assigned regularly. In double annotation, one annotator first extracts the candidate paraphrases from the aligned documents, but later on these candidates are assigned to two different annotators, who annotate the labels for these independently from each other. Next, these two individual annotations are merged and conflicting labels are resolved together with the whole annotation team in a meeting. These consensus annotations constitute a gold standard against which individual annotators can be measured.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Quality", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "A total of 1,175 examples are double annotated (2.5% of the data 4 ). Most of these are annotated by exactly two annotators, however, some examples may include annotations from more than two annotators, and thus the total amount of individual annotations for which the gold standard label exists is 2,513. We measure the agreement of individually annotated examples against the gold standard annotations in terms of accuracy, i.e. the proportion of individually annotated examples with correctly assigned label. The overall accuracy is 68.7% when the base label (labels 1-4) as well as all additional flags are taken into consideration. When discarding the least common flags s and i and evaluating only base labels and directional subsumption flags, the overall accuracy is 72.9%. To compare the observed agreement to previous studies on paraphrase annotation, the Opusparcus annotation agreement is approximately 64% on Finnish development set and 67% on test set (calculated from numbers in Table 4 and Table 5 in Creutz (2018)). The Opusparcus uses an annotation scheme with four labels, similar to our base label scheme. In MRPC, the reported agreement score is 84% on a binary paraphrase-or-not scheme. While direct comparison is difficult due to the different annotation schemes and label distributions, the figures show that the observed agreement seem to be roughly within the same range with agreement numbers seen in related works.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 994, |
|
"end": 1013, |
|
"text": "Table 4 and Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1315, |
|
"end": 1322, |
|
"text": "figures", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Quality", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In addition to agreement accuracy, we calculate two versions of Cohen's kappa, a metric for interannotator agreement taking into account the possibility of agreement occurring by chance. First we measure kappa agreement of all individual annotations against the gold standard, an approach typical in paraphrase literature. This kappa is 0.62, indicating substantial agreement. Additionally, we measure the Cohen's kappa between each pair of annotators. The weighted average kappa over all annotator pairs is 0.41 indicating moderate agree- ment. Both are measured on full labels. When evaluating only on base labels and directional subsumption flags, these kappa scores are 0.65 and 0.45, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Quality", |
|
"sec_num": "5.1" |
|
}, |
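{

"text": "For illustration, the agreement figures of this section can be computed with standard tooling; the snippet below is a small sketch (assuming parallel lists of label strings such as '4', '4>' or '4 i'), not the exact evaluation script, using scikit-learn's accuracy_score and cohen_kappa_score:\n\nfrom sklearn.metrics import accuracy_score, cohen_kappa_score\n\ndef base_label(label):\n    # reduce a full label such as '4< s' to the base label and subsumption flag only\n    return ' '.join(part for part in label.split() if part not in ('s', 'i'))\n\ndef agreement(annotator_labels, gold_labels, full=True):\n    a = annotator_labels if full else [base_label(x) for x in annotator_labels]\n    g = gold_labels if full else [base_label(x) for x in gold_labels]\n    return accuracy_score(g, a), cohen_kappa_score(g, a)\n\n# e.g. acc, kappa = agreement(individual_annotations, consensus_gold, full=False)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Annotation Quality",

"sec_num": "5.1"

},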
|
{ |
|
"text": "We compare the distribution of paraphrase lengths and lexical similarity with the two Finnish paraphrase candidate corpora, Opusparcus and TaPaCo, as the reference. Direct comparison is complicated by several factors. Firstly, both Opusparcus and TaPaCo consist primarily of automatically extracted paraphrase candidates, Opusparcus having only small manually curated development and test sections, and TaPaCo being fully uncurated. Secondly, the small manually annotated sections of Opusparcus are sampled to emphasize lexically dissimilar pairs, and therefore not representative of the characteristics of the whole corpus. We therefore compare with the fully automatically extracted sections of both Opusparcus and TaPaCo. For our corpus, we discard the small proportion of examples of base labels 1 and 2, i.e. not paraphrases. Another important factor to consider is that the proportion of false candidates in the automatically extracted sections of Opusparcus and TaPaCo is unknown, further decreasing comparability: the characteristics of false and true candidates may differ substantially, false candidates for example likely being on average more dissimilar in terms of lexical overlap than true candidates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For each corpus, we sample 12,000 paraphrase pairs. For our corpus, we selected a random sample of true paraphrases (label 3 or higher) from the train section. For TaPaCo, the sample covers all paraphrase candidates from the corpus, however with the restriction of taking only one, random pair from each 'set' of paraphrases. For Opusparcus, which is sorted by a confidence score in descending order, the sample was selected to contain the most confident 12K paraphrase candidates. 5 In Figure 3 the length distribution of paraphrases in terms of tokens is measured for the abovementioned samples. Although the majority of paraphrases are rather short in all three corpora, we see that our corpus includes a considerably higher proportion of longer paraphrases. The average number of tokens in our corpus is 8.3 tokens per paraphrase, while it is 5.6 in TaPaCo and 3.6 in Opusparcus candidates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 482, |
|
"end": 483, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 487, |
|
"end": 495, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In Figure 4 the paraphrase pair cosine similarity distribution is measured using TF-IDF weighted character n-grams of length 2-4. While both TaPaCo and Opusparcus lean towards higher similarity candidates, the distribution of our corpus is more balanced including a considerably higher proportion of pairs with low lexical similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In order to establish a baseline classification performance on the new dataset, we train a classifier based on the FinBERT model (Virtanen et al., 2019) . Each paraphrase pair of statements A and B is encoded as the sequence [CLS] A [SEP] B [SEP] , where [CLS] and [SEP] are the special marker tokens of the BERT model. Subsequently, the output embeddings of the three special tokens are concatenated together with the averaged embeddings of the tokens in A and B. These five concatenated embeddings are then propagated into four decision layers: one for the base label 2/3/4, one for the subsumption flag </>/none, and one for each the binary flag s and i. Since the flags only apply to base label 4, no gradients are applied to these layers for examples with base labels 2 and 3. We have explored also other BERT-based architectures, such as basing the classification on the [CLS] embedding only as is customary, and having a single classification layer comprising all possible base label and flag combinations. These resulted in a consistent drop in prediction accuracy, and we did not pursue them any further.", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 152, |
|
"text": "(Virtanen et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 246, |
|
"text": "[SEP]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase Classification Baseline", |
|
"sec_num": "6" |
|
}, |
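{

"text": "A simplified sketch of this architecture in PyTorch with the Hugging Face transformers library; this is an approximation for illustration (the FinBERT checkpoint name, the handling of the flag losses and all hyperparameters are assumptions), not the exact training code:\n\nimport torch\nimport torch.nn as nn\nfrom transformers import AutoModel\n\nclass ParaphraseClassifier(nn.Module):\n    def __init__(self, model_name='TurkuNLP/bert-base-finnish-cased-v1'):\n        super().__init__()\n        self.bert = AutoModel.from_pretrained(model_name)\n        h = self.bert.config.hidden_size\n        # four decision heads: base label (2/3/4), subsumption (<, >, none), flags s and i\n        self.base_head = nn.Linear(5 * h, 3)\n        self.sub_head = nn.Linear(5 * h, 3)\n        self.s_head = nn.Linear(5 * h, 2)\n        self.i_head = nn.Linear(5 * h, 2)\n\n    def forward(self, input_ids, attention_mask, sep_positions, a_mask, b_mask):\n        # sep_positions: (batch, 2) indices of the two [SEP] tokens\n        # a_mask / b_mask: (batch, seq) 0/1 float masks over the tokens of statements A and B\n        out = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state\n        rows = torch.arange(out.size(0))\n        cls, sep1, sep2 = out[:, 0], out[rows, sep_positions[:, 0]], out[rows, sep_positions[:, 1]]\n        a_avg = (out * a_mask.unsqueeze(-1)).sum(1) / a_mask.sum(1, keepdim=True)\n        b_avg = (out * b_mask.unsqueeze(-1)).sum(1) / b_mask.sum(1, keepdim=True)\n        feats = torch.cat([cls, sep1, sep2, a_avg, b_avg], dim=-1)\n        # during training, the losses of the three flag heads are masked out for examples whose base label is not 4\n        return self.base_head(feats), self.sub_head(feats), self.s_head(feats), self.i_head(feats)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Paraphrase Classification Baseline",

"sec_num": "6"

},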
|
{ |
|
"text": "The baseline results are listed in Table 4 showing that per-class F-score ranges between 38-71%, strongly correlated with the number of examples available for each class. When interpreting the task as a pure multi-class classification, i.e. when counting all possible combinations of base label and flags as their own class, the accuracy is 54% with majority baseline being 34.3%, and the annotators' accuracy 68.7%. The model thus positions roughly to the mid-point between the trivial majority baseline, and human performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 42, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Paraphrase Classification Baseline", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this work, we set out to build a paraphrase corpus for Finnish that would be (a) in the size category allowing deep model fine-tuning and (b) manually gathered maximizing the chance of finding more non-trivial, longer paraphrases than would be possible with the traditional automatic candidate extraction. The annotation so far took 14 person-months and resulted in little over 50,000 manually classified paraphrases. We have demon- strated that, indeed, the corpus has longer, more lexically dissimilar paraphrases. Building such a corpus is therefore shown feasible and presently it is likely the largest manually annotated paraphrase dataset for any language, naturally at the inevitably higher data collection cost. The manual selection is only feasible for texts rich in paraphrase, and the domains and genres covered by the corpus is necessarily restricted by this condition. In our future work, we intend to extend the manually annotated corpus, ideally roughly double its present size. We expect the pursued data size will allow us to build sufficiently accurate models, both in terms of embedding and pair classification, to gather further candidates automatically at a level of accuracy sufficient to support down-stream applications. We are also investigating further text sources, especially parallel translations outside of the present subtitle domain. The additional flags in our annotation scheme, as well as the nearly 10,000 rewrites allow for interesting further investigations in their own right.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "While in the current study we concentrated on training a classifier for categorizing the paraphrases into fine-grained sub-categories, where only 2% of the paraphrases in the current release belonged to related but not a paraphrase category (label 2), which can be seen as a negative class in the more traditional paraphrase or not a paraphrase classification task. In order to better account for this traditional classification task, in future work, in addition to extending the number of positive examples, we will also look into methods for expanding the training section with negative examples. While extending the data with unrelated paraphrase candidates (label 1) can be considered a trivial task, as more or less any random sentence pair can be considered unrelated, the task of expanding the data with interesting related but not a paraphrase candidates (label 2) is an intriguing question. One option to consider in future work is active learning, where the confidence scores provided by the initial classifier could be used to collect difficult negatives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In this paper we presented the first entirely manually annotated paraphrase corpus for Finnish including 45,663 naturally occurring paraphrases gathered from alternative movie or TV episode subtitles and news headings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Further 7,909 hand-made rewrites are provided, turning contextdependent paraphrases into perfect paraphrases whenever possible. The total size of the released corpus is 53,572 paraphrase pairs of which 98% are manually classified to be at least paraphrases in their given context if not in all contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Additionally, we evaluated the advantages and costs of manual paraphrase candidate selection from two 'parallel' but monolingual documents. We demonstrated the approach on alternative subtitles showing the technique being feasible for high quality candidate selection yielding sufficient amount of paraphrase candidates for the given annotation effort. We have shown the candidates to be notably longer and less lexically overlapping than what automated candidate selection permits.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The corpus is available at github.com/ TurkuNLP/Turku-paraphrase-corpus under the CC-BY-SA license.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "https://tatoeba.org/eng/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.opensubtitles.org", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "During the initial annotator training double annotation was used extensively; this annotator training data is not included in the released corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When we repeated the length analysis with a sample of 480K most confident pairs, the length distribution and average length remained largely unchanged, while the similarity distribution became close to flat. Without manual annotation, it is hard to tell the reason for this behavior.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We gratefully acknowledge the support of European Language Grid which funded the annotation work. Computational resources were provided by CSC -the Finnish IT Center for Science and the research was supported by the Academy of Finland. We also thank Sampo Pyysalo for fruitful discussions and feedback throughout the project, and J\u00f6rg Tiedemann for his generous assistance with the OpenSubtitles data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A English Translation of ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Evaluation of state-of-the-art paraphrase identification and its application to automatic plagiarism detection", |
|
"authors": [], |
|
"year": 2019, |
|
"venue": "International Journal of Pattern Recognition and Artificial Intelligence", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alaa Saleh Altheneyan and Mohamed El Bachir Menai. 2019. Evaluation of state-of-the-art paraphrase identification and its application to automatic pla- giarism detection. International Journal of Pattern Recognition and Artificial Intelligence, 34.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Squibs: What is a paraphrase?", |
|
"authors": [ |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Bhagat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Linguistics", |
|
"volume": "39", |
|
"issue": "3", |
|
"pages": "463--472", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rahul Bhagat and Eduard Hovy. 2013. Squibs: What is a paraphrase? Computational Linguistics, 39(3):463-472.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Open Subtitles paraphrase corpus for six languages", |
|
"authors": [ |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Creutz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mathias Creutz. 2018. Open Subtitles paraphrase corpus for six languages. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Automatically constructing a corpus of sentential paraphrases", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "William", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Third International Workshop on Paraphrasing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William B. Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP 2005).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "ParaSCI: A large scientific paraphrase dataset for longer paraphrase generation", |
|
"authors": [ |
|
{ |
|
"first": "Qingxiu", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2101.08382" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qingxiu Dong, Xiaojun Wan, and Yue Cao. 2021. ParaSCI: A large scientific paraphrase dataset for longer paraphrase generation. arXiv preprint arXiv:2101.08382.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "PARADE: A new dataset for paraphrase identification requiring computer science domain knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Yun", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhuoer", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruihong", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Caverlee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7572--7582", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yun He, Zhuoer Wang, Yin Zhang, Ruihong Huang, and James Caverlee. 2020. PARADE: A new dataset for paraphrase identification requiring computer sci- ence domain knowledge. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 7572-7582.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A continuously growing dataset of sentential paraphrases", |
|
"authors": [ |
|
{ |
|
"first": "Wuwei", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siyu", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1224--1234", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential para- phrases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1224-1234.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Generating phrasal and sentential paraphrases: A survey of data-driven methods", |
|
"authors": [ |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Madnani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bonnie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Computational Linguistics", |
|
"volume": "36", |
|
"issue": "3", |
|
"pages": "341--387", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitin Madnani and Bonnie J. Dorr. 2010. Generat- ing phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics, 36(3):341-387.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Improving statistical machine translation with a multilingual paraphrase database", |
|
"authors": [ |
|
{ |
|
"first": "Maryam", |
|
"middle": [], |
|
"last": "Ramtin Mehdizadeh Seraj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Siahbani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1379--1390", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramtin Mehdizadeh Seraj, Maryam Siahbani, and Anoop Sarkar. 2015. Improving statistical machine translation with a multilingual paraphrase database. In Proceedings of the 2015 Conference on Em- pirical Methods in Natural Language Processing (EMNLP), pages 1379-1390.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "TaPaCo: A corpus of sentential paraphrases for 73 languages", |
|
"authors": [ |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Scherrer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference (LREC 2020)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6868--6873", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yves Scherrer. 2020. TaPaCo: A corpus of sentential paraphrases for 73 languages. In Proceedings of The 12th Language Resources and Evaluation Confer- ence (LREC 2020), pages 6868-6873.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A paraphrase generation system for EHR question answering", |
|
"authors": [ |
|
{ |
|
"first": "Sarvesh", |
|
"middle": [], |
|
"last": "Soni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kirk", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "20--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarvesh Soni and Kirk Roberts. 2019. A paraphrase generation system for EHR question answering. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 20-29.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Parallel data, tools and interfaces in OPUS", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2214--2218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in OPUS. In Proceedings of the Eighth In- ternational Conference on Language Resources and Evaluation (LREC 2012), pages 2214-2218.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish", |
|
"authors": [ |
|
{ |
|
"first": "Antti", |
|
"middle": [], |
|
"last": "Virtanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenna", |
|
"middle": [], |
|
"last": "Kanerva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rami", |
|
"middle": [], |
|
"last": "Ilo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jouni", |
|
"middle": [], |
|
"last": "Luoma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juhani", |
|
"middle": [], |
|
"last": "Luotolahti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1912.07076" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Lu- oma, Juhani Luotolahti, Tapio Salakoski, Filip Gin- ter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish. arXiv preprint arXiv:1912.07076.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Labels distribution in our corpus excluding 7,909 rewrites which can be added up with label 4.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Density of different labels in the training set conditioned on cosine similarity of the paraphrase pairs.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Comparison of paraphrase length distributions in terms of tokens per paraphrase.", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Comparison of paraphrase pair cosine similarity distributions.", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"text": "Example paraphrase pairs annotated with different labels and flags (English translations available in Appendix A).", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"text": "", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"text": "Classification performance on the test set, when the base label and the flags are predicted separately. In the upper section, we merge the subsumption flags with the base class prediction, but leave the i and s separated. The rows W. avg and Acc on the other hand refer to performance on the complete labels, comprising all allowed combinations of base label and flags. W. avg is the average of P/R/F values across the classes, weighted by class support. Acc is the accuracy.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Label</td><td colspan=\"2\">Prec Rec F-score Support</td></tr><tr><td>2</td><td>50.9 31.2 38.7</td><td>93</td></tr><tr><td>3</td><td>57.7 31.9 41.1</td><td>990</td></tr><tr><td>4</td><td>66.2 78.2 71.7</td><td>2149</td></tr><tr><td>4<</td><td>52.8 53.5 53.2</td><td>1007</td></tr><tr><td>4></td><td>52.6 56.1 54.3</td><td>1136</td></tr><tr><td>i</td><td>51.5 36.5 42.7</td><td>329</td></tr><tr><td>s</td><td>51.4 28.9 37.0</td><td>249</td></tr><tr><td colspan=\"2\">W. avg 52.9 54.0 52.2</td><td/></tr><tr><td>Acc</td><td>54.0</td><td/></tr><tr><td>Table 4:</td><td/><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |