|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:06:43.682988Z" |
|
}, |
|
"title": "Automatically Ranked Russian Paraphrase Corpus for Text Generation", |
|
"authors": [ |
|
{ |
|
"first": "Vadim", |
|
"middle": [], |
|
"last": "Gudkov", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saint Petersburg State University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Mitrofanova", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saint Petersburg State University", |
|
"location": {} |
|
}, |
|
"email": "o.mitrofanova@spbu.ru" |
|
}, |
|
{ |
|
"first": "Elizaveta", |
|
"middle": [], |
|
"last": "Filippskikh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saint Petersburg State University", |
|
"location": {} |
|
}, |
|
"email": "efilippskikh@crafttalk.ru" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The article is focused on automatic development and ranking of a large corpus for Russian paraphrase generation which proves to be the first corpus of such type in Russian computational linguistics. Existing manually annotated paraphrase datasets for Russian are limited to small-sized ParaPhraser corpus and ParaPlag which are suitable for a set of NLP tasks, such as paraphrase and plagiarism detection, sentence similarity and relatedness estimation, etc. Due to size restrictions, these datasets can hardly be applied in end-to-end text generation solutions. Meanwhile, paraphrase generation requires a large amount of training data. In our study we propose a solution to the problem: we collect, rank and evaluate a new publicly available headline paraphrase corpus (ParaPhraser Plus), and then perform text generation experiments with manual evaluation on automatically ranked corpora using the Universal Transformer architecture.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The article is focused on automatic development and ranking of a large corpus for Russian paraphrase generation which proves to be the first corpus of such type in Russian computational linguistics. Existing manually annotated paraphrase datasets for Russian are limited to small-sized ParaPhraser corpus and ParaPlag which are suitable for a set of NLP tasks, such as paraphrase and plagiarism detection, sentence similarity and relatedness estimation, etc. Due to size restrictions, these datasets can hardly be applied in end-to-end text generation solutions. Meanwhile, paraphrase generation requires a large amount of training data. In our study we propose a solution to the problem: we collect, rank and evaluate a new publicly available headline paraphrase corpus (ParaPhraser Plus), and then perform text generation experiments with manual evaluation on automatically ranked corpora using the Universal Transformer architecture.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A large amount of work is dedicated for a clear understanding of the nature of a paraphrase. On the one hand, traditional theories of language allow to trace the notion of paraphrase back to the ancient rhetorical tradition (cf. Greek \u03c0\u03b1\u03c1\u03ac\u03c6\u03c1\u03b1\u03c3\u03b9\u03c2 'retelling') and treat it quite broadly in case of different types of prose, verse, musical pieces, etc. On the other hand, the generative trend in linguistic research encouraged description of transformations involved in the transition from deep to surface structures and at the same time responsible for the emergence of a wide range of paraphrases, cf. Chomskian generative grammar giving account of various lexical transformations, Melchuk's Sense-Text theory postulating the process of paraphrasing as synonymic conversion, etc. In recent works paraphrases are treated as \"alternative expressions of the same (or similar) meaning\" (Agirre et al., 2015) . Ranking paraphrases as regards their similarity in form and meaning is reflected in a set of paraphrase classifications, where precise paraphrases are distinguished from quasi-paraphrases and non-paraphrases (Andrew and Gao, 2007) . At the same time, paraphrase corpora development required deep analysis of paraphrase transformations types (e.g. morphosyntactic, lexical and semantic shifts).", |
|
"cite_spans": [ |
|
{ |
|
"start": 882, |
|
"end": 903, |
|
"text": "(Agirre et al., 2015)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1114, |
|
"end": 1136, |
|
"text": "(Andrew and Gao, 2007)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Paraphrasing plays an important role in a broad range of NLP tasks, including but not limited to question answering, summarization, information retrieval, sentence simplification, machine translation and dialogue systems. However, in order to be able to train a good paraphrasing system, large parallel corpora are required, which can be a problem in underdeveloped languages from a data resources standpoint. In order to bridge this gap, we propose a methodology to collect enough data for proper deep learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Paraphrase identification inspired a set of NLP competitions within SemEval conferences in 2012, 2013, 2015 and 2016, so that baseline decisions and their improvements were worked out for English. There also exist several well-known manually annotated paraphrase datasets for English: Microsoft Paraphrase (Dolan and Brockett, 2005), Quora Question Pairs and MS COCO (Lin et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 385, |
|
"text": "(Lin et al., 2014)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, Russian is less represented in paraphrase research both in case of resource development and algorithm evaluation, a few exceptions being AINL Paraphrase detection competition in 2016 based on Paraphraser corpus and Dialogue Paraphrased plagiarism detection competition in 2017 based on ParaPlag corpus. Alongside with Paraphraser and ParaPlag, there are some para-phrase resources which include Russian language, for instance by Opusparcus (Creutz, 2018) and PPDB (Ganitkevitch et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 449, |
|
"end": 463, |
|
"text": "(Creutz, 2018)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 500, |
|
"text": "(Ganitkevitch et al., 2013)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In our study we mainly focus on the collection, evaluation and generation of the, so called, sentential paraphrases. This approach is different from the collection of PPDB, where sub-sentential paraphrases, such as individual word-pairs, were also included and ParaPlag with main focus on textlevel rephrasing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recent work (Gupta et al., 2018; Fu et al., 2019; Egonmwan and Chali, 2019) provides solid evidence in favour of paraphrase generation by means of seq-2-seq architectures. The main problem, however, is that such systems require significant expansion of existing datasets for proper machine learning (Roy and Grangier, 2019) . The lack of data still remains the greatest obstacle to the development of a stable generation system which could be lexically rich and insensitive to rare words. E.g., the largest datasets supplied with proper annotation seldom exceed 100K samples in size. The authors of the aforementioned articles claim that any user generated content is valuable even though noisy to a certain extent. We propose a solution which overcomes the given problem, and it is based on the denoising procedure which has recently attracted growing attention. We argue that automatically matched and ranked datasets can be used for paraphrase generation task, especially in low-resource languages, by providing experimental results obtained on the Russian Opusparcus subcorpus and on the novel ParaPhraser Plus corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 32, |
|
"text": "(Gupta et al., 2018;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 33, |
|
"end": 49, |
|
"text": "Fu et al., 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 50, |
|
"end": 75, |
|
"text": "Egonmwan and Chali, 2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 323, |
|
"text": "(Roy and Grangier, 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The ParaPhraser Plus corpus 1 is distilled from a database of news headlines, that was kindly provided by the Russian Internet monitoring service, \"Webground\". Although, the contents of the resources are pretty similar, the data itself in the original ParaPhraser corpus and the ParaPhraser Plus corpus as well as the methodology used to collect the headlines are not the same by any means. It is important to note, however, that ranking model which will be described in the corresponding section was based on the original corpus. The headlines in \"Webground\" were initially clustered by events over a ten year span, beginning from the year 2009. Following the hypothesis that within such theme-based user-generated clusters the chance of seeing a paraphrase is particularly high, we formed sets of pairs of all possible combinations within each of them. After weeding out pairs, consisting of the same tokens, we were left with just over 56 million pairs of potential paraphrases. We have also discarded over 200 thousand headlines where it was not possible to verify the authorship.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source Data", |
|
"sec_num": "3" |
|
}, |
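{
"text": "To make the pair-formation step concrete, here is a minimal Python sketch (our illustration, not code from the paper; names such as candidate_pairs are ours): within each event cluster every combination of headlines becomes a candidate pair, and pairs whose token sets coincide are discarded.\n\nfrom itertools import combinations\n\ndef candidate_pairs(clusters):\n    # clusters: iterable of lists of headline strings, one list per event.\n    for cluster in clusters:\n        for a, b in combinations(cluster, 2):\n            # Drop pairs consisting of the same tokens.\n            if set(a.lower().split()) != set(b.lower().split()):\n                yield a, b\n\nclusters = [[\"Central Bank revokes license\", \"License revoked by Central Bank\"]]\nprint(list(candidate_pairs(clusters)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source Data",
"sec_num": "3"
},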
|
{ |
|
"text": "There are several known approaches to paraphrase ranking, including heuristic scoring (Pavlick et al., 2015) and supervised modelling (Creutz, 2018) . Heuristic scoring can be effectively conducted in resources with cross-linguistic support, such as PPDB and Opuspracus. However, ParaPhraser, as well as our addition, is monolingual, therefore this approach was not possible. On the other hand, supervised modelling techniques can be adopted: there is a significant amount of labeled data in the original ParaPhraser corpus and several approaches to paraphrase identification in Russian headlines have been thoroughly researched and summarized in (Pivovarova et al., 2017) . The methods included shallow neural networks, linguistic features based classifier and a combination of machine translation with semantic similarity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 108, |
|
"text": "(Pavlick et al., 2015)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 148, |
|
"text": "(Creutz, 2018)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 672, |
|
"text": "(Pivovarova et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking methodology", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "However, recent research conducted in (Kuratov and Arkhipov, 2019) shows that deep bidirectional pretrained monolingual transformers improve paraphrase detection in Russian by a large margin. It was shown that finetuning a monolingual BERT based model (RuBERT) on the ParaPhraser corpus yields results far better than all of the aforementioned approaches (see Table 1 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 66, |
|
"text": "(Kuratov and Arkhipov, 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 360, |
|
"end": 367, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ranking methodology", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The training set in ParaPhraser includes 7,227 pairs of sentences, which are classified by humans into three classes: 2,582 non-paraphrases, 2,957 near-paraphrases,and 1,688 precise-paraphrases. The aforementioned RuBERT model was finetuned to a binary classification task: both nearparaphrases and paraphrases were considered to be a single class. Such approach helps in automatic ranking: it is possible to sort the items in accordance to the probability of the paraphrase class in descending order. The fine-tuned RuBERT model is available as part of the DeepPavlov library (Burtsev et al., 2018), which enabled us to adopt this approach in our corpus construction study.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking methodology", |
|
"sec_num": "4" |
|
}, |
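{
"text": "A minimal sketch of this ranking step (our illustration; score_paraphrase is a stand-in for the fine-tuned RuBERT classifier and is assumed to return the probability of the paraphrase class for a sentence pair):\n\ndef rank_pairs(pairs, score_paraphrase):\n    # Attach P(paraphrase) to every candidate pair.\n    scored = [(score_paraphrase(a, b), a, b) for a, b in pairs]\n    # Sort by the probability of the paraphrase class in descending order.\n    scored.sort(key=lambda t: t[0], reverse=True)\n    return scored\n\nTop-N slices of the resulting list then serve as training sets of different sizes and noise levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking methodology",
"sec_num": "4"
},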
|
{
"text": "Table 1: Paraphrase detection algorithms evaluation.\nModel | F1 | Accuracy\nShallow Neural Networks (Pivovarova et al., 2017) | 79.82 | 76.65\nLinguistic Features Classifier (Pivovarova et al., 2017) | 81.10 | 77.39\nMachine Translation Based Semantic Similarity (Kravchenko, 2018) | 78.51 | 81.41\nRuBERT (Kuratov and Arkhipov, 2019) | 87.73 | 84.99",
"cite_spans": [
{
"start": 99,
"end": 124,
"text": "(Pivovarova et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 172,
"end": 197,
"text": "(Pivovarova et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 260,
"end": 278,
"text": "(Kravchenko, 2018)",
"ref_id": "BIBREF11"
},
{
"start": 302,
"end": 330,
"text": "(Kuratov and Arkhipov, 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ranking methodology",
"sec_num": null
},
|
{ |
|
"text": "In order to evaluate our supervised automatic ranking approach we randomly select a subsample of 500 pairs for manual annotation. To provide a more thorough comparison analysis we step aside from the original 3-way annotation scheme utilized in ParaPhraser and adopt the approach provided in (Creutz, 2018) with more similarity degrees. The annotation scheme from the original paper is provided in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 292, |
|
"end": 306, |
|
"text": "(Creutz, 2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 398, |
|
"end": 405, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ranking evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To measure The inter-annotator agreement we use Fleiss Kappa, which is a Cohen's Kappa generalization to more than two annotators (in our case -5); expected agreement is calculated on the basis of the assumption that random assignment of categories to items, by any annotator, is governed by the distribution of items among categories in the actual world. The annotators reach a fair agreement (Kappa 0.267, p-value < 0.05).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking evaluation", |
|
"sec_num": "5" |
|
}, |
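{
"text": "For reference, a minimal NumPy sketch of Fleiss' Kappa under this definition of expected agreement (our illustration; ratings is an items-by-categories matrix of label counts, and every item is assumed to be rated by the same number of annotators):\n\nimport numpy as np\n\ndef fleiss_kappa(ratings):\n    ratings = np.asarray(ratings, dtype=float)\n    n = ratings.sum(axis=1)[0]  # annotators per item (5 in our case)\n    p_cat = ratings.sum(axis=0) / ratings.sum()  # category proportions\n    p_item = ((ratings ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement\n    p_bar, p_e = p_item.mean(), (p_cat ** 2).sum()  # observed vs. expected agreement\n    return (p_bar - p_e) / (1 - p_e)\n\n# Example: 3 items rated by 5 annotators into 4 categories.\nprint(fleiss_kappa([[3, 2, 0, 0], [0, 5, 0, 0], [1, 1, 1, 2]]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking evaluation",
"sec_num": "5"
},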
|
{ |
|
"text": "The cosine similarity baseline solution of Word2Vec embeddings achieves a manual annotation Pearson's correlation coefficient of 0.535. Our supervised model rankings for ParaPhraser Plus dramatically improve correlation with human judgments (p = 0.734).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking evaluation", |
|
"sec_num": "5" |
|
}, |
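{
"text": "A minimal sketch of such a cosine baseline (our illustration; w2v is assumed to map tokens to vectors, e.g. a gensim KeyedVectors object, and a sentence is represented here by the average of its word vectors):\n\nimport numpy as np\n\ndef sentence_vector(tokens, w2v, dim=300):\n    vecs = [w2v[t] for t in tokens if t in w2v]\n    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)\n\ndef cosine_score(s1, s2, w2v):\n    a = sentence_vector(s1.split(), w2v)\n    b = sentence_vector(s2.split(), w2v)\n    denom = np.linalg.norm(a) * np.linalg.norm(b)\n    return float(a @ b / denom) if denom else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking evaluation",
"sec_num": "5"
},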
|
{ |
|
"text": "To test our initial hypothesis we conduct a paraphrase generation experiment on two datasets: Opusparcus and our ParaPhraser Plus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase generation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "There exist several methods to generate paraphrases. The following techniques are known: rulebased (McKeown, 1983) , Seq-2-Seq (Gupta et al., 2018; Fu et al., 2019; Egonmwan and Chali, 2019; Roy and Grangier, 2019) , reinforcement learning (Li et al., 2017) , deep generative models (Iyyer et al., 2018) and a varied combination (Gupta et al., 2018; Mallinson et al., 2017) of the later three.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 114, |
|
"text": "(McKeown, 1983)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 127, |
|
"end": 147, |
|
"text": "(Gupta et al., 2018;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 164, |
|
"text": "Fu et al., 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 190, |
|
"text": "Egonmwan and Chali, 2019;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 214, |
|
"text": "Roy and Grangier, 2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 240, |
|
"end": 257, |
|
"text": "(Li et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 303, |
|
"text": "(Iyyer et al., 2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 349, |
|
"text": "(Gupta et al., 2018;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 373, |
|
"text": "Mallinson et al., 2017)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase generation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We show the results that can be achieved on large automatically ranked corpora using a Sequence-to-Sequence model based on the Universal Transformer architecture as it has demonstrated superior performance over the past year in multiple generative tasks, such as abstractive summarization, machine translation and, of course, paraphrase generation. (Gupta et al., 2018; Mallinson et al., 2017; Gupta et al., 2018; Fu et al., 2019; Egonmwan and Chali, 2019; Roy and Grangier, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 369, |
|
"text": "(Gupta et al., 2018;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 393, |
|
"text": "Mallinson et al., 2017;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
|
{ |
|
"start": 414, |
|
"end": 430, |
|
"text": "Fu et al., 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 456, |
|
"text": "Egonmwan and Chali, 2019;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 457, |
|
"end": 480, |
|
"text": "Roy and Grangier, 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase generation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As pointed out in (Vaswani et al., 2017) , the attention heads in the transformer model can be found very useful in learning grammatical, syntactical, morphological and semantical behavior in the language, which is essential in paraphrase generation. Such results are being achieved thanks to the fact that input vectors are connected to every other via the attention mechanism, thus allowing the network to learn complex rephrasing dependencies. Moreover, contrary to recurrent neural networks, a transformer can be trained in parallel.", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 40, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase generation", |
|
"sec_num": "6" |
|
}, |
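{
"text": "A minimal NumPy sketch of the scaled dot-product attention at the core of this mechanism (our illustration, not the paper's code):\n\nimport numpy as np\n\ndef attention(Q, K, V):\n    # Q, K, V: (seq_len, d_k) matrices of queries, keys and values.\n    scores = Q @ K.T / np.sqrt(K.shape[-1])\n    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))\n    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys\n    # Every position mixes information from every other position.\n    return weights @ V\n\nx = np.random.randn(4, 8)  # 4 tokens, d_k = 8; self-attention uses Q = K = V = x\nprint(attention(x, x, x).shape)  # (4, 8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase generation",
"sec_num": "6"
},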
|
{ |
|
"text": "For both datasets, Opusparcus and ParaPhraser Plus, we used the same set of model hyperparameters: 4 layers in the encoder and decoder with 8 heads of attention. In addition, we added a Dropout of p = 0.3. The models were trained until convergence with the Adam optimizer using a scaled learning rate, as proposed by the authors of the original Transformer", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase generation", |
|
"sec_num": "6" |
|
}, |
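{
"text": "The scaled learning rate mentioned above follows the warm-up schedule from the original Transformer paper; a minimal sketch (our illustration; the d_model and warmup values are illustrative defaults, not our exact settings):\n\ndef noam_lr(step, d_model=512, warmup=4000):\n    # Linear warm-up, then decay with the inverse square root of the step.\n    step = max(step, 1)\n    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)\n\nSuch a function is typically plugged into Adam via a per-step learning-rate scheduler.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase generation",
"sec_num": "6"
},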
|
{ |
|
"text": "We also adopt byte-pair encoding (BPE), a data compression technique where often occuring pairs of bytes are replaced by additional extra-alphabet symbols. Thanks to this approach, the most frequent parts of words are kept in the vocabulary, while rarely occuring words are replaced by a sequence of several tokens. Languages with rich morphology benefit the most as the word endings could be separated since each word form is definitely less frequent than its stem. BPE encoding allows us to represent all words, including the ones unseen during training (e.g. first and last names, which are common in headlines), with a fixed vocabular.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase generation", |
|
"sec_num": "6" |
|
}, |
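{
"text": "A minimal sketch of BPE merge learning in the style of Sennrich et al.'s algorithm, which the description above follows (our illustration, not the exact tokenizer we used):\n\nimport re\nfrom collections import Counter\n\ndef learn_bpe(words, num_merges):\n    # words: dict mapping space-separated symbol sequences to frequencies.\n    merges = []\n    for _ in range(num_merges):\n        pairs = Counter()\n        for word, freq in words.items():\n            syms = word.split()\n            for pair in zip(syms, syms[1:]):\n                pairs[pair] += freq\n        if not pairs:\n            break\n        best = max(pairs, key=pairs.get)  # most frequent adjacent symbol pair\n        merges.append(best)\n        pattern = re.compile(r\"(?<!\\S)\" + re.escape(\" \".join(best)) + r\"(?!\\S)\")\n        words = {pattern.sub(\"\".join(best), w): f for w, f in words.items()}\n    return merges\n\nprint(learn_bpe({\"l o w </w>\": 5, \"l o w e r </w>\": 2}, 3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase generation",
"sec_num": "6"
},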
|
{
"text": "Table 2: Paraphrase annotation scheme as provided in (Creutz, 2018). A pair can also be ranked \"in-between\" categories (e.g. 2.5 or 3.5).\nCategory | Score | Description | Examples\nGood | 4 | The two sentences can be used in the same situation and essentially \"mean the same thing\". | It was a last minute thing <-> This wasn't planned; I have goose flesh <-> The hair's standing upon my arms\nMostly Good | 3 | It is acceptable to think that the two sentences refer to the same thing, although one sentence might be more specific than the other one, or there are differences in style. | Go to your bedroom <-> Just go to sleep; Next man, move it <-> Next, please; Calvin, now what? <-> What are we doing?\nMostly Bad | 2 | There is some connection between the sentences that explains why they occur together, but one would not really consider them to mean the same thing. | Did you ask him <-> Have you asked her?; Hello, operator? <-> Yes, operator, I'm trying to get to the police\nBad | 1 | There is no obvious connection. The sentences mean different things. | She's over there <-> Take me to him; All the cons <-> Nice and comfy",
"cite_spans": [
{
"start": 53,
"end": 67,
"text": "(Creutz, 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ranking evaluation",
"sec_num": null
},
|
{ |
|
"text": "We perform experiments on the above mentioned datasets, and report, both qualitative and quantitative results of our approach. As can be seen in Table 3 which demonstrates the quantitative results, there is a strong correlation between the size of the training set, selected from top N samples, and the final score of the model. We also perform a qualitative analysis by sampling 100 examples of the original phrase, reference and our 2m model generated phrase for human evaluation. We asked 3 annotators to choose their paraphrase preference over three possible options: original paraphrase (Human), generated paraphrase (Machine), no preference (Tie). The results can be seen in 23.9 14.5 Table 4 : Human evaluation of generated paraphrases.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 152, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 691, |
|
"end": 698, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For the both corpora, we could see that our model is not reaching human parity yet, having 47.7 and 38.4 of (Machine + Tie) user preference for Opusparcus and ParaPhraser datasets respectively. Some examples of the produced paraphrases can be seen below (translated into English):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 Original: \"State Duma may prohibit doctors and teachers from accepting gifts other than flowers\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Reference: \"Teachers and doctors in Russia may be prohibited from accepting gifts\" Generated: \"The State Duma proposed to ban doctors and teachers from accepting gifts\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 Original: \"The Bank of Russia revoked its license from the Yekaterinburg Plateau Bank\" Reference: \"Yekaterinburg Plateau Bank is left without its license\" Generated: \"Central Bank revoked the license from \"plateau-bank\"\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 Original: \"Stocks are ready to rise in the stock market.\" Reference: \"Stocks are going to rise on the market\" Generated: \"Stock market ready to go up\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Despite the fact that both of the training sets are noisy to a certain extent, the model is able to generalize and generate paraphrases of decent quality (from semantic and grammatical standpoint) for types of content it has never seen during the training phase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "This study confirms our initial hypothesis that data size restrictions can be effectively resolved with automatically ranked corpora, especially in lowresource languages where large manually annotated datasets are not available. We also present a newly gathered ParaPhraser Plus corpus and results achieved by a transformer model applied to it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In the future we would like to extend our work to other generative tasks and create more diverse and large ranked corpora utilizing different approaches for supervised ranking. In addition to that, we are interested in investigating how a combination of ranking techniques could be used for better data sampling in generation oriented tasks. Also we would like to investigate what is the minimal amount of manually annotated data that is sufficient for successful automatic ranking in parallel corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future work", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "Available at: http://paraphraser.ru/download/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability", |
|
"authors": [ |
|
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Larraitz",
"middle": [],
"last": "Uria",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "252--263", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S15-2045" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic tex- tual similarity, English, Spanish and pilot on inter- pretability. pages 252-263.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Scalable training of L1-regularized log-linear models", |
|
"authors": [ |
|
{ |
|
"first": "Galen", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Galen Andrew and Jianfeng Gao. 2007. Scalable train- ing of L1-regularized log-linear models. pages 33- 40.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Deeppavlov: Open-source library for dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Burtsev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Seliverstov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafael", |
|
"middle": [], |
|
"last": "Airapetyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Arkhipov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dilyara", |
|
"middle": [], |
|
"last": "Baymurzina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nickolay", |
|
"middle": [], |
|
"last": "Bushkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Gureenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taras", |
|
"middle": [], |
|
"last": "Khakhulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yurii", |
|
"middle": [], |
|
"last": "Kuratov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Kuznetsov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "122--127", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikhail Burtsev, Alexander Seliverstov, Rafael Airapetyan, Mikhail Arkhipov, Dilyara Baymurz- ina, Nickolay Bushkov, Olga Gureenkova, Taras Khakhulin, Yurii Kuratov, Denis Kuznetsov, et al. 2018. Deeppavlov: Open-source library for dialogue systems. pages 122-127.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Open subtitles paraphrase corpus for six languages", |
|
"authors": [ |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Creutz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1809.06142" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mathias Creutz. 2018. Open subtitles paraphrase corpus for six languages. arXiv preprint arXiv:1809.06142.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Automatically constructing a corpus of sentential paraphrases", |
|
"authors": [ |
|
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William B Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Transformer and seq2seq model for paraphrase generation", |
|
"authors": [ |
|
{ |
|
"first": "Elozino", |
|
"middle": [], |
|
"last": "Egonmwan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yllias", |
|
"middle": [], |
|
"last": "Chali", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "249--255", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-5627" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elozino Egonmwan and Yllias Chali. 2019. Trans- former and seq2seq model for paraphrase generation. pages 249-255.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Paraphrase generation with latent bag of words", |
|
"authors": [ |
|
{ |
|
"first": "Yao", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yansong", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John P", |
|
"middle": [], |
|
"last": "Cunningham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "13623--13634", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yao Fu, Yansong Feng, and John P Cunningham. 2019. Paraphrase generation with latent bag of words. pages 13623-13634.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "PPDB: The paraphrase database", |
|
"authors": [ |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "758--764", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. pages 758-764.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A deep generative framework for paraphrase generation", |
|
"authors": [ |
|
{ |
|
"first": "Ankush", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Agarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prawaan", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piyush", |
|
"middle": [], |
|
"last": "Rai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Adversarial example generation with syntactically controlled paraphrase networks", |
|
"authors": [ |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Wieting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.06059" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. arXiv preprint arXiv:1804.06059.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Paraphrase detection using machine translation and textual similarity algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Kravchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "277--292", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dmitry Kravchenko. 2018. Paraphrase detection us- ing machine translation and textual similarity algo- rithms. pages 277-292.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Adaptation of deep bidirectional multilingual transformers for russian language", |
|
"authors": [ |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Kuratov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Arkhipov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1905.07213" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuri Kuratov and Mikhail Arkhipov. 2019. Adaptation of deep bidirectional multilingual transformers for russian language. arXiv preprint arXiv:1905.07213.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Paraphrase generation with deep reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Zichao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Shang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.00279" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2017. Paraphrase generation with deep reinforce- ment learning. arXiv preprint arXiv:1711.00279.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Microsoft coco: Common objects in context", |
|
"authors": [ |
|
{ |
|
"first": "Tsung-Yi", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Maire", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "Belongie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Hays", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pietro", |
|
"middle": [], |
|
"last": "Perona", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deva", |
|
"middle": [], |
|
"last": "Ramanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Doll\u00e1r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C Lawrence", |
|
"middle": [], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "740--755", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. pages 740-755.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Paraphrasing revisited with neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Mallinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "881--893", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. pages 881-893.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Focus constraints on language generation", |
|
"authors": [ |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kathleen McKeown. 1983. Focus constraints on lan- guage generation.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Ppdb 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification", |
|
"authors": [ |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpendre", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "425--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. Ppdb 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification. pages 425-430.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Paraphraser: Russian paraphrase corpus and shared task", |
|
"authors": [ |
|
{ |
|
"first": "Lidia", |
|
"middle": [], |
|
"last": "Pivovarova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Pronoza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Yagunova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Pronoza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "211--225", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lidia Pivovarova, Ekaterina Pronoza, Elena Yagunova, and Anton Pronoza. 2017. Paraphraser: Russian paraphrase corpus and shared task. pages 211-225.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Unsupervised paraphrasing without translation", |
|
"authors": [ |
|
{ |
|
"first": "Aurko", |
|
"middle": [], |
|
"last": "Roy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6033--6039", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1605" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aurko Roy and David Grangier. 2019. Unsupervised paraphrasing without translation. pages 6033-6039.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. pages 5998-6008.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Generation scores on the test set of each dataset for different train sizes.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\">Human Tie Machine</td></tr><tr><td>Opusparcus</td><td>52.3</td><td>26.2 21.5</td></tr><tr><td colspan=\"2\">ParaPhraser Plus 60.6</td><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |