{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:05:31.593421Z" }, "title": "System Description for the CommonGen task with the POINTER model", "authors": [ { "first": "Anna", "middle": [], "last": "Shvets", "suffix": "", "affiliation": {}, "email": "anna.shvets@inetum.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In a current experiment we were testing Com-monGen dataset for structure-to-text task from GEM living benchmark with the constraint based POINTER model. POINTER represents a hybrid architecture, combining insertionbased and transformer paradigms, predicting the token and the insertion position at the same time. The text is therefore generated gradually in a parallel non-autoregressive manner, given the set of keywords. The pretrained model was fine-tuned on a training split of the Common-Gen dataset and the generation result was compared to the validation and challenge splits. 1 The received metrics outputs, which measure lexical equivalence, semantic similarity and diversity, are discussed in details in a present system description.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In a current experiment we were testing Com-monGen dataset for structure-to-text task from GEM living benchmark with the constraint based POINTER model. POINTER represents a hybrid architecture, combining insertionbased and transformer paradigms, predicting the token and the insertion position at the same time. The text is therefore generated gradually in a parallel non-autoregressive manner, given the set of keywords. The pretrained model was fine-tuned on a training split of the Common-Gen dataset and the generation result was compared to the validation and challenge splits. 1 The received metrics outputs, which measure lexical equivalence, semantic similarity and diversity, are discussed in details in a present system description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The 2021 edition of the Generation Evaluation and Metrics (GEM) challenge for the creation of living NLG benchmark leaderboard (Gehrmann et al., 2021) , comprised four groups of tasks -summarization, structure-to-text, simplification and dialog. The CommonGen dataset makes part of the structure-to-text group and was designed to measure a common sense reasoning capacities of generative models given a set of concepts (Lin et al., 2020) . Due to the nature of the constraint based text generation of the POINTER model (Zhang et al., 2020b) and resemblance in a generation strategy (the model takes a set of keywords as an input and generates a text, containing these keywords) the CommonGen dataset for hard constrained generation of the GEM benchmark appears to be a good fit for testing the model performance. The pretrained POINTER model was therefore fine-tuned on a training set of the CommonGen dataset and the inference results were compared to the validation and challenge splits of the same dataset. 
2", "cite_spans": [ { "start": 127, "end": 150, "text": "(Gehrmann et al., 2021)", "ref_id": null }, { "start": 419, "end": 437, "text": "(Lin et al., 2020)", "ref_id": "BIBREF9" }, { "start": 519, "end": 540, "text": "(Zhang et al., 2020b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Insertion-based transformer architecture leverage implies the use of the masking mechanism, the goal of which is to predict not only the likelihood of a token itself, but the likelihood of the token insertion between two given tokens, in other words, we need to predict the word and the place where a new word is inserted. In that regard, a text is preprocessed in a specific way, where the tokens are scored using a combination of three schemes of the token importance measurement (term frequency-inverse document frequency (TF-IDF), part-of-speech (POS) tagging and Yet-Another-Keyword-Extractor (YAKE)) and the highest scored tokens are replaced with a special noinsertion token [NOI] tag. This procedure is iterative and results in generation of several utterances out of the initial sentence. During the training phase, the model is initialised with the Multilingual BERT and its vobabulary is extended with the [NOI] tag. At the inference time, the masking mechanism is used in a reverse order, allowing an iterative tokens prediction -the model will chose to either generate a token or a [NOI] tag at a given generation stage and if the next stage contains [NOI] tag predictions only, the generation is finished.", "cite_spans": [ { "start": 686, "end": 691, "text": "[NOI]", "ref_id": null }, { "start": 1099, "end": 1104, "text": "[NOI]", "ref_id": null }, { "start": 1168, "end": 1173, "text": "[NOI]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data description and pre-processing", "sec_num": "2" }, { "text": "The model was pre-trained on 12GB of Wikipedia corpora, therefore the pre-training data consisted of a well written English with the correct spelling, grammar and punctuation. For the finetuning, the sentences from the training split were preprocessed with the pre-training data generation script, 3 which inserts the token position masks in a gradual manner, resulting in a data augmentation from 67.389 source entries to 160.680 processed 2 Available under the MIT license at https://github.com/dreasysnail/POINTER.", "cite_spans": [ { "start": 441, "end": 442, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data description and pre-processing", "sec_num": "2" }, { "text": "3 Available in the project repository cited earlier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data description and pre-processing", "sec_num": "2" }, { "text": "entries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data description and pre-processing", "sec_num": "2" }, { "text": "The fine-tuning was done on 8 cores (16GB of RAM each) of a TPU-v3 device, following the multiprocessing paradigm, and took three hours to train on 40 epochs with the batch size equal to 64 and gradient accumulation equal to 2. The finetuning hyperparameters were preserved from the original paper and included AdamW optimizer, learning rate equal to 1e-5, Adam epsilon equal to 1e-8, 10 warmup optimizer scheduler steps and the seed equal to 1. The inference of the finetuned model was done using the concept sets from the validation and challenge splits of the CommonGen dataset. 
The decoding strategy included two methods, applied separately: greedy decoding and sampling. Greedy decoding relies on a greedy search algorithm, which chooses the highest-scoring token at a given time step, combined with a temperature parameter (Ackley et al., 1985) , while sampling uses a combination of the top-k (Fan et al., 2018) , top-p (Holtzman et al., 2020) and temperature parameters to render the model predictions.", "cite_spans": [ { "start": 837, "end": 858, "text": "(Ackley et al., 1985)", "ref_id": "BIBREF0" }, { "start": 904, "end": 922, "text": "(Fan et al., 2018)", "ref_id": "BIBREF3" }, { "start": 931, "end": 954, "text": "(Holtzman et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Training details and decoding strategy", "sec_num": "3" }, { "text": "For the greedy decoding method, the temperature, which scales each token's logit before it passes through the softmax function, was set to a low value of 0.3, ensuring the most stable generations. This parameter alone limits the model's creativity, resulting in more rigid generations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training details and decoding strategy", "sec_num": "3" }, { "text": "For the sampling decoding method, parameters promoting high creativity of the model were chosen: the top-k window of the most probable tokens was set to 10, following the strategy of the original paper (Fan et al., 2018) , the top-p cumulative probability threshold for the most probable tokens was set to the highest value tested in the original paper, 0.95 (Holtzman et al., 2020) , and the temperature was set to 0.9, the highest value for this parameter that allows maximum token pass-through without giving up the stability of the text generation.", "cite_spans": [ { "start": 218, "end": 236, "text": "(Fan et al., 2018)", "ref_id": "BIBREF3" }, { "start": 385, "end": 408, "text": "(Holtzman et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Training details and decoding strategy", "sec_num": "3" }, { "text": "Other parameters were common to both decoding methods and included noi decay and reduce decay, which were set to 1, and prevent, reduce stop and lessrepeat, which were set to true. The inference for both decoding methods was done with the maximum sequence length equal to 256.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training details and decoding strategy", "sec_num": "3" }, { "text": "Table 1 examples for the concept set 'ball court run throw' (validation split). greedy: Olympic athlete then brings in the tennis ball straight back up down on the tennis court. sampling: Olympic athlete quickly moves toward the soccer ball about halfway way up on the clay court. target: The boy must run from one end of the court to the other to throw the ball into the hoop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training details and decoding strategy", "sec_num": "3" }, { "text": "The opposite sets of parameters (rigid versus creative) were intended to explore the model's generative performance at its extremes. 
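The two decoding set-ups can be illustrated with a generic single-token selection sketch (assuming PyTorch; this is not the POINTER inference code, which predicts insertions for many positions in parallel):

```python
# Generic illustration of greedy decoding with temperature and top-k / top-p sampling.
import torch
import torch.nn.functional as F

def sample_next_token(logits, greedy=False, temperature=0.9, top_k=10, top_p=0.95):
    logits = logits / temperature                        # temperature rescales the logits
    if greedy:
        return int(torch.argmax(logits))                 # greedy: pick the most probable token
    # top-k: keep only the k most probable tokens
    kth = torch.topk(logits, min(top_k, logits.size(-1))).values[-1]
    logits[logits < kth] = float("-inf")
    # top-p (nucleus): keep the smallest set of tokens whose cumulative probability reaches p
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    cum_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
    remove = cum_probs > top_p
    remove[1:] = remove[:-1].clone()                     # shift so the first token above p is kept
    remove[0] = False
    logits[sorted_idx[remove]] = float("-inf")
    return int(torch.multinomial(F.softmax(logits, dim=-1), 1))

logits = torch.randn(32000)                              # fake vocabulary-sized logits
print(sample_next_token(logits.clone(), greedy=True, temperature=0.3))
print(sample_next_token(logits.clone(), temperature=0.9, top_k=10, top_p=0.95))
```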
This leads to metric measurements for both decoding strategies on the validation and challenge splits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training details and decoding strategy", "sec_num": "3" }, { "text": "Before diving into the metric outputs, let us explore a few examples of the generated text. 4 Table 1 shows examples of generation using the greedy and sampling decoding methods for the validation split, compared to the human-written target from the CommonGen dataset. To measure the metrics fairly, the number of entries in the validation split was truncated to 500 in order to match the number of entries in the challenge set.", "cite_spans": [ { "start": 98, "end": 99, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Metrics outputs", "sec_num": "4" }, { "text": "Since the goal of the GEM challenge is an in-depth analysis of model performance regarding lexical equivalence, semantic similarity and language richness, we divide the analysis into separate subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics outputs", "sec_num": "4" }, { "text": "Lexical equivalence was measured with four n-gram-based automated metrics and is reflected in two tables: Table 2 and Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 129, "text": "Table 2 and Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Lexical equivalence", "sec_num": "4.1" }, { "text": "The Recall-Oriented Understudy for Gisting Evaluation (ROUGE), which relies on counting the matching n-grams in the candidate and reference texts, is a metric initially designed for evaluating summaries (Lin, 2004) , which is nowadays widely used for many other tasks in natural language processing and generation. The ROUGE-1 (R1) and ROUGE-2 (R2) scores in Table 2 reflect the co-occurrence of unigrams and bigrams in the generated text versus the validation or challenge splits of the CommonGen dataset. ROUGE-L (RL) measures the longest in-sequence common n-grams and, as we may observe, the values are quite small, meaning that the generated text may use different vocabulary compared to the reference text. The ROUGE score is slightly higher for the greedy decoding method on the challenge set.", "cite_spans": [ { "start": 198, "end": 209, "text": "(Lin, 2004)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 349, "end": 356, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Lexical equivalence", "sec_num": "4.1" }, { "text": "While ROUGE is a recall-oriented metric, BLEU relies on a precision calculation of the overlapping n-grams and was primarily designed to measure the quality of automatic translation (Papineni et al., 2002) . A higher BLEU score is observed for the challenge set (Table 3) , which might indicate that the generated text suffers from noise, since it scores better when compared to a noisier reference text.", "cite_spans": [ { "start": 186, "end": 209, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 274, "end": 283, "text": "(Table 3)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Lexical equivalence", "sec_num": "4.1" }, { "text": "The geometric mean of n-gram overlaps computed by the BLEU score is complemented by the arithmetic mean of the n-gram overlap computed by the NIST metric. 
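One possible way to reproduce these n-gram metrics, assuming the rouge-score and nltk packages (the paper's own metric scripts may differ), is sketched below with a candidate and a target taken from Table 1:

```python
# Hedged example of the n-gram metrics above, assuming the rouge-score and nltk packages.
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu
from nltk.translate.nist_score import sentence_nist

candidate = "olympic athlete quickly moves toward the soccer ball about halfway way up on the clay court"
reference = "the boy must run from one end of the court to the other to throw the ball into the hoop"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)                        # R1 / R2 / RL
bleu = sentence_bleu([reference.split()], candidate.split())      # geometric mean of n-gram precisions
nist = sentence_nist([reference.split()], candidate.split())      # information-weighted arithmetic mean
print(rouge, bleu, nist)
```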
The NIST metric also accounts for the informativeness of n-grams (rare n-grams are given more weight) and is less sensitive to small differences between the candidate and reference texts (Doddington, 2002) . The NIST score shows no significant difference between the validation and challenge splits; however, the score itself is rather low, which indicates considerable lexical differences between the generated text and the reference text.", "cite_spans": [ { "start": 350, "end": 368, "text": "(Doddington, 2002)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical equivalence", "sec_num": "4.1" }, { "text": "In addition to the geometric and arithmetic means, a harmonic mean of unigram precision and recall is calculated with the METEOR metric (Banerjee and Lavie, 2005) . The advantage of this n-gram-based metric is that the calculation includes exact word matching, stemming and synonym matching, which lowers the impact of alternative vocabulary and grammatical forms used in the generated text compared to the gold human standard. Although the values appear low, it should be noted that the maximum correlation with human judgement achieved by METEOR was 0.403. 5 The METEOR score is slightly higher for the challenge set and is generally higher for the sampling decoding method.", "cite_spans": [ { "start": 136, "end": 162, "text": "(Banerjee and Lavie, 2005)", "ref_id": "BIBREF1" }, { "start": 562, "end": 563, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lexical equivalence", "sec_num": "4.1" }, { "text": "A recent shift towards neural-based metrics changed the very nature of the metric's input: words are represented by their embeddings, enabling the calculation of many properties that are unavailable when counting n-grams. In this system description, three neural-based automated metrics were used: BERTscore, which computes the cosine similarity of word embeddings and applies greedy matching to maximize the similarity score between words in the candidate and reference sentences (Zhang et al., 2020a) ; BLEURT, which uses a BERT model pre-trained on a large amount of synthetic examples and fine-tuned on human judgements (Sellam et al., 2020) ; and NUBIA, which aggregates the output predictions of neural models over a set of parameters (Kane et al., 2020) . As shown in Table 4 , there is no significant difference in either BERTscore or BLEURT between the validation and challenge sets. The F1 and precision of BERTscore are higher for greedy decoding, while recall is higher for sampling decoding. We used the HuggingFace load_metric() API from the Datasets library to calculate the BLEURT score: by default, the API loads the BLEURT-base checkpoint with the sequence length limited to 128 tokens, and the truncation of the original sentences resulted in an average score of -1.4 for both decoding methods in both splits; loading the BLEURT-large checkpoint with a sequence length of 512 improved the average score by 14%. 
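The BLEURT computation described above can be sketched as follows (assuming the load_metric() API of the Hugging Face Datasets library and its BLEURT configuration names; the exact call behind the reported scores may differ):

```python
# Sketch of switching from the default BLEURT-base (128-token) checkpoint to the large,
# 512-token one, as discussed above (configuration name assumed from the Datasets library).
from datasets import load_metric

bleurt = load_metric("bleurt", "bleurt-large-512")    # default would be the base, 128-token checkpoint

predictions = ["Olympic athlete quickly moves toward the soccer ball about halfway way up on the clay court."]
references = ["The boy must run from one end of the court to the other to throw the ball into the hoop."]

scores = bleurt.compute(predictions=predictions, references=references)["scores"]
print(sum(scores) / len(scores))                      # mean BLEURT score over the split
```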
The final values are shown in the above-mentioned Table 4 : higher scores are observed for the greedy decoding method in both splits; however, the overall values of the BLEURT metric are rather low (since the maximum score achievable with this metric is 1), which indicates the semantic distance of the model's generations from the benchmark reference text.", "cite_spans": [ { "start": 497, "end": 518, "text": "(Zhang et al., 2020a)", "ref_id": null }, { "start": 639, "end": 660, "text": "(Sellam et al., 2020)", "ref_id": "BIBREF12" }, { "start": 741, "end": 760, "text": "(Kane et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 775, "end": 782, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Semantic similarity", "sec_num": "4.2" }, { "text": "Table 4 columns: Samp., F BERT, P BERT, R BERT, BLEURT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic similarity", "sec_num": "4.2" }, { "text": "The NUBIA metric calculates such parameters as semantic relation, logical agreement, grammaticality, contradiction and the degree of new information (which might also signify irrelevance) in the candidate sentence with regard to the reference sentence. In view of the current experiment's scope, we show the mean values of the cumulative NUBIA score and the semantic relevance measurement in Table 5 . As we can see, the semantic relevance is considerably higher for the validation split.", "cite_spans": [], "ref_spans": [ { "start": 394, "end": 401, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Semantic similarity", "sec_num": "4.2" }, { "text": "Finally, lexical richness was calculated with four automated metrics: Mean Segmental Type-Token Ratio (MSTTR) (Johnson, 1944) , Distinct (Li et al., 2016) , Unique and Entropy (Shannon, 1948) .", "cite_spans": [ { "start": 127, "end": 142, "text": "(Johnson, 1944)", "ref_id": "BIBREF6" }, { "start": 154, "end": 171, "text": "(Li et al., 2016)", "ref_id": "BIBREF8" }, { "start": 193, "end": 208, "text": "(Shannon, 1948)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Vocabulary diversity", "sec_num": "4.3" }, { "text": "We can see in Table 6 that MSTTR is higher for sampling decoding and is equivalent for greedy decoding across the validation and challenge splits. The Distinct score is, surprisingly, higher for greedy decoding, but does not differ substantially between the validation and challenge splits. Table 7 shows that the number of unique unigrams and bigrams is higher for sampling decoding (which is expected, as sampling allows more creativity) and is substantially lower for the challenge set for both decoding methods. The Entropy is slightly higher for the sampling decoding method, and is generally higher for the validation set. 
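The diversity statistics can be approximated with a short sketch such as the following (illustrative code; the GEM evaluation suite applies its own tokenization, so the values will not match the tables exactly):

```python
# Toy computation of distinct-n, unique n-gram counts (U1/U2) and n-gram entropy (E1/E2).
import math
from collections import Counter

def ngram_stats(sentences, n=1):
    counts = Counter()
    for s in sentences:
        tokens = s.lower().split()
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(counts.values())
    distinct = len(counts) / total                                   # distinct-n ratio
    unique = sum(1 for c in counts.values() if c == 1)               # n-grams occurring exactly once
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return distinct, unique, entropy

generations = [
    "Olympic athlete quickly moves toward the soccer ball about halfway way up on the clay court.",
    "Olympic athlete then brings in the tennis ball straight back up down on the tennis court.",
]
print(ngram_stats(generations, n=1), ngram_stats(generations, n=2))
```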
The generally higher entropy for the validation set can be explained by inconsistencies in the challenge set, which correlate with possible inconsistencies in the model generations: a comparison with the cleaner validation set translates into the higher entropy required to map one probability distribution to another.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 21, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 293, "end": 300, "text": "Table 7", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Vocabulary diversity", "sec_num": "4.3" }, { "text": "This system description presented an experiment applying the CommonGen task from the GEM benchmark to hard-constrained text generation with an insertion-based transformer. The use of eleven automated metrics for measuring the generative performance of the POINTER model made it possible to detect issues in the model output and to reveal the advantages of a specific decoding method. For lexical equivalence, the METEOR metric seems the most relevant (since it uses stemmed word forms and synonym matching), given the score increase for the more creative text generations produced with the sampling decoding method. The semantic similarity measured with the BERTscore and BLEURT neural-based metrics showed that the validation and challenge splits yield semantically equivalent text generations, with a small difference between decoding methods, while the NUBIA metric, with its refined semantic relevance parameter, gave a better score for the validation split. The Entropy reflected the noisiness of the generated text for both decoding methods, and the Distinct score showed an unexpected boost for greedy decoding, which means fewer word repetitions than for sampling decoding. Finally, the Unique score showed that the sampling decoding method produced lexically richer text generations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "CommonGen has a private test set, which is not distributed by the GEM benchmark; therefore a comparison to the test set was not possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The complete lists of generated sentences, along with the scripts for calculating the metrics, can be found in a dedicated GitHub repository: https://github.com/asnota/metrics", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Non-European languages have even lower METEOR scores: 0.347 on the Arabic data and 0.331 on the Chinese data, according to the resource.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A learning algorithm for Boltzmann machines", "authors": [ { "first": "David", "middle": [ "H" ], "last": "Ackley", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Terrence", "middle": [ "J" ], "last": "Sejnowski", "suffix": "" } ], "year": 1985, "venue": "Cognitive science", "volume": "9", "issue": "1", "pages": "147--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. 1985. A learning algorithm for Boltzmann machines. 
Cognitive science, 9(1):147-169.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguis- tics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", "authors": [ { "first": "George", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5555/1289189.1289273" ] }, "num": null, "urls": [], "raw_text": "George Doddington. 2002. Automatic evaluation of ma- chine translation quality using n-gram co-occurrence statistics. San Francisco, CA, USA. Morgan Kauf- mann Publishers Inc.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Hierarchical neural story generation", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Dauphin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The curious case of neural text degeneration", "authors": [ { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Buys", "suffix": "" }, { "first": "Li", "middle": [], "last": "Du", "suffix": "" }, { "first": "Maxwell", "middle": [], "last": "Forbes", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Studies in language behavior: A program of research", "authors": [ { "first": "Wendell", "middle": [ "Johnson" ], "last": "", "suffix": "" } ], "year": 1944, "venue": "Psychological Monographs", "volume": "56", "issue": "2", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wendell Johnson. 1944. Studies in language behavior: A program of research. Psychological Monographs, 56(2):1-15.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Pelkins Ajanoh, and Mohamed Coulibali. 2020. 
NU-BIA: NeUral based interchangeability assessor for text generation", "authors": [ { "first": "Hassan", "middle": [], "last": "Kane", "suffix": "" }, { "first": "Yusuf", "middle": [], "last": "Muhammed", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Kocyigit", "suffix": "" }, { "first": "", "middle": [], "last": "Abdalla", "suffix": "" } ], "year": null, "venue": "Proceedings of the 1st Workshop on Evaluating NLG Evaluation", "volume": "", "issue": "", "pages": "28--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hassan Kane, Muhammed Yusuf Kocyigit, Ali Abdalla, Pelkins Ajanoh, and Mohamed Coulibali. 2020. NU- BIA: NeUral based interchangeability assessor for text generation. In Proceedings of the 1st Workshop on Evaluating NLG Evaluation, pages 28-37, On- line (Dublin, Ireland). Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A diversity-promoting objective function for neural conversation models", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objec- tive function for neural conversation models.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Commongen: A constrained text generation challenge for generative commonsense reasoning", "authors": [ { "first": "Wangchunshu", "middle": [], "last": "Bill Yuchen Lin", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Pei", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Choi", "suffix": "" }, { "first": "", "middle": [], "last": "Ren", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2020. Commongen: A constrained text genera- tion challenge for generative commonsense reason- ing.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bleurt: Learning robust metrics for text generation", "authors": [ { "first": "Thibault", "middle": [], "last": "Sellam", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ankur", "middle": [ "P" ], "last": "Parikh", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. Bleurt: Learning robust metrics for text gener- ation.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A mathematical theory of communication. The Bell system technical journal", "authors": [ { "first": "Claude", "middle": [ "E" ], "last": "Shannon", "suffix": "" } ], "year": 1948, "venue": "", "volume": "27", "issue": "", "pages": "379--423", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claude E Shannon. 1948. A mathematical theory of communication. The Bell system technical journal, 27(3):379-423.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "2020a. Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020a. 
Bertscore: Eval- uating text generation with bert.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Pointer: Constrained progressive text generation via insertionbased generative pre-training", "authors": [ { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guoyin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2020, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yizhe Zhang, Guoyin Wang, Chunyuan Li, Zhe Gan, Chris Brockett, and Bill Dolan. 2020b. Pointer: Con- strained progressive text generation via insertion- based generative pre-training. In EMNLP.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "num": null, "html": null, "text": "Examples of generated text compared to the ground truth.", "content": "" }, "TABREF1": { "type_str": "table", "num": null, "html": null, "text": ".008 0.109 sampling val. 0.142 0.008 0.106 greedy ch. 0.142 0.009 0.111 sampling ch. 0.136 0.008 0.103", "content": "
Sample | R1 | R2 | RL
greedy val. | 0.137 | 0.008 | 0.109
sampling val. | 0.142 | 0.008 | 0.106
greedy ch. | 0.142 | 0.009 | 0.111
sampling ch. | 0.136 | 0.008 | 0.103
" }, "TABREF2": { "type_str": "table", "num": null, "html": null, "text": "Lexical equivalence: ROUGE metric.", "content": "" }, "TABREF4": { "type_str": "table", "num": null, "html": null, "text": "Lexical equivalence: BLEU, NIST and ME-TEOR metrics.", "content": "
" }, "TABREF6": { "type_str": "table", "num": null, "html": null, "text": "", "content": "
Semantic similarity: BERTscore and BLEURT.
Samp. | NUBIA score | semantic rel.
greedy val. | 0.395 | 0.803
sampling val. | 0.523 | 0.743
greedy ch. | 0.406 | 0.35
sampling ch. | 0.52 | 0.335
" }, "TABREF7": { "type_str": "table", "num": null, "html": null, "text": "Semantic similarity: NUBIA.", "content": "" }, "TABREF9": { "type_str": "table", "num": null, "html": null, "text": "Diversity: MSTTR and Distinct.", "content": "
Sample | U1 | U2 | E1 | E2
greedy val. | 972 | 13115 | 5.818 | 10.241
sampling val. | 1285 | 20540 | 6.123 | 10.602
greedy ch. | 758 | 8172 | 5.788 | 9.638
sampling ch. | 1030 | 11693 | 6.051 | 9.915
" }, "TABREF10": { "type_str": "table", "num": null, "html": null, "text": "Diversity: Unique and Entropy.", "content": "" } } } }