|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:38:21.985447Z" |
|
}, |
|
"title": "Trainable Ranking Models to Evaluate the Semantic Accuracy of Data-to-Text Neural Generator", |
|
"authors": [ |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Garneau", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universit\u00e9 Laval", |
|
"location": { |
|
"region": "Qu\u00e9bec", |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Luc", |
|
"middle": [], |
|
"last": "Lamontagne", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universit\u00e9 Laval", |
|
"location": { |
|
"region": "Qu\u00e9bec", |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we introduce a new embeddingbased metric relying on trainable ranking models to evaluate the semantic accuracy of neural data-to-text generators. This metric is especially well suited to semantically and factually assess the performance of a text generator when tables can be associated with multiple references and table values contain textual utterances. We first present how one can implement and further specialize the metric by training the underlying ranking models on a legal Data-to-Text dataset. We show how it may provide a more robust evaluation than other evaluation schemes in challenging settings using a dataset comprising paraphrases between the table values and their respective references. Finally, we evaluate its generalization capabilities on a well-known dataset, WebNLG, by comparing it with human evaluation and a metric recently introduced based on natural language inference. We then illustrate how it naturally characterizes, both quantitatively and qualitatively, omissions and hallucinations.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we introduce a new embeddingbased metric relying on trainable ranking models to evaluate the semantic accuracy of neural data-to-text generators. This metric is especially well suited to semantically and factually assess the performance of a text generator when tables can be associated with multiple references and table values contain textual utterances. We first present how one can implement and further specialize the metric by training the underlying ranking models on a legal Data-to-Text dataset. We show how it may provide a more robust evaluation than other evaluation schemes in challenging settings using a dataset comprising paraphrases between the table values and their respective references. Finally, we evaluate its generalization capabilities on a well-known dataset, WebNLG, by comparing it with human evaluation and a metric recently introduced based on natural language inference. We then illustrate how it naturally characterizes, both quantitatively and qualitatively, omissions and hallucinations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Data-to-Text (D2T) generation (Kukich, 1983; McKeown, 1985; Reiter and Dale, 1997 ) is a specialized task of natural language generation (NLG) where a model takes as input (semi)-structured data (e.g. a table) and generates a textual utterance that is both syntactical and semantically faithful to the input. Several architectures were proposed to solve this task. They may rely strictly on templates (Gatti et al., 2018; Puzikov and Gurevych, 2018; Wiseman et al., 2018) , separate planning (what to say) from generation (how to say it) (Puduppully et al., 2019; Moryossef et al., 2019) or be a fully derivable neural architecture (Lebret et al., 2016; Wiseman et al., 2017; Gehrmann et al., 2018) . While achieving interesting performance at natural language generation tasks (Lewis et al., 2020; Gehrmann et al., 2021) , pre-trained neural language models (hence neural architectures in general), are prone to hallucinate facts (Du\u0161ek et al., 2018) which brings their usability at stake in sensitive domains such as the legal one. In this paper, we wish to promote the usability of neural architecture by proposing a new trainable automatic evaluation metric well suited to evaluate the semantic accuracy of such D2T generator. This metric is designed as a two factor \"round-trip evaluation\" in order to assess the accuracy a given generated hypothesis. First, we use the hypothesis to try to recreate the original table by ranking its values amongst all other values in the dataset. Then, we retrieve similar references amongst all other references in the dataset by ranking them still using that same hypothesis. We illustrate both round-trip evaluation scheme (table reconstruction and reference ranking) in Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 44, |
|
"text": "(Kukich, 1983;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 45, |
|
"end": 59, |
|
"text": "McKeown, 1985;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 60, |
|
"end": 81, |
|
"text": "Reiter and Dale, 1997", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 421, |
|
"text": "(Gatti et al., 2018;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 449, |
|
"text": "Puzikov and Gurevych, 2018;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 450, |
|
"end": 471, |
|
"text": "Wiseman et al., 2018)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 563, |
|
"text": "(Puduppully et al., 2019;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 587, |
|
"text": "Moryossef et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 632, |
|
"end": 653, |
|
"text": "(Lebret et al., 2016;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 654, |
|
"end": 675, |
|
"text": "Wiseman et al., 2017;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 676, |
|
"end": 698, |
|
"text": "Gehrmann et al., 2018)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 778, |
|
"end": 798, |
|
"text": "(Lewis et al., 2020;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 799, |
|
"end": 821, |
|
"text": "Gehrmann et al., 2021)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 931, |
|
"end": 951, |
|
"text": "(Du\u0161ek et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1714, |
|
"end": 1722, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "Our approach is well suited to semantically and factually assess a generator's performance in cases where the tables can be associated with multiple references, and the tables' values contain textual utterances. We present how one can further specialize the proposed evaluation metric by training the underlying ranking models on the target dataset, hence providing a more robust evaluation. Re-lying on the mean average precision, we present how it naturally characterizes, both quantitatively and qualitatively, omissions and hallucinations of a given generator. This framework offers great flexibility in how it can be implemented and further improved. We identify two main components that can be tuned to improve the efficiency of our proposed metric when evaluating NLG systems; a similarity function between reference texts and the underlying ranking models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, having a metric where the efficiency is highly dependent on how it is implemented is highly problematic for an absolute comparison against other metrics or evaluation methodologies. This is why we propose a way to \"fix a metric\" i.e. how good, on the gold annotations, one implementation of the metric can be? Having a \"fixed metric\" then allows us to evaluate NLG systems against each other properly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In our experiments, we first apply our metric on a challenging dataset in the legal domain, Plum2Text (Garneau et al., 2021) . We show how the specialization of the ranking models can be beneficial or even necessary. More precisely, we illustrate these benefits when paraphrasing between the data (i.e. table) and the reference text highly characterizes the dataset in hand. Then, we illustrate its generalization capabilities even in simpler settings on a well-known D2T dataset, WebNLG (Gardent et al., 2017) . We show how it is able to discriminate a set of generators, and correlates positively with human judgment. Our contribution is thus a new trainable automatic D2T evaluation metric that naturally characterizes both omissions and hallucinations of neural architectures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 124, |
|
"text": "(Garneau et al., 2021)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 510, |
|
"text": "(Gardent et al., 2017)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Evaluating natural language generated text is a very hard task. Reiter and Belz (2009) and Reiter (2018) question the validity of widely used metrics. Sai et al. (2020) provide an extensive survey of the field, and more precisely separate the D2T evaluation metrics along 2 dimensions: either they use the table t or not (i.e. Table-Free) , and either they are trained or untrained metrics. For instance, automatic evaluation metrics such as BLEU (Papineni et al., 2002) , ROUGE (Lin, 2004) or METEOR (Banerjee and Lavie, 2005) are not trained and neither use the table t. They only partially account for the faithfulness of a given generated hypothesis w.r.t its associated references. Even though these metrics are widely used, they fall short to capture factual aspects in a D2T setting, and correlate poorly with human judgment (Liu et al., 2016; Novikova et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 86, |
|
"text": "Reiter and Belz (2009)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 91, |
|
"end": 104, |
|
"text": "Reiter (2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 470, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 490, |
|
"text": "(Lin, 2004)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 527, |
|
"text": "(Banerjee and Lavie, 2005)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 832, |
|
"end": 850, |
|
"text": "(Liu et al., 2016;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 851, |
|
"end": 873, |
|
"text": "Novikova et al., 2017)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 338, |
|
"text": "Table-Free)", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Data-to-Text Generation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "BLEURT (Sellam et al., 2020 ) is a metric trained on English texts that is designed to better model human judgment on generated texts. However, it does not take into account the input table. Sun and Zhou (2012) proposed iBLEU for paraphrase generation, an adaptation of the BLEU score that takes into account the context (the original phrase), the generated hypothesis and the reference. Its variant, BLEU-T, rewards an hypothesis h that overlaps with the content of the input table t as follows;", |
|
"cite_spans": [ |
|
{ |
|
"start": 7, |
|
"end": 27, |
|
"text": "(Sellam et al., 2020", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 210, |
|
"text": "Sun and Zhou (2012)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Data-to-Text Generation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "BLEU-T = \u03b1BLEU(h, t) + (1 \u2212 \u03b1)BLEU(h, r) where \u03b1 is a parameter that balances faithfulness between the table t and the reference r. Table 1 : Different metrics and their position in the evaluation spectrum, including the two variants of our proposed metric which can be trained or not. Wiseman et al. (2017) proposed an extractive evaluation scheme where a model tries to identify relations in h between a pair of entities in order to recreate the table t that was used for the generation. Matching between the extracted entities and the table values is simply done via string-tostring comparison since the values in their dataset are short textual utterance of up to a few tokens. Dhingra et al. (2019) extended this metric (PAR-ENT) by considering overlapping n-grams in the generated hypothesis h with both the table t and the reference text r. More recently, Du\u0161ek and Kasner (2020) proposed a metric that relies strictly on a pre-trained version of a natural language inference model (which will be referenced as \"NLI\" from now on) that verifies if a given hypothesis is entailed or not by the input table. They framed the evaluation as a categorical result given an hypothesis h (e.g. as being \"Correct\" or \"Incorrect\") but one can also use the underlying NLI model's confidence score for a softer evaluation. Using ranking models supplement their approach by identifying hallucinations and omissions (to some extent) and quantitatively characterizes both phenomena.", |
|
"cite_spans": [ |
|
{ |
|
"start": 286, |
|
"end": 307, |
|
"text": "Wiseman et al. (2017)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 682, |
|
"end": 703, |
|
"text": "Dhingra et al. (2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 863, |
|
"end": 886, |
|
"text": "Du\u0161ek and Kasner (2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 139, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Data-to-Text Generation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Trained metrics using the context (in our case t) have been proposed for dialogue generation tasks (Lowe et al., 2017; Tao et al., 2018) . However none of them is suitable for a D2T setting where the characterization of hallucinations (and to some extent omissions) are required in order to use a neural D2T generator in production. We thus wish to fill in this gap by presenting in the following section a new evaluation scheme. This scheme offers the advantage to exploit both the table t and the reference r, to be based on ranking models, and to be either trained or not. We illustrate in Table 1 where the metrics discussed in this section lie in the table-table-free and trained-untrained spectrum.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 118, |
|
"text": "(Lowe et al., 2017;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 136, |
|
"text": "Tao et al., 2018)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 593, |
|
"end": 600, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Data-to-Text Generation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To assess the accuracy of a generated hypothesis h, it can be useful to consider both the input table t and the target reference r, i.e. validate the correctness of h according to its table and its reference. We thus propose a way to assess the fidelity of h by reconstructing the table t (h ) t) and by retrieving its corresponding reference(s) r (h ) r) using ranking models. This premise is highly motivated by the fact that similar text descriptions should be associated to semantically similar table contents. In both settings, the ranking models are evaluated using the Average Precision (AP), which we describe next.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data-to-Text Evaluation through Ranking", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Borrowing from information retrieval terminology, in the context of h ) t, we treat every value v in the table t as a document and h as a query. Different from the information extraction method proposed by Wiseman et al. 20171 , we wish to recreate t by finding the corresponding set of values v i amongst the set of possible table values V , given a ranker M v and the query h i . To do so, we retrieve a ranked list of all the possible valuesV = M v (h i ) and compute the Precision at k (P@k) of the query h i in the following way;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table Reconstruction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "P@k i = |v i @k \u2229V @k| |V @k| (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table Reconstruction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We can then compute the Average Precision of the table reconstruction (AP h ) t ) given the following formula;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table Reconstruction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "AP h ) t = 1 |v i | |V | k=1 P@k i ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Table Reconstruction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "giving us a sense of how well the ranker M v is able to retrieve the set of table values v i corresponding to the hypothesis h i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table Reconstruction", |
|
"sec_num": "3.1" |
|
}, |
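
{

"text": "To make Equations (1) and (2) concrete, the following Python sketch computes the average precision of the table reconstruction for one hypothesis. It is an illustrative implementation, not the authors' code: it assumes a hypothetical ranker that returns every candidate table value sorted by decreasing relevance to the hypothesis, and it accumulates P@k at the ranks where a gold value is retrieved, following the standard definition of average precision.\n\nfrom typing import List, Set\n\ndef average_precision_h_to_t(ranked_values: List[str], gold_values: Set[str]) -> float:\n    # ranked_values: all candidate table values V, ordered by the ranker M_v for hypothesis h_i\n    # gold_values: the values v_i actually present in the table associated with h_i\n    hits = 0\n    precision_sum = 0.0\n    for k, value in enumerate(ranked_values, start=1):\n        if value in gold_values:\n            hits += 1\n            precision_sum += hits / k  # P@k, accumulated at relevant ranks only\n    return precision_sum / len(gold_values) if gold_values else 0.0\n\n# mAP over a set of hypotheses (Equation 6) is simply the mean of these AP values.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Table Reconstruction",

"sec_num": "3.1"

},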
|
{ |
|
"text": "In the context of h ) r, we still treat h as the query, but r as the document. In a case where multiple references can be deduced by the same table (or similar ones), it makes sense to take into consideration other references that share some similarities with the real reference. More precisely, a table t could refer to multiple references with a certain degree of correspondence, hence these references can be seen as similar documents. We can even push this further by assuming that if two tables t i and t j share similarities amongst their values, their corresponding references r i and r j will be semantically similar.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Given t i , r i and t j , r j , we define the following similarity function;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f : (t i , t j ) \u2192 d i,j", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "such that d i,j is the degree of similarity between between two tables t i and t j . This similarity function is a proxy to the semantic similarity between r i and r j . For instance, by using the intersection over union of t i and t j table values, we use the following function;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "f = (t i \u2229 t j )/(t i \u222a t j )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": ". We thus consider, for a given r i and its associated table t i , the set of references where d i, * > \u03b4 as being relevant references 2 . Given an hypothesis h i , we can then query the set of references R in order to get a ranked list of", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "referencesR i = M r (h i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where M r is a ranking model for the references. We define R * i as being the ordered gold set of references according to f .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
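
{

"text": "As an illustration of the similarity function f and of how relevant references are selected, here is a minimal Python sketch. It is not the authors' implementation: it assumes tables are represented as sets of value strings and uses \u03b4 = 0, as in footnote 2, so that references with d_{i,j} > 0 are deemed relevant.\n\nfrom typing import List, Set, Tuple\n\ndef table_similarity(t_i: Set[str], t_j: Set[str]) -> float:\n    # Equation (3) instantiated as the intersection over union of the two tables' values\n    if not t_i and not t_j:\n        return 0.0\n    return len(t_i & t_j) / len(t_i | t_j)\n\ndef relevant_references(i: int, tables: List[Set[str]], delta: float = 0.0) -> List[Tuple[int, float]]:\n    # Indices and degrees d_{i,j} of the references considered relevant for example i,\n    # sorted by decreasing similarity; this ordering defines the gold list R*_i.\n    sims = [(j, table_similarity(tables[i], tables[j])) for j in range(len(tables))]\n    return sorted([(j, d) for j, d in sims if d > delta], key=lambda x: -x[1])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reference Ranking",

"sec_num": "3.2"

},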
|
{ |
|
"text": "Let the Cumulative Relevance Score (CRS) of h i be k j=1 d i,j . We define the estimated and true CRS at k being CRS applied onR i and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "R * i , yieldingR i -CRS@k and R * i -CRS@k respec- tively. Formally,R i -CRS@k = k j=1 f (h i ,r j ) and R * i -CRS@k = k j=1 f (h i , r * j )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": ". We thus compute Precision at k in the following way;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P@k i =R i -CRS@k i R * i -CRS@k i", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "obtaining the AP h ) r of h i with the following formula;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "AP h ) r = 1 R * i -CRS |R| k=1 P@k i \u00d7 d i,k", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where d i,k , properly scales P@k i so that AP h ) r is between 0 and 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
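
{

"text": "The following sketch is one possible reading of Equations (4) and (5); it is illustrative rather than the authors' code. Given the relevance degrees of the references returned by the ranker M_r (in ranked order) and the degrees defining the gold ordering R*_i, it accumulates the estimated and true cumulative relevance scores and averages the resulting precision values, weighted by d_{i,k}.\n\nfrom typing import List\n\ndef average_precision_h_to_r(retrieved_degrees: List[float], gold_degrees: List[float]) -> float:\n    # retrieved_degrees: d_{i,k} of the references returned by M_r, in ranked order (length |R|)\n    # gold_degrees: d_{i,j} of the relevant references, which we sort to obtain R*_i\n    gold_sorted = sorted(gold_degrees, reverse=True)\n    total_gold_crs = sum(gold_sorted)  # R*_i-CRS, the normalizer of Equation (5)\n    if total_gold_crs == 0.0:\n        return 0.0\n    retrieved_crs, gold_crs, ap = 0.0, 0.0, 0.0\n    for k, d_k in enumerate(retrieved_degrees):\n        retrieved_crs += d_k\n        gold_crs += gold_sorted[k] if k < len(gold_sorted) else 0.0\n        p_at_k = retrieved_crs / gold_crs if gold_crs > 0 else 0.0  # Equation (4)\n        ap += p_at_k * d_k  # each precision value is weighted by d_{i,k}, as in Equation (5)\n    return ap / total_gold_crs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reference Ranking",

"sec_num": "3.2"

},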
|
{ |
|
"text": "Finally, in both settings, we respectively compute the mean Average Precision (mAP) over the set of Hypotheses H;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "mAP h ) t = 1 |H| |H| i=1 AP h ) t i (6) mAP h ) r = 1 |H| |H| i=1 AP h ) r i (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where mAP h ) t illustrate the capacity of M v to rank the hypotheses H accordingly to their respective table values, and mAP h ) r the capacity of M r to rank the hypotheses H according to their similar references (and implicitly their respective tables).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In an ideal world, evaluating on gold annotations, both ranking models M v and M r should obtain a mAP of 1. In practice, however, we can only hope that each model will be as close as possible to 1, mostly due to noise in the data, annotation errors, or to the distribution of the data itself. In the next section, we introduce a robust ranking model based on sentence embeddings applied in both h ) t and h ) r settings. We also introduce how this model can be trained on the dataset in hand.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference Ranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We consider the embedding-based ranking models using the information retrieval version of Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) , based on the BERT architecture (Devlin et al., 2019) 3 . More concretely, we use SBERT to encode both the set of possible table values V and the set of references R into a list of vector representations (i.e. matrices) V and R. We then encode the hypothesis h into its respective vector representation h. The models M v and M r re-order the vectors in V and R according to the cosine similarity with h, yieldingV andR needed for the computation of our metric. Reimers and Gurevych (2019) showed that finetuning SBERT on the downstream task's dataset can lead to substantial improvements. To this end, we propose to fine-tune SBERT in both settings, h ) t and h ) r, by creating their very specific training datasets. In our experiments, we used the multilingual version of BERT (Devlin et al., 2019) as the base model, XLM-R (Conneau et al., 2020; Reimers and Gurevych, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 140, |
|
"text": "(Reimers and Gurevych, 2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 195, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 603, |
|
"end": 630, |
|
"text": "Reimers and Gurevych (2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 921, |
|
"end": 942, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 968, |
|
"end": 990, |
|
"text": "(Conneau et al., 2020;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 991, |
|
"end": 1018, |
|
"text": "Reimers and Gurevych, 2020)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Ranking Models", |
|
"sec_num": "3.3" |
|
}, |
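
{

"text": "For illustration, ranking with SBERT embeddings and cosine similarity can be sketched as follows, assuming the sentence-transformers library. The checkpoint name is only an example of a multilingual SBERT model and is not necessarily the one used in this work.\n\nfrom sentence_transformers import SentenceTransformer, util\n\n# Example multilingual SBERT checkpoint; the exact model used in the paper may differ.\nmodel = SentenceTransformer('paraphrase-multilingual-mpnet-base-v2')\n\ndef rank(hypothesis: str, candidates: list) -> list:\n    # Encode the hypothesis (query) and the candidates (table values V or references R),\n    # then order the candidates by decreasing cosine similarity with the hypothesis.\n    h_emb = model.encode(hypothesis, convert_to_tensor=True)\n    c_emb = model.encode(candidates, convert_to_tensor=True)\n    scores = util.cos_sim(h_emb, c_emb)[0]\n    order = scores.argsort(descending=True)\n    return [candidates[int(i)] for i in order]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Ranking Models",

"sec_num": "3.3"

},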
|
{ |
|
"text": "In the h ) t setting, for every value v i,j in table t i with its corresponding reference r i , we create a positive pair (v i,j , r i , 1). We then randomly sample negative examples such that (v i,j , r m ) is not within the original dataset and fine-tune SBERT to discriminate positive from negative pairs,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Ranking Models", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "(v i,j , r m , 0).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Ranking Models", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In the h ) r setting, we use Equation 3 to determine the similarity between two given references r i and r j . Using the cartesian product of R \u00d7 R, we thus generate every possible reference pairs with their respective similarity value, (r i , r j , d i,j ) and fine-tune SBERT to maximize the similarity between similar pairs, and minimize it between dissimilar pairs. In practice, we down sample pairs where d i,j = 0 since it corresponds to 80% of the generated pairs. While this training procedure is very generic an applicable to most D2T datasets, it can be modified to suit one's specifics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Ranking Models", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In both settings, we split the training data in a train and validation sets of 80% and 20% respectively. We fine-tuned SBERT for 4 epochs, keeping only the best performing model on the validation set. Regardless of the downstream dataset used, using a GeForce 2080 graphic card, this process took 4 hours for each setting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Ranking Models", |
|
"sec_num": "3.3" |
|
}, |
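
{

"text": "A sketch of the fine-tuning step in the h \u2192 t setting, assuming the sentence-transformers training API. The pair construction and hyperparameters follow the description above (binary labels, random negatives, 4 epochs; the 80/20 validation split and model selection are omitted for brevity), but the exact training code of the paper may differ, and the toy data below is purely illustrative.\n\nimport random\nfrom torch.utils.data import DataLoader\nfrom sentence_transformers import SentenceTransformer, InputExample, losses\n\nmodel = SentenceTransformer('paraphrase-multilingual-mpnet-base-v2')  # example checkpoint\n\n# Toy data: each table is a list of value strings aligned with one reference text.\ntables = [['assault', 'guilty plea'], ['possession of substance']]\nreferences = ['The accused pleaded guilty to assault.', 'The accused was charged with possession of a substance.']\n\nexamples = []\nfor t_i, r_i in zip(tables, references):\n    for v in t_i:\n        examples.append(InputExample(texts=[v, r_i], label=1.0))  # positive pair (v, r_i, 1)\n        r_m = random.choice(references)  # negative pair (v, r_m, 0), kept only if absent from the dataset\n        if v not in tables[references.index(r_m)]:\n            examples.append(InputExample(texts=[v, r_m], label=0.0))\n\ntrain_loader = DataLoader(examples, shuffle=True, batch_size=16)\ntrain_loss = losses.CosineSimilarityLoss(model)\nmodel.fit(train_objectives=[(train_loader, train_loss)], epochs=4)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Ranking Models",

"sec_num": "3.3"

},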
|
{ |
|
"text": "An insightful way of qualitatively analyzing the capacity of a given neural D2T generator is by characterizing omissions and hallucinations. More precisely, we want to know which element from the table may have been forgotten by the generator and which element in the generated text may be consid-ered as hallucinations (Du\u0161ek et al., 2018; Du\u0161ek and Kasner, 2020) . We acknowledge the fact that characterizing omissions may not be relevant in cases we would only like to describe highlights of a basketball game (Wiseman et al., 2017) , especially when there is a seperate planning step (Puduppully et al., 2019; Moryossef et al., 2019) . However, in the legal domain, describing a semi-structured document in its whole (omissions), and solely this document (hallucinations), is of high importance to foster a truthful view of a legal system (Beauchemin et al., 2020).", |
|
"cite_spans": [ |
|
{ |
|
"start": 320, |
|
"end": 340, |
|
"text": "(Du\u0161ek et al., 2018;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 364, |
|
"text": "Du\u0161ek and Kasner, 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 513, |
|
"end": 535, |
|
"text": "(Wiseman et al., 2017)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 613, |
|
"text": "(Puduppully et al., 2019;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 614, |
|
"end": 637, |
|
"text": "Moryossef et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characterizing omissions and hallucinations", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Using a ranking-based metric, we can potentially identify which elements from the table are considered as omissions and what has been hallucinated in the hypothesis. Intuitively, ranking models basically offer this characterization for free. Indeed, on the table reconstruction side, considering the gold set of n values v and the set of top n returned valuesv from M t , we obtain the omissions by computing the set difference of o t = v \u2212v. As per the implicit definition of hallucinations and assuming a neural generator has been trained on a training set, we define hallucinations as being values from the training set that have been highly ranked. Therefore, we compute hallucinations as a t =v \u2212 v such that a t \u2208 V train . On the reference retrieval side, we consider the first retrieved reference and its associated table (or set of values),v. We can similarly compute omissions and hallucinations as on the table reconstruction side, obtaining o r and a r . A given value v i \u2208 v will considered omitted if it is present in o t and o r (i.e. 1.0), partially omitted if it is present in either o t or o r (i.e. 0.5) and not omitted if it is not present in any of the sets (i.e. 0.0). The same logic applies to hallucinations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characterizing omissions and hallucinations", |
|
"sec_num": "3.4" |
|
}, |
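
{

"text": "The set-difference characterization described above can be sketched as follows. This is an illustrative reading of the procedure, assuming that values are compared as strings and that the two ranker-side estimates (the top-n values from table reconstruction and the values of the first retrieved reference's table) are already available.\n\nfrom typing import Dict, Set\n\ndef characterize(gold_values: Set[str], top_values: Set[str],\n                 first_ref_values: Set[str], train_values: Set[str]) -> Dict[str, Dict[str, float]]:\n    # gold_values: the n values of the true table; top_values: top-n values returned by M_v;\n    # first_ref_values: values of the table associated with the first reference returned by M_r;\n    # train_values: all values seen in the training set, used to flag hallucinations.\n    o_t = gold_values - top_values                   # omissions, table-reconstruction side\n    a_t = (top_values - gold_values) & train_values  # hallucinations, table-reconstruction side\n    o_r = gold_values - first_ref_values             # omissions, reference side\n    a_r = (first_ref_values - gold_values) & train_values\n    omissions = {v: 1.0 if (v in o_t and v in o_r) else 0.5 if (v in o_t or v in o_r) else 0.0\n                 for v in gold_values}\n    hallucinations = {v: 1.0 if (v in a_t and v in a_r) else 0.5 for v in a_t | a_r}\n    return {'omissions': omissions, 'hallucinations': hallucinations}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Characterizing omissions and hallucinations",

"sec_num": "3.4"

},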
|
{ |
|
"text": "Roughly speaking, the mean average precision is a quantitative proxy loosely characterizing the omissions and hallucinations. Indeed, the average precision will consider not only the top n, but up to the last value that should have been retrieve in the list, thus making it an optimistic approximation of omissions and hallucinations. On the reference side, then again our metric is an optimistic approximation of omissions and hallucination in the sense that we consider every other references having a similarity score > 0. One could design a more exact quantitative approximation of omissions and hallucinations by only considering the first returned reference by the ranking model and analyzing its table with the corresponding true table t, as previ-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characterizing omissions and hallucinations", |
|
"sec_num": "3.4" |
|
}, |
|
{

"text": "Table 2: mAP in the h \u2192 t and h \u2192 r settings, and their average. Elasticsearch: 0.588, 0.596, 0.592. SBERT Untrained: 0.274, 0.584, 0.429. SBERT Trained: 0.831, 0.871, 0.851.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Characterizing omissions and hallucinations",

"sec_num": "3.4"

},
|
{ |
|
"text": "In this section, we first illustrate the benefits of fine-tuning our proposed metric on the Plum2Text target dataset (Garneau et al., 2021) . We then show how our metric can discriminate generators, and analyze omission and hallucination rates using WebNLG (Gardent et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 139, |
|
"text": "(Garneau et al., 2021)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 279, |
|
"text": "(Gardent et al., 2017)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this section, we apply our metric on a challenging French dataset in the legal field, Plum2Text (Garneau et al., 2021) , comprising references being a paraphrase of the table's values. It is composed of pairs of plumitif -description. A plumitif is a structured document containing all the key steps of a judicial case. The purpose of this dataset is to make the plumitifs more understandable to the population by generating a description from the input data. There is an interesting exercise when comes the time to evaluate the effectiveness of a metric, especially when it is embedding-based. We dub this exercise as \"fixing the metric\", whereas we apply it on the gold annotations, i.e. h = r. This exercise tells us, to some extent, \"how far we can go\" with a given generator w.r.t the evaluation metric in the case we would know the answer. Using metrics based on word overlap would obviously yield a perfect score in the reference ranking setting. In these experiments, we thus only consider \"fixing the metric\" on the Plum2Text dataset since we do not have access to human evaluation over systems' outputs. This illustrates the challenge posed by the Plum2Text dataset (paraphrasing) as well as the benefits of fine-tuning our metric. As a baseline, we use Elasticsearch's ranking model (ES) based on word co-occurrence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 121, |
|
"text": "(Garneau et al., 2021)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments with Plum2Text", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We can see from Table 4 : Qualitative results of both ES and SBERT fine-tuned in the h ) r setting. We show the ranking of a paraphrased reference r j according to the hypothesis h i (in the case of Gold annotations, r i ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 23, |
|
"text": "Table 4", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments with Plum2Text", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Gurevych 2019, a pre-trained version of SBERT applied without fine-tuning on Plum2Text struggles at ranking, especially in the h ) t setting (comprising a lot of paraphrasing). However a fine-tuned version of SBERT on Plum2Text yields strong performance, achieving a score of 0.851. There is inevitably a trade-off between using an untrained or trained version of our metric. From the results shown in Table 2 and in the context of paraphrases and synonyms, an embedding-based ranking model is definitely improving the evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 402, |
|
"end": 409, |
|
"text": "Table 2", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments with Plum2Text", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Supporting our claim that an embedding-based ranking model would be better at evaluating the generated hypotheses, especially when paraphrases characterize the dataset in hand, we qualitatively compare the ranking capabilities of the ES ranker and SBERT fine-tuned model. In the h ) t setting, we extract references r that are a paraphrased version of a given table value v. As illustrated in Table 3 4 , SBERT learned synonyms such as \"sexual contacts\" and \"sexual touching\". SBERT also learned that \"possession of substance\" is related to drugs like cannabis or coca\u00efne, and is thus able to properly rank the associated ref-erence even though there are different types of \"possessions\" (e.g. child pornography, illegal firearms). As illustrated in the results, simply relying on word co-occurrences yields poor ranking in cases where synonyms are used and shows that an embeddingbased ranking model is clearly providing a better performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 393, |
|
"end": 400, |
|
"text": "Table 3", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Rankers' Behavior on Paraphrases and Synonyms", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "In the h ) r setting, we analyze again the ranking behaviour of both rankers on paraphrased references. We can see in Table 4 that SBERT learned the various ways \"pleading guilty\" can be expressed (rows 1 and 2). It also learned that \"communicating with people of 16 years and under using a computer\" is similar to \"computer luring of people between 13 and 17 years old\". In most cases, we did not see the paraphrased reference among the top 200 results the ES ranker returned (N/A).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 125, |
|
"text": "Table 4", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Rankers' Behavior on Paraphrases and Synonyms", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "To motivate the need to fine-tune rankers (hence the metric) for specific types of D2T datasets, we illustrate the performance of the recently introduced metric PARENT (Dhingra et al., 2019) on the gold annotations of Plum2Text, referred to as \"Original\" in Table 5 . Without any surprises, the precision is 1.0 when h = r. The lower performance on the recall (and thus F1-Score) is due to paraphrasing, an inherent problem Dhingra et al. (2019) raised when they introduced their metric. To better illustrate this problem, we evaluate PARENT on a augmented version of Plum2Text i.e. for every pair r i , r j where d i,j = 1.0 and r i = r j , we create a paraphrased example (t i , r j , r i ) where t i is the table, r i the hypothesis and r j its associated reference. We can see in Table 5 that the results drop significantly due to the word overlap evaluation behavior, even though they should be similar to the Original Dataset. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 424, |
|
"end": 445, |
|
"text": "Dhingra et al. (2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 265, |
|
"text": "Table 5", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 784, |
|
"end": 791, |
|
"text": "Table 5", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Motivation over a metric based on word overlap", |
|
"sec_num": "4.1.2" |
|
}, |
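
{

"text": "A sketch of how the paraphrase-augmented evaluation set described above can be built; this is illustrative only, with tables represented as lists of value strings and the intersection-over-union similarity of Section 3.2 inlined as a helper.\n\nfrom typing import List, Tuple\n\ndef iou(t_i: List[str], t_j: List[str]) -> float:\n    # Intersection over union of two tables' value sets (the similarity f of Section 3.2).\n    return len(set(t_i) & set(t_j)) / len(set(t_i) | set(t_j))\n\ndef build_augmented_set(tables: List[List[str]], references: List[str]) -> List[Tuple[List[str], str, str]]:\n    # For every pair of distinct references whose tables have d_{i,j} = 1.0, create a\n    # paraphrased example (t_i, reference r_j, hypothesis r_i).\n    augmented = []\n    for i in range(len(references)):\n        for j in range(len(references)):\n            if i != j and references[i] != references[j] and iou(tables[i], tables[j]) == 1.0:\n                augmented.append((tables[i], references[j], references[i]))\n    return augmented",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Motivation over a metric based on word overlap",

"sec_num": "4.1.2"

},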
|
{ |
|
"text": "In this section, we show that, even if our metric is well-suited for D2T dataset comprising textual utterance as table values, it generalizes to more common D2T settings as in WebNLG (Gardent et al., 2017) . Shimorina et al. (2018) provide human evaluation on the different systems' outputs (listed in Figure 2 , as well as on the gold annotations (webnlg). They considered three evaluation dimensions; Fluency, Grammar, and Semantic. In our case, we consider only the Semantic dimension. First, we simply consider \"fixing the metric\", i.e. we apply our metric on the webnlg team's outputs (i.e. the gold annotations) and compare its values against the human evaluation 5 and the recently introduced metric by Du\u0161ek and Kasner (2020) (NLI) . Results can be found in Figure 2 . Intuitively, human evaluation should be very close to one (0.92) 6 , and our trained metric achieves 0.88, which is close to human evaluation. The NLI metric average score is 0.73.", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 205, |
|
"text": "(Gardent et al., 2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 231, |
|
"text": "Shimorina et al. (2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 710, |
|
"end": 733, |
|
"text": "Du\u0161ek and Kasner (2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 734, |
|
"end": 739, |
|
"text": "(NLI)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 842, |
|
"end": 843, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 310, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 766, |
|
"end": 774, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments with WebNLG", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We then run the same experiment on a set of generators' outputs (Shimorina et al., 2018) in order to assess the capabilities of our proposed metric to discriminate amongst systems (i.e. teams). By looking at Figure 2 , we can see that there is an agreement on the macro level between the human 5 Human annotators used a three-point Likert scale (1 = Incorrect, 2 = Medium, 3 = Correct) and answers are averaged over multiple annotators. We normalized the scores between 0-1 for an easier comparison 6 Out of the 224 human evaluation on the gold annotations, 38 have a score below or equal to 0.77. evaluation, our metric, and NLI in order to discriminate between the teams that performed well from the teams that did not. While the sample size is rather small (10 teams), the Pearson correlation score between human evaluation and our metric, human evaluation and NLI, are both 0.92 (\u03c1 < 0.005). The average difference between human evaluation and our metric is 0.09 while human evaluation and NLI is 0.26. On the micro level, correlation scores show another story; human evaluation and our metric yield a Pearson correlation score of 0.47, human evaluation and NLI 0.59, our metric and NLI 0.43 (\u03c1 < 0.005 in every cases). While there is a slight correlation between the different evaluation scheme, it seems that they do not always agree at the utterance level, contradicting one another in some cases. This point has already been raised by Du\u0161ek and Kasner (2020) , suggesting that in some cases, the human evaluation is not accurate. However we decide to leave this specific analysis for future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 88, |
|
"text": "(Shimorina et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 295, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1443, |
|
"end": 1466, |
|
"text": "Du\u0161ek and Kasner (2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 216, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments with WebNLG", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We further analyze the capacity of our metric to characterize omissions and hallucinations on the systems' outputs. To this end, we follow the methodology introduced in Section 3.4 and compute the estimated omission and hallucination rates w.r.t to the capacity of underlying rankers. Omission rate is the number of times an input value v was considered omitted by our metric, over the set of n input values. Hallucinations rate is the number of times a value v from the training set has been improperly ranked at the top n expected values 7 . We thus average the rates per system overall 223 examples for the WebNLG's test set. In this experiment, we are interested in comparing Neural vs Non-Neural architectures, and see if our metric captures the implicit omission/hallucination behavior of neural generators. Results are displayed in Table 6 . On the gold annotations, we obtained 0.41 and 0.37 omission and hallucination rates respectively. This is expected mostly because the underlying rankers are not perfect. While achieving 0.88 average precision on the gold annotations, this is an optimistic estimation of the ranking capabilities (see Section 3.4 Shimorina et al. (2018) , the NLI metric proposed by Du\u0161ek and Kasner (2020) , our metric trained and untrained. Our proposed metric, in every cases except one, is closer to human evaluation than the NLI metric.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1161, |
|
"end": 1184, |
|
"text": "Shimorina et al. (2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1214, |
|
"end": 1237, |
|
"text": "Du\u0161ek and Kasner (2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 839, |
|
"end": 846, |
|
"text": "Table 6", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis of omissions and hallucinations", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "for a discussion on this topic). Also, according to Du\u0161ek and Kasner (2020) , there is some noise in the human evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 75, |
|
"text": "Du\u0161ek and Kasner (2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of omissions and hallucinations", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "Regarding the teams' statistics, we denote higher omission and hallucination rates for 3 out 5 neural systems. Interestingly, the estimated omission and hallucination rates of Adapt are quite high, while having a high semantic score from the human evaluation. On the contrary, Tilburg-NMT has low omission and hallucination rates while having a low human evaluation score. Melbourne has low omission and hallucination rates, which corroborates the fact that it was a good system. Non-Neural systems tend to be more stable on omissions and hallucinations, which is expected. In future experiments, we would like to compute the exact omission and hallucination rates of each team. While being a very time-consuming task, this evaluation will enable an in-depth analysis of omissions and hallucinations per system. This leads to the conclusion that, while being a first step at characterizing/quantifying omissions and hallucinations, more work has to be done towards this direction since it is a crucial evaluation aspect in D2T evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of omissions and hallucinations", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "In an era where deep learning models seem to be the norm, it is nonetheless legitimate to ask ourselves if training such a metric is worth the shot. In the case of WebNLG, where the data is extracted from Wikidata (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) , a proxy of Wikipedia, and the underlying transformer models of the metric have been pre-trained Table 6 : Omission and Hallucination rates per team. We compare the omission and hallucination rates between WebNLG (the gold standard), Neural and Non-Neural architectures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 244, |
|
"text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 350, |
|
"text": "Table 6", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Is Fine-Tuning Worth the Shot?", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "on Wikipedia, we see a negligible gain from finetuning the metric. The lexical field is pretty much the same, and the reference text does not show many signs of paraphrasing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Is Fine-Tuning Worth the Shot?", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this paper, we introduced a new trainable automation evaluation metric relying on ranking models which is specific to the D2T setting. To the best of our knowledge, this is the first metric that naturally quantifies omissions and hallucinations of neural textual generators that can also handle paraphrases. Characterizing omissions and hallucinations are of important matter in a sensitive area such as the legal domain. This lack of characterization is often a blocker to the use of recent neural generator models in the legal field. We hope that this metric will also promote the use of recent advances in neural textual generation in sensitive domains such as the medical field. In our future works, we would like to use our metric to guide the decoding steps of neural D2T generators in order to produce faithful textual descriptions. As previously mentioned, the metric that we proposed is well suited to semantically and factually assess a generator's performance in cases where the tables can be associated with multiple references, and the tables' values contain textual utterances. We wish to generalize the way our method ranks table values through relevance matching (Guo et al., 2016) . This would be highly desirable in cases where table values are only up to a few tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1182, |
|
"end": 1200, |
|
"text": "(Guo et al., 2016)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The information extraction scheme is relevant where the table values can be framed as triplets, where a model tries to put the different extracted entities into relation (e.g. the Rotowire dataset). Our method generalizes the table reconstruction step whereas one can freely design its own ranker model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use \u03b4 > 0 in our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In our experiments, we also tested a word co-occrurrence ranking, Elasticsearch (https://www.elastic.co/), as explained in Section 4.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that all examples have been translated from French to facilitate the comprehension of the reader.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Rates are computed w.r.t the hypotheses produced by a given system. For example, Vietnam only produced 55 hypotheses given 223 input tables. We thus considered the input values of the 55 input tables for the calculation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the reviewers for their thoughtful comments and suggestions. Nicolas is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", |
|
"authors": [ |
|
{ |
|
"first": "Satanjeev", |
|
"middle": [], |
|
"last": "Banerjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "65--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 65-72, Ann Ar- bor, Michigan. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Generating intelligible plumitifs descriptions: Use case application with ethical considerations", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Beauchemin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Garneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eve", |
|
"middle": [], |
|
"last": "Gaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre-Luc", |
|
"middle": [], |
|
"last": "D\u00e9ziel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Khoury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luc", |
|
"middle": [], |
|
"last": "Lamontagne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 13th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "15--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Beauchemin, Nicolas Garneau, Eve Gaumond, Pierre-Luc D\u00e9ziel, Richard Khoury, and Luc Lam- ontagne. 2020. Generating intelligible plumitifs de- scriptions: Use case application with ethical consid- erations. In Proceedings of the 13th International Conference on Natural Language Generation, pages 15-21, Dublin, Ireland. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8440--8451", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.747" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Handling divergent reference texts when evaluating table-to-text generation", |
|
"authors": [ |
|
{ |
|
"first": "Bhuwan", |
|
"middle": [], |
|
"last": "Dhingra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4884--4895", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1483" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Co- hen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884-4895, Flo- rence, Italy. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Evaluating semantic accuracy of data-to-text generation with natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Du\u0161ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zden\u011bk", |
|
"middle": [], |
|
"last": "Kasner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 13th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "131--137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ond\u0159ej Du\u0161ek and Zden\u011bk Kasner. 2020. Evaluating se- mantic accuracy of data-to-text generation with nat- ural language inference. In Proceedings of the 13th International Conference on Natural Language Gen- eration, pages 131-137, Dublin, Ireland. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Findings of the E2E NLG challenge", |
|
"authors": [ |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Du\u0161ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jekaterina", |
|
"middle": [], |
|
"last": "Novikova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Rieser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 11th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "322--328", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6539" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2018. Findings of the E2E NLG challenge. In Proceedings of the 11th International Conference on Natural Language Generation, pages 322-328, Tilburg University, The Netherlands. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Creating training corpora for NLG micro-planners", |
|
"authors": [ |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Gardent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anastasia", |
|
"middle": [], |
|
"last": "Shimorina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shashi", |
|
"middle": [], |
|
"last": "Narayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Perez-Beltrachini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "179--188", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1017" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating train- ing corpora for NLG micro-planners. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 179-188, Vancouver, Canada. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Plum2text: A french plumitifs-descriptions data-to-text dataset for natural language generation", |
|
"authors": [ |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Garneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eve", |
|
"middle": [], |
|
"last": "Gaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luc", |
|
"middle": [], |
|
"last": "Lamontagne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre-Luc", |
|
"middle": [], |
|
"last": "D\u00e9ziel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 18th International Conference on Artificial Intelligence and Law", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicolas Garneau, Eve Gaumond, Luc Lamontagne, and Pierre-Luc D\u00e9ziel. 2021. Plum2text: A french plumitifs-descriptions data-to-text dataset for natu- ral language generation. In Proceedings of the 18th International Conference on Artificial Intelligence and Law, Sao Paulo, Brazil. International Associa- tion for Artificial Intelligence and Law.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Template-based multilingual football reports generation using Wikidata as a knowledge base", |
|
"authors": [ |
|
{ |
|
"first": "Lorenzo", |
|
"middle": [], |
|
"last": "Gatti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Van Der Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mari\u00ebt", |
|
"middle": [], |
|
"last": "Theune", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 11th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "183--188", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6523" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lorenzo Gatti, Chris van der Lee, and Mari\u00ebt Theune. 2018. Template-based multilingual football reports generation using Wikidata as a knowledge base. In Proceedings of the 11th International Conference on Natural Language Generation, pages 183-188, Tilburg University, The Netherlands. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The GEM benchmark: Natural language generation, its evaluation and metrics", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Gehrmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tosin", |
|
"middle": [], |
|
"last": "Adewumi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karmanya", |
|
"middle": [], |
|
"last": "Aggarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pawan", |
|
"middle": [], |
|
"last": "Sasanka Ammanamanchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anuoluwapo", |
|
"middle": [], |
|
"last": "Aremu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bosselut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miruna-Adriana", |
|
"middle": [], |
|
"last": "Khyathi Raghavi Chandu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Clinciu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaustubh", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wanyu", |
|
"middle": [], |
|
"last": "Dhole", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Esin", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Durmus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [ |
|
"Chinenye" |
|
], |
|
"last": "Du\u0161ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varun", |
|
"middle": [], |
|
"last": "Emezue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "Gangal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tatsunori", |
|
"middle": [], |
|
"last": "Garbacea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yufang", |
|
"middle": [], |
|
"last": "Hashimoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Hou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harsh", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yangfeng", |
|
"middle": [], |
|
"last": "Jhamtani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shailza", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihir", |
|
"middle": [], |
|
"last": "Jolly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhruv", |
|
"middle": [], |
|
"last": "Kale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Faisal", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aman", |
|
"middle": [], |
|
"last": "Ladhak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mounica", |
|
"middle": [], |
|
"last": "Madaan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khyati", |
|
"middle": [], |
|
"last": "Maddela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saad", |
|
"middle": [], |
|
"last": "Mahajan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mahamood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prasad", |
|
"middle": [], |
|
"last": "Bodhisattwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [ |
|
"Henrique" |
|
], |
|
"last": "Majumder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angelina", |
|
"middle": [], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Mcmillan-Major", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mille", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 1st Workshop on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "96--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khy- athi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ond\u0159ej Du\u0161ek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tat- sunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mi- hir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan- Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, Jo\u00e3o Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM benchmark: Nat- ural language generation, its evaluation and metrics. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 96-120, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "End-to-end content and plan selection for data-to-text generation", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Gehrmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Falcon", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Henry", |
|
"middle": [], |
|
"last": "Elder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 11th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "46--56", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6505" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Gehrmann, Falcon Dai, Henry Elder, and Alexander Rush. 2018. End-to-end content and plan selection for data-to-text generation. In Proceed- ings of the 11th International Conference on Natu- ral Language Generation, pages 46-56, Tilburg Uni- versity, The Netherlands. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A deep relevance matching model for ad-hoc retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Jiafeng", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yixing", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qingyao", |
|
"middle": [], |
|
"last": "Ai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Bruce" |
|
], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM '16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--64", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2983323.2983769" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM '16, page 55-64, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Design of a knowledge-based report generator", |
|
"authors": [ |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Kukich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "21st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "145--150", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/981311.981340" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karen Kukich. 1983. Design of a knowledge-based re- port generator. In 21st Annual Meeting of the As- sociation for Computational Linguistics, pages 145- 150, Cambridge, Massachusetts, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Neural text generation from structured data with application to the biography domain", |
|
"authors": [ |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Lebret", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1203--1213", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1128" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R\u00e9mi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203-1213, Austin, Texas. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marjan", |
|
"middle": [], |
|
"last": "Ghazvininejad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdelrahman", |
|
"middle": [], |
|
"last": "Mohamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7871--7880", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.703" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "ROUGE: A package for automatic evaluation of summaries", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Text Summarization Branches Out", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", |
|
"authors": [ |
|
{ |
|
"first": "Chia-Wei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Lowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iulian", |
|
"middle": [], |
|
"last": "Serban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Noseworthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Charlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joelle", |
|
"middle": [], |
|
"last": "Pineau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2122--2132", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1230" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Nose- worthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2122-2132, Austin, Texas. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Towards an automatic Turing test: Learning to evaluate dialogue responses", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Lowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Noseworthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iulian", |
|
"middle": [], |
|
"last": "Vlad Serban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Angelard-Gontier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joelle", |
|
"middle": [], |
|
"last": "Pineau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1116--1126", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1103" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Lowe, Michael Noseworthy, Iulian Vlad Ser- ban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic Tur- ing test: Learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1116-1126, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text", |
|
"authors": [ |
|
{ |
|
"first": "Kathleen", |
|
"middle": [ |
|
"R." |
|
], |
|
"last": "McKeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kathleen R. McKeown. 1985. Text Generation: Using Discourse Strategies and Focus Constraints to Gen- erate Natural Language Text. Cambridge University Press, USA.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Step-by-step: Separating planning from realization in neural data-to-text generation", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Moryossef", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2267--2277", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1236" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267-2277, Minneapolis, Minnesota. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Why we need new evaluation metrics for NLG", |
|
"authors": [ |
|
{ |
|
"first": "Jekaterina", |
|
"middle": [], |
|
"last": "Novikova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Du\u0161ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [ |
|
"Cercas" |
|
], |
|
"last": "Curry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Rieser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2241--2252", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1238" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cer- cas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252, Copenhagen, Denmark. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1073083.1073135" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Data-to-text generation with content selection and planning", |
|
"authors": [ |
|
{ |
|
"first": "Ratish", |
|
"middle": [], |
|
"last": "Puduppully", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "6908--6915", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1609/aaai.v33i01.33016908" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with content selection and planning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):6908-6915.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "E2E NLG challenge: Neural models vs. templates", |
|
"authors": [ |
|
{ |
|
"first": "Yevgeniy", |
|
"middle": [], |
|
"last": "Puzikov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 11th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "463--471", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6557" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yevgeniy Puzikov and Iryna Gurevych. 2018. E2E NLG challenge: Neural models vs. templates. In Proceedings of the 11th International Conference on Natural Language Generation, pages 463-471, Tilburg University, The Netherlands. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3982--3992", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1410" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Making monolingual sentence embeddings multilingual using knowledge distillation", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual us- ing knowledge distillation. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A structured review of the validity of BLEU", |
|
"authors": [ |
|
{ |
|
"first": "Ehud", |
|
"middle": [], |
|
"last": "Reiter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Computational Linguistics", |
|
"volume": "44", |
|
"issue": "3", |
|
"pages": "393--401", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/coli_a_00322" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ehud Reiter. 2018. A structured review of the validity of BLEU. Computational Linguistics, 44(3):393- 401.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "An investigation into the validity of some metrics for automatically evaluating natural language generation systems", |
|
"authors": [ |
|
{ |
|
"first": "Ehud", |
|
"middle": [], |
|
"last": "Reiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anja", |
|
"middle": [], |
|
"last": "Belz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Computational Linguistics", |
|
"volume": "35", |
|
"issue": "4", |
|
"pages": "529--558", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/coli.2009.35.4.35405" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evalu- ating natural language generation systems. Compu- tational Linguistics, 35(4):529-558.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Building applied natural language generation systems", |
|
"authors": [ |
|
{ |
|
"first": "Ehud", |
|
"middle": [], |
|
"last": "Reiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Dale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Natural Language Engineering", |
|
"volume": "3", |
|
"issue": "1", |
|
"pages": "57--87", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1017/S1351324997001502" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Lan- guage Engineering, 3(1):57-87.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "A survey of evaluation metrics used for nlg systems", |
|
"authors": [ |
|
{ |
|
"first": "Ananya", |
|
"middle": [ |
|
"B." |
|
], |
|
"last": "Sai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akash", |
|
"middle": [ |
|
"Kumar" |
|
], |
|
"last": "Mohankumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitesh", |
|
"middle": [ |
|
"M." |
|
], |
|
"last": "Khapra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ananya B. Sai, Akash Kumar Mohankumar, and Mitesh M. Khapra. 2020. A survey of eval- uation metrics used for nlg systems. ArXiv, abs/2008.12009.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "BLEURT: Learning robust metrics for text generation", |
|
"authors": [ |
|
{ |
|
"first": "Thibault", |
|
"middle": [], |
|
"last": "Sellam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7881--7892", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.704" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "WebNLG Challenge: Human Evaluation Results", |
|
"authors": [ |
|
{ |
|
"first": "Anastasia", |
|
"middle": [], |
|
"last": "Shimorina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Gardent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shashi", |
|
"middle": [], |
|
"last": "Narayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Perez-Beltrachini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anastasia Shimorina, Claire Gardent, Shashi Narayan, and Laura Perez-Beltrachini. 2018. WebNLG Chal- lenge: Human Evaluation Results. Technical report, Loria & Inria Grand Est.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Joint learning of a dual SMT system for paraphrase generation", |
|
"authors": [ |
|
{ |
|
"first": "Hong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "38--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hong Sun and Ming Zhou. 2012. Joint learning of a dual SMT system for paraphrase generation. In Pro- ceedings of the 50th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 38-42, Jeju Island, Korea. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Ruber: An unsupervised method for automatic evaluation of open-domain dialog systems", |
|
"authors": [ |
|
{ |
|
"first": "Chongyang", |
|
"middle": [], |
|
"last": "Tao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lili", |
|
"middle": [], |
|
"last": "Mou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. Ruber: An unsupervised method for au- tomatic evaluation of open-domain dialog systems. AAAI Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Wikidata: A free collaborative knowledgebase", |
|
"authors": [ |
|
{ |
|
"first": "Denny", |
|
"middle": [], |
|
"last": "Vrande\u010di\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Kr\u00f6tzsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Commun. ACM", |
|
"volume": "57", |
|
"issue": "10", |
|
"pages": "78--85", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2629489" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: A free collaborative knowledgebase. Commun. ACM, 57(10):78-85.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Challenges in data-to-document generation", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2253--2263", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1239" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2253-2263, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Learning neural templates for text generation", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3174--3187", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1356" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text genera- tion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3174-3187, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Round-trip evaluation in the table-hypothesis setting (left) and reference-hypothesis setting (right)." |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "A comparison across human evaluation (semantic) on systems' outputs of" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Ranker</td><td>hypothesis h</td><td>Ranker</td></tr><tr><td>M v</td><td/><td>M r</td></tr><tr><td/><td>Generator</td><td/></tr><tr><td>Ranked values</td><td/><td>Ranked references</td></tr><tr><td>value v 1</td><td>1</td><td>reference r 1</td></tr><tr><td>value v 4</td><td/><td>reference r 4</td></tr><tr><td>value v 2 value v 3</td><td>value v 3</td><td>reference r 3 reference r 2</td></tr><tr><td>...</td><td>associated reference r 1</td><td>...</td></tr><tr><td>value v n</td><td>similar reference r 2</td><td>reference r m</td></tr></table>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Free</td><td>Table</td></tr><tr><td>Untrained</td><td>BLEU, ROUGE, METEOR</td><td>PARENT, BLEU-T/iBLEU, NLI, Ours</td></tr><tr><td>Trained</td><td>BLEURT</td><td>Ours</td></tr></table>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Results of \"fixing the metric\" on Plum2Text</td></tr><tr><td>using Elasticsearch, SBERT Untrained and SBERT</td></tr><tr><td>Trained as different ranking models.</td></tr><tr><td>ously proposed.</td></tr></table>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>h</td><td>)</td><td>t</td><td colspan=\"2\">Rank of v</td></tr><tr><td/><td/><td/><td>20</td><td>1</td></tr><tr><td>the accused pleaded guilty to the fol-lowing charges : to have, on #DATE, in</td><td/><td colspan=\"2\">Section 4 -27</td><td>1</td></tr><tr><td>its possession 0,61 gram of cannabis</td><td/><td/><td/></tr></table>", |
|
"html": null, |
|
"text": "that, on the Gold annotations, ES has decent performance on both h ) t and h ) r. Supporting the findings of Reimers andReference ri A Table ti's Value v ES SBERTthe accused is charged with sexual touching of his stepdaughter when she was between 10 and 15 years old.Section 151 -Sexual interference; Any person who, for a sexual purpose, directly or indirectly touches, with any part of the body or with any object, any part of the body of a child under the age of sixteen years Possession of substance. Except as authorized by the regulations, the possession of any substance listed in Appendix I, II or III is prohibited." |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>h</td><td>)</td><td>r</td><td colspan=\"2\">Rank of rj</td></tr><tr><td>Reference ri</td><td/><td>Paraphrased Reference rj|di,j = 1.0</td><td>ES</td><td>SBERT</td></tr><tr><td>he denies any sexual act committed.</td><td/><td>the accused states that he has nothing to re-proach himself for.</td><td>N/A</td><td>3</td></tr><tr><td>PER pleaded guilty to computer luring of six teenage girls between the ages of 13 and 17.</td><td/><td>a guilty plea [...] to communicating by means of sixteen [...] of a computer with x, a person under the age</td><td>N/A</td><td>4</td></tr></table>", |
|
"html": null, |
|
"text": "Qualitative results of ES and SBERT fine-tuned model in the h ) t setting. We illustrate the ability of a fine-tuned ranking model to properly rank a particular value v in the table t i given the hypothesis h i (in the case of Gold annotations, r i )." |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Evaluation of PARENT on the Original and</td></tr><tr><td>a Augmented version of Plum2Text (Garneau et al.,</td></tr><tr><td>2021)</td></tr></table>", |
|
"html": null, |
|
"text": "" |
|
} |
|
} |
|
} |
|
} |