{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:29:07.272439Z"
},
"title": "It's Common Sense, isn't it? Demystifying Human Evaluations in Commonsense-enhanced NLG systems",
"authors": [
{
"first": "Miruna",
"middle": [],
"last": "Clinciu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heriot-Watt University",
"location": {
"settlement": "Edinburgh",
"country": "Scotland, UK"
}
},
"email": ""
},
{
"first": "",
"middle": [],
"last": "Dimitra Gkatzia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Edinburgh Napier University",
"location": {
"settlement": "Edinburgh",
"country": "Scotland, UK"
}
},
"email": ""
},
{
"first": "Saad",
"middle": [],
"last": "Mahamood",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Common sense is an integral part of human cognition which allows us to make sound decisions, communicate effectively with others and interpret situations and utterances. Endowing AI systems with commonsense knowledge capabilities will help us get closer to creating systems that exhibit human intelligence. Recent efforts in Natural Language Generation (NLG) have focused on incorporating commonsense knowledge through large-scale pretrained language models or by incorporating external knowledge bases. Such systems exhibit reasoning capabilities without common sense being explicitly encoded in the training set. These systems require careful evaluation, as they incorporate additional resources during training which adds additional sources of errors. Additionally, human evaluation of such systems can have significant variation, making it impossible to compare different systems and define baselines. This paper aims to demystify human evaluations of commonsenseenhanced NLG systems by proposing the Commonsense Evaluation Card (CEC), a set of recommendations for evaluation reporting of commonsense-enhanced NLG systems, underpinned by an extensive analysis of human evaluations reported in the recent literature.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Common sense is an integral part of human cognition which allows us to make sound decisions, communicate effectively with others and interpret situations and utterances. Endowing AI systems with commonsense knowledge capabilities will help us get closer to creating systems that exhibit human intelligence. Recent efforts in Natural Language Generation (NLG) have focused on incorporating commonsense knowledge through large-scale pretrained language models or by incorporating external knowledge bases. Such systems exhibit reasoning capabilities without common sense being explicitly encoded in the training set. These systems require careful evaluation, as they incorporate additional resources during training which adds additional sources of errors. Additionally, human evaluation of such systems can have significant variation, making it impossible to compare different systems and define baselines. This paper aims to demystify human evaluations of commonsenseenhanced NLG systems by proposing the Commonsense Evaluation Card (CEC), a set of recommendations for evaluation reporting of commonsense-enhanced NLG systems, underpinned by an extensive analysis of human evaluations reported in the recent literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Commonsense knowledge is vital for human communication, as it helps us make inferences without explicitly mentioning the context. Recently, there has been an interest in developing Natural Language Generation (NLG) systems that exhibit commonsense abilities (e.g. (Lin et al., 2020) ). Although everyone understands what common sense is, defining it remains a challenge as it is highly context-dependent. Common sense can be defined as \"simple wisdom\" (Oxford English Dictionary * * Equal Contribution online), \"the ability to use good judgment in making decisions and to live in a reasonable and safe way\" (Cambridge dictionary), or as a \"sound and prudent judgment based on a simple perception of the situation or facts\" (Mirriam Webster). Common sense involves language understanding and reasoning abilities, representing a key factor for establishing effective interactions between humans and machines (Minsky, 1991) . In his pioneering work, McCarthy (1959) proposes that \"a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows\".",
"cite_spans": [
{
"start": 264,
"end": 282,
"text": "(Lin et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 906,
"end": 920,
"text": "(Minsky, 1991)",
"ref_id": "BIBREF19"
},
{
"start": 947,
"end": 962,
"text": "McCarthy (1959)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditionally, commonsense knowledge has been injected in NLG systems either implicitly in the form of rules and/or explicitly with semantic representations in the form of external knowledge bases or ontologies. For instance, expert domain NLG systems (such as the BabyTalk system (Portet et al., 2008) ) have incorporated external knowledge in the form of a clinical ontology. In these expert domain NLG systems, knowledge (which might include procedural knowledge) is represented in rules that are built into the system and have been acquired through experts via interviews, observations or other approaches (Reiter et al., 2003) . Most recent challenges have focused on injecting commonsense knowledge into neural NLG models in two ways: through pre-trained models and through utilising commonsense graphs or knowledge bases. The former assumes that pre-trained models already contain commonsense knowledge (Petroni et al., 2019) . The latter incorporate entity relationships derived from semantic graphs (e.g. Concept-Net (Speer et al., 2016) ) or knowledge bases (e.g. (Sydorova et al., 2019) ).",
"cite_spans": [
{
"start": 281,
"end": 302,
"text": "(Portet et al., 2008)",
"ref_id": "BIBREF24"
},
{
"start": 610,
"end": 631,
"text": "(Reiter et al., 2003)",
"ref_id": "BIBREF30"
},
{
"start": 910,
"end": 932,
"text": "(Petroni et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 1026,
"end": 1046,
"text": "(Speer et al., 2016)",
"ref_id": "BIBREF34"
},
{
"start": 1074,
"end": 1097,
"text": "(Sydorova et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is clear that the incorporation of external knowledge of some form has always been at the heart of NLG system development. In this paper, we are interested in examining how commonsenseenhanced NLG systems are evaluated and whether the accuracy of the underlying commonsense knowledge is assessed by the system creators. To our knowledge, there are no automatic metrics available for commonsense evaluation, and therefore we focus only on human evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Human evaluation is an area that has received an increasing amount of scrutiny within the wider NLG research community. Previous work has highlighted issues with regards to missing details in evaluations, lack of proper analysis of results obtained, variability in the use of names and definitions of evaluated aspects of output quality (van der Lee et al., 2019; Amidei et al., 2018) and a mismatch on evaluation methods chosen which is correlated with the publication venue rather than the NLG task (Gkatzia and Mahamood, 2015) . After examining the last twenty years of human evaluations in NLG, recent survey work has found systemic issues with high levels of diversity of evaluation approaches, inconsistencies and variability in quality criterion names, missing definitions, and fundamental reporting gaps (Howcroft et al., 2020) . These issues mean there is a pressing need to better understand the state of human evaluations in other niche areas of NLG such as those systems enhanced with commonsense knowledge.",
"cite_spans": [
{
"start": 337,
"end": 363,
"text": "(van der Lee et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 364,
"end": 384,
"text": "Amidei et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 501,
"end": 529,
"text": "(Gkatzia and Mahamood, 2015)",
"ref_id": "BIBREF7"
},
{
"start": 812,
"end": 835,
"text": "(Howcroft et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this paper are three-fold: (1) we firstly present an annotated dataset of papers reporting commonsense-enhanced NLG systems published between 2018-2020 in ACL conferences; (2) we present a detailed analysis on human evaluation including reporting on what criteria researchers have most commonly used and whether they have evaluated the underlying commonsense knowledge on its own right and through the generated text; and (3) finally we present the Commonsense Evaluation Card, a set of recommendations for human evaluation reporting of commonsense-enhanced NLG systems with the aim to improve not only reproducibility but also improve understanding of such systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NLG systems have typically been built with the aim of integrating some form of expertise in their application domain (Jacobs, 1986; Reiter and Dale, 1997) . However, as NLG systems find greater general use cases there is a need to incorporate a form of knowledge that is much broader to make up for the differences between human and machine language understanding in decision making, known as common sense (Davis and Marcus, 2015; Lin et al., 2020; Zhang et al., 2020) .",
"cite_spans": [
{
"start": 117,
"end": 131,
"text": "(Jacobs, 1986;",
"ref_id": "BIBREF12"
},
{
"start": 132,
"end": 154,
"text": "Reiter and Dale, 1997)",
"ref_id": "BIBREF29"
},
{
"start": 406,
"end": 430,
"text": "(Davis and Marcus, 2015;",
"ref_id": "BIBREF5"
},
{
"start": 431,
"end": 448,
"text": "Lin et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 449,
"end": 468,
"text": "Zhang et al., 2020)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense Knowledge in NLG",
"sec_num": "2.1"
},
{
"text": "The incorporation of commonsense knowledge is considered a challenging task within AI. This challenge is due to the fact that commonsense reasoning or knowledge is considered a black box, as there is uncertainty on how to represent knowledge in order to solve commonsense reasoning problems (Zhang et al., 2020) . The reliance on existing knowledge bases to incorporate this type of broadbased knowledge might not be sufficient as it may, in many cases, fail to incorporate explicit fundamental knowledge (Tandon et al., 2018; Ji et al., 2020) .",
"cite_spans": [
{
"start": 291,
"end": 311,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF44"
},
{
"start": 505,
"end": 526,
"text": "(Tandon et al., 2018;",
"ref_id": "BIBREF38"
},
{
"start": 527,
"end": 543,
"text": "Ji et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense Knowledge in NLG",
"sec_num": "2.1"
},
{
"text": "Pre-trained models, on the other hand, have capabilities of learning relational patterns and can achieve commonsense reasoning without explicit knowledge representation, as conveyed in the traditional pipelines (Ji et al., 2020; Vinyals and Le, 2015) . However, it remains unclear how the reasoning is performed and how prior knowledge is learned in the training phase (Rajani et al., 2020).",
"cite_spans": [
{
"start": 211,
"end": 228,
"text": "(Ji et al., 2020;",
"ref_id": "BIBREF13"
},
{
"start": 229,
"end": 250,
"text": "Vinyals and Le, 2015)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense Knowledge in NLG",
"sec_num": "2.1"
},
{
"text": "In the last few years, several attempts have been made to incorporate commonsense knowledge in NLG systems, using external knowledge bases, such as ConceptNet or Atomic (Bauer et al., 2018; Ji et al., 2020) . ConceptNet consists of nearly 120K triples obtained from the Open Mind Commonsense knowledge entries in ConceptNet 5 (Speer and Havasi, 2012 ) that contains world facts and informal relationships between common concepts that convey some prior knowledge (Zhou et al., 2018) . ATOMIC is an atlas of everyday commonsense knowledge and contains 880k triples about causes and effects of human activities and annotated by crowd-sourced workers. ATOMIC is organized as if-then relations and can be categorised based on causal relations Guan et al., 2020) . COMET is a framework for automatic construction of commonsense knowledge bases, known also as COMmonsense Transformers. This model generates commonsense knowledge based on pre-trained language models (Bosselut et al., 2019) . Recent research has also focused on injecting triples into sentences in order to create domainspecific knowledge (Liu et al., 2020; Wang et al., 2020b) or incorporating commonsense knowledge directly in the training data (Huang et al., 2019) .",
"cite_spans": [
{
"start": 169,
"end": 189,
"text": "(Bauer et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 190,
"end": 206,
"text": "Ji et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 326,
"end": 349,
"text": "(Speer and Havasi, 2012",
"ref_id": "BIBREF35"
},
{
"start": 462,
"end": 481,
"text": "(Zhou et al., 2018)",
"ref_id": "BIBREF45"
},
{
"start": 738,
"end": 756,
"text": "Guan et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 959,
"end": 982,
"text": "(Bosselut et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 1098,
"end": 1116,
"text": "(Liu et al., 2020;",
"ref_id": "BIBREF44"
},
{
"start": 1117,
"end": 1136,
"text": "Wang et al., 2020b)",
"ref_id": null
},
{
"start": 1206,
"end": 1226,
"text": "(Huang et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "External Knowledge",
"sec_num": "2.2"
},
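A minimal sketch (our own illustration, not code from any of the surveyed systems) of how a knowledge-base triple could be verbalised into a sentence before being injected into training data; the Triple class, relation names and templates are hypothetical simplifications of ConceptNet/ATOMIC-style relations:

```python
# Hypothetical sketch of triple-to-sentence verbalisation for knowledge injection.
from dataclasses import dataclass

@dataclass
class Triple:
    head: str
    relation: str
    tail: str

# Simplified, illustrative templates for a few ConceptNet/ATOMIC-style relations.
TEMPLATES = {
    "UsedFor": "{head} is used for {tail}.",
    "AtLocation": "{head} can be found at {tail}.",
    "xEffect": "{head}; as a result, {tail}.",  # ATOMIC-style if-then relation
}

def verbalise(triple: Triple) -> str:
    """Turn a triple into a natural-language statement via a relation template."""
    template = TEMPLATES.get(triple.relation, "{head} {relation} {tail}.")
    return template.format(head=triple.head, relation=triple.relation, tail=triple.tail)

print(verbalise(Triple("knife", "UsedFor", "cutting bread")))
# -> "knife is used for cutting bread."
```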
{
"text": "An alternative to using explicit external models for commonsense knowledge is the use of PTLMs. Training deep learning models requires extensive amounts of data to prevent over-fitting. This can be problematic for NLG tasks, where collecting and annotating data represents a time-consuming and costly process (Qiu et al., 2020) . PTLMs, on the other hand, have the potential to solve the problem of data scarcity, as they do not rely on many resources for training models' parameters.",
"cite_spans": [
{
"start": 309,
"end": 327,
"text": "(Qiu et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained language models (PTLMs)",
"sec_num": "2.3"
},
{
"text": "In the field of NLG, PTLMs have been applied to open-ended non-expert domains, such as question answering, where commonsense knowledge should serve as a link between the performance of these models and human evaluation (Lin et al., 2019) . However, transferring commonsense knowledge using PTLMs comes with certain limitations corresponding to each pre-trained model.",
"cite_spans": [
{
"start": 219,
"end": 237,
"text": "(Lin et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained language models (PTLMs)",
"sec_num": "2.3"
},
{
"text": "PTLMs using domain-specific information from knowledge graphs or unstructured information are highly dependent on the training data quality. For instance, the knowledge extracted from the triples is unable to capture semantic relationships between entities (Zhou et al., 2018; Ji et al., 2020) and solving this can instil commonsense knowledge in NLG systems.",
"cite_spans": [
{
"start": 257,
"end": 276,
"text": "(Zhou et al., 2018;",
"ref_id": "BIBREF45"
},
{
"start": 277,
"end": 293,
"text": "Ji et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained language models (PTLMs)",
"sec_num": "2.3"
},
{
"text": "An ongoing discussion about the inherent biases of the training data exposed different types of bias that significantly influence natural language generation systems, such as gender bias, geographical and political bias among others (Papakyriakopoulos et al., 2020) . Also, the frequency of the words that influence training data might not correspond to the real-life scenarios and can lead to false facts (Shah et al., 2019) . This is also known as \"the black sheep problem\": when querying a system using GPT\u22123 to tell the colour of sheep, it will suggest \"black\" as often as \"white\", being impossible to distinguish between the linguistic meaning and the visual recognition of \"a black sheep\" (Gordon and Van Durme, 2013). Solving these issues can represent a first step in building NLG systems that integrate commonsense knowledge.",
"cite_spans": [
{
"start": 233,
"end": 265,
"text": "(Papakyriakopoulos et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 406,
"end": 425,
"text": "(Shah et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained language models (PTLMs)",
"sec_num": "2.3"
},
{
"text": "Understanding commonsense knowledge of natural language text is still a limited task. For humans, it is easy to understand both implicit and explicit meanings of a given sentence, whereas for machines this still remains a challenging task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense knowledge evaluation",
"sec_num": "2.4"
},
{
"text": "Due to the uncertainty of defining what implies commonsense knowledge in a natural language text, human evaluation by specialists or lay users might be the only way of providing a more comprehensive evaluation. On the other hand, human evaluation of commonsense knowledge can have some drawbacks as humans may have conflicting opinions and perspectives. In addition, the process of evaluating with humans can be time-consuming and costly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense knowledge evaluation",
"sec_num": "2.4"
},
{
"text": "Many papers report automatic evaluations of pretrained models for specific commonsense knowledge tasks. However, based on a gold standard, natural language text annotated by humans as correct for a given task may not capture all of the commonsense knowledge nuances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense knowledge evaluation",
"sec_num": "2.4"
},
{
"text": "We used the PRISMA method (Moher et al., 2009) to select papers to be included in this study following Howcroft et al. (2020) and (Reiter, 2018) . We began by considering all papers published in ACL venues (ACL, CL, CoNLL, EMNLP, Findings, NAACL, SemEval, *SEM, TACL and INLG) in the past three years (2018-2020). We screened the papers using the following search terms (in their title): commonsense, generation, reasoning, domain knowledge, expert, expertise, sensible, ontology, knowledge. This left us with 129 papers. From these, we randomly pick 55 papers that were annotated by the authors of this paper, following the annotation scheme proposed by Howcroft et al. (2020) . Papers on commonsense reasoning can either focus on language generation or understanding. For instance, commonsense reasoning can be ad-dressed as a classification task, where based on the context, a reasoning system can choose an option from a set of options (Talmor et al., 2019) . During annotation, such papers were omitted.",
"cite_spans": [
{
"start": 26,
"end": 46,
"text": "(Moher et al., 2009)",
"ref_id": "BIBREF21"
},
{
"start": 103,
"end": 125,
"text": "Howcroft et al. (2020)",
"ref_id": null
},
{
"start": 130,
"end": 144,
"text": "(Reiter, 2018)",
"ref_id": "BIBREF28"
},
{
"start": 655,
"end": 677,
"text": "Howcroft et al. (2020)",
"ref_id": null
},
{
"start": 940,
"end": 961,
"text": "(Talmor et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paper Selection & Annotation",
"sec_num": "3"
},
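A minimal sketch (hypothetical helper and toy titles, assuming Python; the actual selection also involved venue/year filtering and manual annotation) of the title-based screening step described above:

```python
# Hypothetical title-screening helper: keep a paper if its title contains any search term.
import re

SEARCH_TERMS = [
    "commonsense", "generation", "reasoning", "domain knowledge",
    "expert", "expertise", "sensible", "ontology", "knowledge",
]

def passes_screening(title: str) -> bool:
    """True if the (lower-cased) title contains at least one of the search terms."""
    title = title.lower()
    return any(re.search(r"\b" + re.escape(term) + r"\b", title) for term in SEARCH_TERMS)

titles = [
    "CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning",
    "A Paper About Something Entirely Unrelated",
]
print([t for t in titles if passes_screening(t)])  # keeps only the first title
```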
{
"text": "Following Howcroft et al. (2020) , papers were annotated using the three broad categories: (1) system attributes (input, output, task and language) which describe evaluated NLG systems, (2) quality criterion attributes (Verbatim Criterion Name, Definition and Paraphrase), and (3) operationalisation attributes (e.g. type of instruments, type of collected data etc.) which specify how evaluations are performed. In addition to these, we introduced a fourth category, commonsense knowledge, with five new annotation items which are relevant for commonsense-enhanced NLG, namely:",
"cite_spans": [
{
"start": 10,
"end": 32,
"text": "Howcroft et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paper Selection & Annotation",
"sec_num": "3"
},
{
"text": "\u2022 Definition of commonsense knowledge: free text field. Here the annotators either copied the definition as provided in the paper or specified \"None\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paper Selection & Annotation",
"sec_num": "3"
},
{
"text": "\u2022 Type of commonsense knowledge: free text field. Here the annotators had to specify the type of commonsense knowledge that the paper tried to address, for instance, sarcasm or reasoning about the order of events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paper Selection & Annotation",
"sec_num": "3"
},
{
"text": "\u2022 External knowledge: free text field. Examples of external knowledge can include commonsense knowledge bases such as ConceptNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paper Selection & Annotation",
"sec_num": "3"
},
{
"text": "\u2022 Was the knowledge evaluated in the generated text? (Yes/No): The annotators specified whether the underlying knowledge was evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paper Selection & Annotation",
"sec_num": "3"
},
{
"text": "\u2022 Criterion name for evaluation of external knowledge: The annotators could specify the criterion used to evaluate the knowledge base, for instance in terms of coverage or correctness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paper Selection & Annotation",
"sec_num": "3"
},
{
"text": "These additional items were deemed important to investigate whether there is a relationship between the human evaluation criteria and the type of commonsense knowledge covered by the NLG system. In addition, when evaluating generated text, it is vital to know whether errors in the generated text arise from the underlying data or the text generator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paper Selection & Annotation",
"sec_num": "3"
},
{
"text": "Following (Howcroft et al., 2020) , ten papers were annotated by all three annotators and Inter-Annotator Agreement (IAA) was calculated. The papers were randomly selected by proportionally accounting for the year and the publication venue.",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "(Howcroft et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "3.1"
},
{
"text": "Pre-processing: We pre-processed the annotations by normalising capitalisation, spelling and stripping extra spaces. We also removed papers that did not report a system that generates text. Calculating agreement: The data resulted from the annotation process was a 10 (papers) \u00d7n (evaluation criteria identified by annotator for each paper) \u00d719 (attribute value pairs) data frame, for each of the annotators. As such, IAA aims to measure the agreement across all annotators given the aforementioned data frames. The agreement was calculated using Krippendorff's alpha with Jaccard as the distance measure (Artstein and Poesio, 2008) .",
"cite_spans": [
{
"start": 605,
"end": 632,
"text": "(Artstein and Poesio, 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "3.1"
},
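A minimal sketch (toy data, assuming Python and NLTK; not the authors' released code) of how Krippendorff's alpha with Jaccard as the distance measure can be computed over multi-valued annotation attributes:

```python
# Krippendorff's alpha with a Jaccard distance over set-valued labels (illustrative data).
from nltk.metrics.agreement import AnnotationTask
from nltk.metrics.distance import jaccard_distance

# Each record is (annotator, item, label). Labels are frozensets so that the
# Jaccard distance can reward partial overlap between annotators.
data = [
    ("a1", "paper01:criterion", frozenset({"fluency", "coherence"})),
    ("a2", "paper01:criterion", frozenset({"fluency"})),
    ("a3", "paper01:criterion", frozenset({"fluency", "coherence"})),
    ("a1", "paper01:system_task", frozenset({"question answering"})),
    ("a2", "paper01:system_task", frozenset({"question answering"})),
    ("a3", "paper01:system_task", frozenset({"dialogue turn generation"})),
]

task = AnnotationTask(data=data, distance=jaccard_distance)
print(f"Krippendorff's alpha (Jaccard distance): {task.alpha():.3f}")
```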
{
"text": "Results are presented in Table 1 . For system attributes (system input, system output and system task) IAA agreement is good, although the score for the system task is lower. The latter might be affected by the multitude of tasks presented in papers, as the evolution of NLG led to the need for proposing different tasks for generating text in new domains. Surprisingly, external knowledge attributes received a low IAA agreement which might indicate that there is vagueness in what constitutes external knowledge. Also, relatively low agreement scores were obtained for the two attributes elicit form and instrument type. The majority of the papers do not provide enough detail about the operationalisation attributes; our findings are not very different from the ones presented by Howcroft et al. (2020 ",
"cite_spans": [
{
"start": 783,
"end": 804,
"text": "Howcroft et al. (2020",
"ref_id": null
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "3.1"
},
{
"text": "In this section, we present the results from the analysis of the annotated papers. The annotations and the developed code can be found in the projects' repository 1 . Count fluency 6 coherence 4 informativeness 3 grammaticality, correctness, diversity, appropriateness, accuracy 2 commonsense, topic-consistency, sarcasticness, interpretability, engagement, commonsense plausibility, commonsense reasoning, reasonability, novelty, usefulness, intention, information, naturalness, logicality, humour, relevance, common ground, answerability, plausible, effect, validity, quality, eventcentered commonsense reasoning, bestworst scaling, consistency, attribute, creativity, effectiveness 1 mixed: grammatical correctness and fluency 2 none given 3 The 34 papers in the dataset corresponded to 70 individual evaluations, amounting to 2.05 evaluations per paper. This dataset was annotated between three annotators taking approximately 20 minutes or more to annotate each paper.",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 290,
"text": "Count fluency 6 coherence 4 informativeness 3 grammaticality, correctness, diversity, appropriateness, accuracy 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis and Results",
"sec_num": "4"
},
{
"text": "In the following subsections we will first report the paper and system level statistics (Section 4.1), followed by evaluation-level statistics for the quality-criterion (Section 4.2), then the operationalisation attributes (Section 4.3), and finally the commonsense criteria findings (Section 4.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VERBATIM CRITERION NAME",
"sec_num": null
},
{
"text": "All the papers analysed reported English as the system language. Only two papers in our dataset reported Chinese as an additional system language to English. All the papers in our dataset were published recently between 2018-2020 with most being published in 2019 (58%). Figure 1 and Appendix A gives a break down of the publication venues for our dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 279,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Papers and Systems",
"sec_num": "4.1"
},
{
"text": "In terms of the system task attribute, our analysis reveals that question answering and dialogue turn generation are the top two system task types within our dataset. This differs from the findings made by Howcroft et al. (2020) who found that data-to-text generation as being the most frequent system task in their analysis leading to 50% more than second-placed dialogue turn generation. This difference may indicate that commonsense NLG is more focused on domain problems with direct applicability to general end-users. Appendix B shows the system input, Appendix C for system output, and Appendix D task frequencies in more detail.",
"cite_spans": [
{
"start": 206,
"end": 228,
"text": "Howcroft et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Papers and Systems",
"sec_num": "4.1"
},
{
"text": "In this section, we present the results related to the quality criteria, focusing on the verbatim criterion names and the paraphrase of criterion names based on our annotation. Table 2 shows the verbatim criterion names, as mentioned in the papers by the authors. We found that although most papers mention the quality criterion used for human evaluation a small subset does not. These findings are on par with Howcroft et al. (2020) , demonstrating that this is a common issue for NLG. We also found that only a subset of papers define the quality criteria used. The most cited criterion is fluency, followed by coherence.",
"cite_spans": [
{
"start": 411,
"end": 433,
"text": "Howcroft et al. (2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Quality criteria",
"sec_num": "4.2"
},
{
"text": "We further examined how often the normalised criteria occurred in the annotations as shown in Table 3 . Most commonly, the evaluations considered a specific text property. The type of properties that evaluations considered are the following: complexity/simplicity (mentioned twice), creativity, novelty, sarcasticness, diversity and humour.",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 101,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Quality criteria",
"sec_num": "4.2"
},
{
"text": "Although there is a lot of variability within one category, it actually shows that commonsense is generally a vague term and it can be interpreted in a plethora of ways and hence it is evaluated differently. Using a text property as an evaluation metric is an interesting finding. In broad human NLG evaluations, this criterion is not very prevalentin fact, it is one of the rarest criteria. However, other criteria such as fluency, goodness of outputs, grammaticality and correctness are equally found in both commonsense-enhanced NLG systems and broad NLG systems (as reported by Howcroft et al. (2020)) . Surprisingly, commonsense, commonsense reasoning and commonsense plausibility have only been named 4 times as criteria in the 34 annotated papers. We would expect to come across criteria names related to commonsense or reasoning more often, as we only examined papers reporting commonsense and reasoning NLG tasks. In Section 4.5, we discuss why this might be the case. Table 4 presents the most frequent forms used for response elicitation. Relative quality estimation was the most frequent form of response elicitation (21 times), followed by direct quality estimation (14 times). Unforeseen, as a reason for not providing enough details of how the evaluation was implemented, in the third place we have the value \"unclear\" (7 times). The most frequent values for the type of rating scale were numerical rating scale (12 times), rank-ordering (8 times), followed by the Likert scale (7 times).",
"cite_spans": [
{
"start": 582,
"end": 605,
"text": "Howcroft et al. (2020))",
"ref_id": null
}
],
"ref_spans": [
{
"start": 978,
"end": 985,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Quality criteria",
"sec_num": "4.2"
},
{
"text": "In addition, nearly half of the investigated papers did not provide a verbatim question/prompt (30 out of 56 evaluation entries). This can be problematic for reproducibility, as results obtained with a different question cannot be directly compared to the original results if the same question hasn't been asked. In addition, this can also hinder the comparability of future work, since, for the same reason, results obtained on new systems cannot be meaningfully compared to previous work. Similar to Howcroft et al. (2020) , we also found two cases where fluency and grammaticality were both mentioned in a question put to evaluators. van der Lee et al. (2021) discuss how this can lead to mixed results as evaluators may put more emphasis on one criterion over the other. ",
"cite_spans": [
{
"start": 502,
"end": 524,
"text": "Howcroft et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operationalisation",
"sec_num": "4.3"
},
{
"text": "The commonsense category includes the criteria defined in Section 3 namely, (1) definition of commonsense;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense criteria",
"sec_num": "4.4"
},
{
"text": "(2) type of commonsense;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense criteria",
"sec_num": "4.4"
},
{
"text": "(3) external knowledge; (4) whether the external knowledge was evaluated; and (5) the criterion name of the external knowledge evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense criteria",
"sec_num": "4.4"
},
{
"text": "Definition of Commonsense Unexpectedly, out of the 70 evaluations, only 4 provide a written definition of commonsense with the majority providing no definition whatsoever. Table 5 presents the verbatim definitions from these papers.",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Commonsense criteria",
"sec_num": "4.4"
},
{
"text": "\"Commonsense reasoning, the ability to make acceptable and logical assumptions about ordinary scenes in our daily life\" (Lin et al., 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEFINITIONS",
"sec_num": null
},
{
"text": "\"Machine common sense, or the knowledge of and ability to reason about an open ended world\" (Talmor et al., 2019) .",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "(Talmor et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DEFINITIONS",
"sec_num": null
},
{
"text": "\"commonsense evidence is intuitive to humans, the agent's ability to select the right kind of commonsense evidence will allow the human and the agent to come to a common understanding of actions and their justifications, in other words, common ground\" (Yang et al., 2018) .",
"cite_spans": [
{
"start": 252,
"end": 271,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DEFINITIONS",
"sec_num": null
},
{
"text": "\"counterfactual reasoning: the ability to predict causal changes in future events given a counterfactual condition applied to the original chain of events\" (Qin et al., 2020) . Type of commonsense Almost half of the papers did not contain a definition of commonsense neither mentioned the type of commonsense that their task was addressing (n = 16). The second most prevalent type of commonsense was reasoningeight paper reported that the focus of the task is to perform some form of reasoning (n = 8). Other types of reported commonsense included temporal and spatial commonsense reasoning, social com-monsense, and underlying commonsense abilities such as sarcasm and humour.",
"cite_spans": [
{
"start": 156,
"end": 174,
"text": "(Qin et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DEFINITIONS",
"sec_num": null
},
{
"text": "External knowledge External knowledge bases are usually incorporated into NLG systems in order to provide commonsense capabilities. As shown in Figure 2 , the most used common knowledge base is ConceptNet (13 times), own developed KB most often in the form of triples that describe the connection between entities) (14 times), followed by ATOMIC (5 times), COMET (once) and Cosmos (once). Although pre-trained language models have been shown to encode commonsense knowledge in some situations, we did not consider them here as external knowledge. The most used pre-trained model though is GPT-2. Was the external knowledge evaluated? External knowledge was evaluated less than half of the time (14 out of 34). An assumption for this is that authors might consider external knowledge bases such as ConceptNet and ATOMIC accurate and they do not normally evaluate them in their domains. Bauer et al. (2018) argue that even when using a large pre-trained dataset, it might be hard for a model to not only find but also look at the correct relationships between concepts and apply them in reasoning tasks. They further conducted a human evaluation where they report how many cases their system would require external knowledge and in what percentage of these cases, their system selected the relevant/correct commonsense knowledge. From their results, it can be inferred that in a small set of cases, some errors in the generated text can be a result of the underlying erroneously inferred commonsense relationships. Wang et al. (2020a) also report a human evaluation of their commonsense knowledge in terms of validity and relevance, where they also show that the extracted commonsense relationships might contain errors (or be irrelevant). As such, it is clear that there should be a distinction between errors resulting from the text generation models or the external knowledge bases (note that here we have used the term external knowledge bases to refer to any form of external knowledge, including graphs).",
"cite_spans": [
{
"start": 885,
"end": 904,
"text": "Bauer et al. (2018)",
"ref_id": "BIBREF2"
},
{
"start": 1513,
"end": 1532,
"text": "Wang et al. (2020a)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "DEFINITIONS",
"sec_num": null
},
{
"text": "Criterion name of external knowledge evaluation External knowledge has been evaluated in a number of ways (the following is not an exhaustive but an indicative list): Bosselut et al. (2019) evaluate whether their model can adequately produce a triple of a subject, object and their relationship in terms of plausibility; Wang et al. (2020a) evaluate commonsense knowledge in terms of validity (\"How valid are the paths?\") and relevance (\"How relevant are the paths to the question?\"); Bauer et al. (2018) evaluated the commonsense relationships between concepts. In other evaluation settings, evaluators are given the top related underlying concepts and are instructed to pick the ones that describe or explain the text better (e.g. (Sydorova et al., 2019) ).",
"cite_spans": [
{
"start": 167,
"end": 189,
"text": "Bosselut et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 485,
"end": 504,
"text": "Bauer et al. (2018)",
"ref_id": "BIBREF2"
},
{
"start": 733,
"end": 756,
"text": "(Sydorova et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DEFINITIONS",
"sec_num": null
},
{
"text": "From the evidence we gathered through our annotations, there are several key observations. Firstly, only a subset of authors actually provide definitions of the quality criteria used for human evaluations. As Howcroft et al. (2020) found in their survey, there can be a significant mismatch between what authors specify as the quality criterion name and definition provided. Therefore, there is a need for definitions to be included in papers to give readers an unambiguous understanding of the quality criterion being evaluated. Secondly, there is a need to provide complete and accurate information for reproducing the human evaluation. Our analysis has shown that nearly half of the papers did not provide the prompt with the verbatim question/prompt given to the human participants. Thirdly, and finally, our analysis has shown that very few papers investigate the correctness or plausibility of commonsense reasoning in their evaluations with humans. This analysis has shown the need for better reporting of human evaluations. The low levels of inter-annotating agreement for annotating some of the attributes might be a strong indication of the challenges of how hard it is to locate information about evaluations in a given paper.",
"cite_spans": [
{
"start": 209,
"end": 231,
"text": "Howcroft et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "Given our experiences, we believe that researcher working on commonsense-enhanced NLG systems should go beyond evaluating their systems using standard NLG quality criteria such as naturalness, grammaticality etc. In addition, researchers should further:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "\u2022 evaluate the generated text of a commonsenseenhanced NLG system in terms of commonsense or reasoning capabilities in order to verify that the system actually displays commonsense capabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "\u2022 make an effort to investigate the correctness or plausibility of the commonsense knowledge/reasoning implemented with human assessors. As discussed in Section 4.4, not always the external knowledge is useful and it might even contain erroneous information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "Our analysis has motivated the creation of the Commonsense Evaluation Card which serves two roles. It firstly aims to motivate researchers to evaluate their systems in terms of common sense (i.e. are they fit for purpose?) and secondly, it aims to promote better practices and evaluation standardisation by introducing reporting recommendations (i.e. how was the evaluation done?).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "The Commonsense Evaluation Card (CEC) ( Table 6 ) aims to standardise human evaluation and reporting of commonsense-enhanced NLG systems, enabling researchers to compare models not only in terms of classic NLG quality criteria, but also by focusing on the core capabilities of such models. CEC has been inspired by recent work on model reporting (Mitchell et al., 2019) , datasheets for datasets (Gebru et al., 2018) and The Human Evaluation Datasheet 1.0 (Shimorina and Belz, 2021) . It is not designed to replace these, but rather complement them.",
"cite_spans": [
{
"start": 347,
"end": 370,
"text": "(Mitchell et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 397,
"end": 417,
"text": "(Gebru et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 457,
"end": 483,
"text": "(Shimorina and Belz, 2021)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "The Commonsense Evaluation Card",
"sec_num": "5"
},
{
"text": "CEC includes three main sections: (1) definition of common sense in the context of the reported work and the type of commonsense knowledge; (2) evaluation of the validity of external commonsense knowledge; and (3) evaluation of commonsense knowledge in a generated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Commonsense Evaluation Card",
"sec_num": "5"
},
{
"text": "Commonsense Knowledge Definition: Basic definition of commonsense knowledge in the reported work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense Evaluation Card (CEC)",
"sec_num": null
},
{
"text": "-Definition -Type of commonsense -Example output of generated text that displays the intended commonsense capabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense Evaluation Card (CEC)",
"sec_num": null
},
{
"text": "External Knowledge: Basic information regarding the use of external knowledge and its evaluation -Structured Knowledge -Pre-trained Language Models -Other -Metrics for Evaluation of External Knowledge",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense Evaluation Card (CEC)",
"sec_num": null
},
{
"text": "Commonsense Knowledge in Generated Text: Evaluation Settings -Automatic Metrics for Evaluation of commonsense knowledge in generated text -Human Evaluation of commonsense knowledge in generated text Next, we describe each of these sections in more details with guidelines on how to complete the evaluation card.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense Evaluation Card (CEC)",
"sec_num": null
},
{
"text": "This section should answer basic questions regarding the presented work as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Common Sense",
"sec_num": "5.1"
},
{
"text": "How do you define commonsense knowledge in the context of this work? Here, researchers should provide a definition of commonsense knowledge that is relevant to their reported work. Our analysis showed that common sense is hard to define since its definition is highly dependent on the context. Providing a definition of common sense will help researchers better understand the setting in which work was evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Common Sense",
"sec_num": "5.1"
},
{
"text": "What type of commonsense knowledge do you address? For standardisation reasons, choose one of the following high-level categories: (1) Commonsense knowledge of entities in the environment including their properties and the relationship between entities; (2) Entities interactions and procedural knowledge; (3) Figurative language such as irony, humour, sarcasm, emotion etc; (4) Causal relationships, e.g. X will cause Y; (5) General knowledge such as facts, e.g. the water boils at 100C; (6) Reasoning; or (7) Other, not covered by any of the categories above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Common Sense",
"sec_num": "5.1"
},
{
"text": "Example output of generated text that displays the intended commonsense capabilities: An example of the expected output with an explanation on why this constitutes commonsense knowledge, for instance, the information in the output is not represented in the input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Common Sense",
"sec_num": "5.1"
},
{
"text": "There are cases where commonsense might refer to more than one of the types mentioned above. The authors can specify more than one types of commonsense or create separate evaluation cards if it is more appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Common Sense",
"sec_num": "5.1"
},
{
"text": "This section should provide information regarding external commonsense knowledge bases and their evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "External Commonsense Knowledge",
"sec_num": "5.2"
},
{
"text": "Structured Knowledge: Does the proposed work make any use of an external structured knowledge base such as ConceptNet? If yes, provide details on how to access the knowledge base and its version if public, or alternatively. If the external knowledge base is subjected to privacy concerns or is private, then provide a detailed description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "External Commonsense Knowledge",
"sec_num": "5.2"
},
{
"text": "Pre-trained language models: Does the proposed work make use of any pre-trained language models? If yes, provide a detailed description, such as the version used, the API, hyperparameters etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "External Commonsense Knowledge",
"sec_num": "5.2"
},
{
"text": "Other: Was commonsense knowledge represented in any other way? How? If none of the above is applicable, explain how the system displays commonsense knowledge. For instance, knowledge might be encoded as rules or it might be inferred from the input training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "External Commonsense Knowledge",
"sec_num": "5.2"
},
{
"text": ": Was the external knowledge evaluated? Describe whether the external knowledge was evaluated and in what way. Essentially this section should answer whether the external knowledge was fit for purpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics for Evaluation of External Knowledge",
"sec_num": null
},
{
"text": "Automatic Metrics for Evaluation of commonsense knowledge in generated text: Provide the metrics and the evaluation details such as the samples used for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense knowledge in generated text",
"sec_num": "5.3"
},
{
"text": "Human Evaluation of commonsense knowledge in generated text: Does your human evaluation include any metrics specifically related to commonsense knowledge? Provide their definition and include the evaluation details, including a detailed description of the experimental setup, the definition of the metric(s) and the questions asked to participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense knowledge in generated text",
"sec_num": "5.3"
},
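A minimal sketch (hypothetical field names; the CEC itself is a reporting checklist, not released code) of how the three CEC sections could be captured as a structured record alongside a paper:

```python
# Hypothetical structured record mirroring the three sections of the CEC.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CommonsenseEvaluationCard:
    # Section 1: definition of common sense in the reported work
    definition: str
    commonsense_type: str                    # e.g. "causal relationships", "reasoning"
    example_output: str
    # Section 2: external commonsense knowledge and its evaluation
    structured_knowledge: Optional[str] = None       # e.g. "ConceptNet"
    pretrained_language_models: List[str] = field(default_factory=list)
    other_knowledge: Optional[str] = None
    external_knowledge_metrics: List[str] = field(default_factory=list)
    # Section 3: evaluation of commonsense knowledge in the generated text
    automatic_metrics: List[str] = field(default_factory=list)
    human_evaluation: Optional[str] = None

card = CommonsenseEvaluationCard(
    definition="the ability to make acceptable and logical assumptions about ordinary scenes",
    commonsense_type="reasoning",
    example_output="input: 'dog, frisbee, catch' -> output: 'The dog catches the frisbee.'",
    structured_knowledge="ConceptNet",
    external_knowledge_metrics=["plausibility", "relevance"],
    human_evaluation="raters judge whether each generated sentence is plausible",
)
print(card)
```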
{
"text": "This paper presented a human evaluation analysis on works describing systems that incorporate commonsense knowledge or other external knowledge bases with the aim to enhance the reasoning abilities of NLG systems. We have utilised an annotation scheme that has been verified in previous work and we have enhanced it with five additional criteria relevant for commonsense-enhanced NLG systems and we have reported our analysis of the annotations. Our analysis showed that there is a large variability on how such systems are evaluated, the type of evaluation criteria that are selected and we questioned whether standard NLG criteria are fit for purpose when evaluating reasoning abilities. We have therefore recommended that researchers should evaluate the reasoning ability of their systems (in addition to standard NLG metrics). We did not specify how these evaluations should be performed as this can vary depending on the task. We recommend nevertheless, that authors provide their definition(s) of commonsense knowledge to their evaluators. Additionally, we recommend that researchers validate their external knowledge bases to ensure that any errors present in generated output are not derived from the underlying knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Finally, as this field grows in the future and attracts further attention, it would be useful to document commonsense knowledge errors in a more structured way, as for instance in . ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "https://github.com/nlgknowledge/ commonsense",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their insightful feedback. Gkatzia's contribution was supported under the EPSRC projects CiViL (EP/T014598/1) and Natural Language Generation for Low-resource Domains (EP/T024917/1). Clinciu's contribution is supported by the EPSRC Centre for Doctoral Training in Robotics and Autonomous Systems at Heriot-Watt University and the University of Edinburgh. Clinciu's PhD is funded by Schlumberger Cambridge Research Limited (EP/L016834/1, 2018-2021).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluation methodologies in automatic question generation",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Amidei",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Piwek",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Willis",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "307--317",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6537"
]
},
"num": null,
"urls": [],
"raw_text": "Jacopo Amidei, Paul Piwek, and Alistair Willis. 2018. Evaluation methodologies in automatic question generation 2013-2018. In Proceedings of the 11th International Conference on Natural Language Gen- eration, pages 307-317, Tilburg University, The Netherlands. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Inter-Coder Agreement for Computational Linguistics",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "555--596",
"other_ids": {
"DOI": [
"10.1162/coli.07-034-R2"
]
},
"num": null,
"urls": [],
"raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-Coder Agreement for Computational Linguistics. Compu- tational Linguistics, 34(4):555-596.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Commonsense for generative multi-hop question answering tasks",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Yicheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4220--4230",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1454"
]
},
"num": null,
"urls": [],
"raw_text": "Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question an- swering tasks. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing, pages 4220-4230, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "COMET: Commonsense transformers for automatic knowledge graph construction",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4762--4779",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1470"
]
},
"num": null,
"urls": [],
"raw_text": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chai- tanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for au- tomatic knowledge graph construction. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762-4779, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bidirectional attentive memory networks for question answering over knowledge bases",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [
"J"
],
"last": "Zaki",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2913--2923",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1299"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Chen, Lingfei Wu, and Mohammed J. Zaki. 2019. Bidirectional attentive memory networks for ques- tion answering over knowledge bases. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2913-2923, Min- neapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Commonsense reasoning and commonsense knowledge in artificial intelligence",
"authors": [
{
"first": "Ernest",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 2015,
"venue": "Commun. ACM",
"volume": "58",
"issue": "9",
"pages": "92--103",
"other_ids": {
"DOI": [
"10.1145/2701413"
]
},
"num": null,
"urls": [],
"raw_text": "Ernest Davis and Gary Marcus. 2015. Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM, 58(9):92-103.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Datasheets for datasets",
"authors": [
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Morgenstern",
"suffix": ""
},
{
"first": "Briana",
"middle": [],
"last": "Vecchione",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"Wortman"
],
"last": "Vaughan",
"suffix": ""
},
{
"first": "Hanna",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Kate",
"middle": [],
"last": "Crawford",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timnit Gebru, Jamie Morgenstern, Briana Vec- chione, Jennifer Wortman Vaughan, Hanna M. Wal- lach, Hal Daum\u00e9 III, and Kate Crawford. 2018. Datasheets for datasets. CoRR, abs/1803.09010.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A snapshot of NLG evaluation practices 2005 -2014",
"authors": [
{
"first": "Dimitra",
"middle": [],
"last": "Gkatzia",
"suffix": ""
},
{
"first": "Saad",
"middle": [],
"last": "Mahamood",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 15th European Workshop on Natural Language Generation (ENLG)",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {
"DOI": [
"10.18653/v1/W15-4708"
]
},
"num": null,
"urls": [],
"raw_text": "Dimitra Gkatzia and Saad Mahamood. 2015. A snap- shot of NLG evaluation practices 2005 -2014. In Proceedings of the 15th European Workshop on Nat- ural Language Generation (ENLG), pages 57-60, Brighton, UK. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Reporting bias and knowledge acquisition",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Gordon",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC '13",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {
"DOI": [
"10.1145/2509558.2509563"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Gordon and Benjamin Van Durme. 2013. Re- porting bias and knowledge acquisition. In Proceed- ings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC '13, page 25-30, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhihao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "93--108",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00302"
]
},
"num": null,
"urls": [],
"raw_text": "Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A Knowledge-Enhanced Pre- training Model for Commonsense Story Generation. Transactions of the Association for Computational Linguistics, 8:93-108.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Howcroft",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Miruna-Adriana",
"middle": [],
"last": "Clinciu",
"suffix": ""
},
{
"first": "Dimitra",
"middle": [],
"last": "Gkatzia",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Saad",
"middle": [],
"last": "Mahamood",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "van Miltenburg",
"suffix": ""
},
{
"first": "Sashank",
"middle": [],
"last": "Santhanam",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 13th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "169--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Howcroft, Anya Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised def- initions. In Proceedings of the 13th International Conference on Natural Language Generation, pages 169-182, Dublin, Ireland. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Cosmos QA: Machine reading comprehension with contextual commonsense reasoning",
"authors": [
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ronan",
"middle": [
"Le"
],
"last": "Bras",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2391--2401",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1243"
]
},
"num": null,
"urls": [],
"raw_text": "Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense rea- soning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 2391-2401, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Knowledge structures for natural language generation",
"authors": [
{
"first": "Paul",
"middle": [
"S"
],
"last": "Jacobs",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the 11th Coference on Computational Linguistics, COLING '86",
"volume": "",
"issue": "",
"pages": "554--559",
"other_ids": {
"DOI": [
"10.3115/991365.991527"
]
},
"num": null,
"urls": [],
"raw_text": "Paul S. Jacobs. 1986. Knowledge structures for natu- ral language generation. In Proceedings of the 11th Coference on Computational Linguistics, COLING '86, page 554-559, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Language generation with multi-hop reasoning on commonsense knowledge graph",
"authors": [
{
"first": "Haozhe",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Pei",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Shaohan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "725--736",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.54"
]
},
"num": null,
"urls": [],
"raw_text": "Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Xiaoyan Zhu, and Minlie Huang. 2020. Language generation with multi-hop reasoning on commonsense knowl- edge graph. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 725-736, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Best practices for the human evaluation of automatically generated text",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "van Miltenburg",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Wubben",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "355--368",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8643"
]
},
"num": null,
"urls": [],
"raw_text": "Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, pages 355-368, Tokyo, Japan. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "KagNet: Knowledge-aware graph networks for commonsense reasoning",
"authors": [
{
"first": "Bill Yuchen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xinyue",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jamin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2829--2839",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1282"
]
},
"num": null,
"urls": [],
"raw_text": "Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xi- ang Ren. 2019. KagNet: Knowledge-aware graph networks for commonsense reasoning. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2829-2839, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "CommonGen: A constrained text generation challenge for generative commonsense reasoning",
"authors": [
{
"first": "Bill Yuchen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Wangchunshu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Pei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1823--1840",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.165"
]
},
"num": null,
"urls": [],
"raw_text": "Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2020. CommonGen: A constrained text gen- eration challenge for generative commonsense rea- soning. In Findings of the Association for Computa- tional Linguistics: EMNLP 2020, pages 1823-1840, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Haotang Deng, and Ping Wang. 2020. K-BERT: Enabling language representation with knowledge graph",
"authors": [
{
"first": "Weijie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiruo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ju",
"suffix": ""
},
{
"first": "Haotang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of AAAI 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-BERT: Enabling language representation with knowledge graph. In Proceedings of AAAI 2020.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Programs with common sense",
"authors": [
{
"first": "John",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 1959,
"venue": "Proceedings of the Teddington Conference on the Mechanization of Thought Processes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John McCarthy. 1959. Programs with common sense. In Proceedings of the Teddington Conference on the Mechanization of Thought Processes.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Logical versus analogical or symbolic versus connection or neat versus scruffy",
"authors": [
{
"first": "Marvin",
"middle": [],
"last": "Minsky",
"suffix": ""
}
],
"year": 1991,
"venue": "AI Magazine",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marvin Minsky. 1991. Logical versus analogical or symbolic versus connection or neat versus scruffy. AI Magazine, 12(2).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Model cards for model reporting",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zaldivar",
"suffix": ""
},
{
"first": "Parker",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Hutchinson",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Spitzer",
"suffix": ""
},
{
"first": "Inioluwa Deborah",
"middle": [],
"last": "Raji",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19",
"volume": "",
"issue": "",
"pages": "220--229",
"other_ids": {
"DOI": [
"10.1145/3287560.3287596"
]
},
"num": null,
"urls": [],
"raw_text": "Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Account- ability, and Transparency, FAT* '19, page 220-229, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Preferred reporting items for systematic reviews and meta-analyses: the prisma statement",
"authors": [
{
"first": "David",
"middle": [],
"last": "Moher",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Liberati",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Tetzlaff",
"suffix": ""
},
{
"first": "Douglas G",
"middle": [],
"last": "Altman",
"suffix": ""
}
],
"year": 2009,
"venue": "BMJ",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1136/bmj.b2535"
]
},
"num": null,
"urls": [],
"raw_text": "David Moher, Alessandro Liberati, Jennifer Tetzlaff, and Douglas G Altman. 2009. Preferred reporting items for systematic reviews and meta-analyses: the prisma statement. BMJ, 339.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Bias in word embeddings",
"authors": [
{
"first": "Orestis",
"middle": [],
"last": "Papakyriakopoulos",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Hegelich",
"suffix": ""
},
{
"first": "Juan Carlos Medina",
"middle": [],
"last": "Serrano",
"suffix": ""
},
{
"first": "Fabienne",
"middle": [],
"last": "Marco",
"suffix": ""
}
],
"year": 2020,
"venue": "FAT* 2020 -Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3351095.3372843"
]
},
"num": null,
"urls": [],
"raw_text": "Orestis Papakyriakopoulos, Simon Hegelich, Juan Car- los Medina Serrano, and Fabienne Marco. 2020. Bias in word embeddings. In FAT* 2020 -Proceed- ings of the 2020 Conference on Fairness, Account- ability, and Transparency.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2463--2473",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1250"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "BabyTalk: A Core Architecture to Summarise ICU Data as Tailored Text",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Portet",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Jim",
"middle": [],
"last": "Hunter",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Somayajulu",
"middle": [],
"last": "Sripada",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2008,
"venue": "21st International Congress of the European Federation for Medical Informatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Portet, Albert Gatt, Jim Hunter, Ehud Re- iter, Somayajulu Sripada, and Feng Gao. 2008. BabyTalk: A Core Architecture to Summarise ICU Data as Tailored Text. In 21st International Congress of the European Federation for Medical In- formatics (MIE 2008), page 1, G\u00f6teborg, Sweden.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Counterfactual story reasoning and generation",
"authors": [
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "EMNLP-IJCNLP 2019 -2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/d19-1509"
]
},
"num": null,
"urls": [],
"raw_text": "Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2020. Counterfactual story reasoning and generation. In EMNLP-IJCNLP 2019 -2019 Conference on Empir- ical Methods in Natural Language Processing and 9th International Joint Conference on Natural Lan- guage Processing, Proceedings of the Conference.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Pre-trained models for natural language processing: A survey",
"authors": [
{
"first": "Xi",
"middle": [],
"last": "Peng Qiu",
"suffix": ""
},
{
"first": "Tian Xiang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yi",
"middle": [
"Ge"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Yun",
"middle": [
"Fan"
],
"last": "Shao",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Xuan",
"middle": [
"Jing"
],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/s11431-020-1647-3"
]
},
"num": null,
"urls": [],
"raw_text": "Xi Peng Qiu, Tian Xiang Sun, Yi Ge Xu, Yun Fan Shao, Ning Dai, and Xuan Jing Huang. 2020. Pre-trained models for natural language processing: A survey.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Explain Yourself! Leveraging language models for commonsense reasoning",
"authors": [
{
"first": "Nazneen Fatema",
"middle": [],
"last": "Rajani",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "McCann",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL 2019 -57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1487"
]
},
"num": null,
"urls": [],
"raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Explain Yourself! Leveraging language models for commonsense rea- soning. In ACL 2019 -57th Annual Meeting of the Association for Computational Linguistics, Proceed- ings of the Conference.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A structured review of the validity of bleu",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "3",
"pages": "393--401",
"other_ids": {
"DOI": [
"10.1162/coli_a_00322"
]
},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter. 2018. A structured review of the validity of bleu. Computational Linguistics, 44(3):393-401.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Building applied natural language generation systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 1997,
"venue": "Natural Language Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/S1351324997001502"
]
},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Lan- guage Engineering.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Acquiring correct knowledge for natural language generation",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Somayajulu",
"middle": [
"G"
],
"last": "Sripada",
"suffix": ""
},
{
"first": "Roma",
"middle": [],
"last": "Robertson",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Artif. Int. Res",
"volume": "18",
"issue": "1",
"pages": "491--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter, Somayajulu G. Sripada, and Roma Robertson. 2003. Acquiring correct knowledge for natural language generation. J. Artif. Int. Res., 18(1):491-516.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "ATOMIC: An atlas of machine commonsense for ifthen reasoning",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Ronan",
"middle": [
"Le"
],
"last": "Bras",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Allaway",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Roof",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence",
"volume": "2019",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33013027"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Ronan Le Bras, Emily Allaway, Chan- dra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if- then reasoning. In 33rd AAAI Conference on Arti- ficial Intelligence, AAAI 2019, 31st Innovative Ap- plications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Predictive biases in natural language processing models: A conceptual framework and overview",
"authors": [
{
"first": "Deven",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "H",
"middle": [
"Andrew"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.468"
]
},
"num": null,
"urls": [],
"raw_text": "Deven Shah, H. Andrew Schwartz, and Dirk Hovy. 2019. Predictive biases in natural language process- ing models: A conceptual framework and overview.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The human evaluation datasheet 1.0: A template for recording details of human evaluation experiments in nlp",
"authors": [
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anastasia Shimorina and Anya Belz. 2021. The human evaluation datasheet 1.0: A template for recording details of human evaluation experiments in nlp.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2016. Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. CoRR, abs/1612.03975.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Representing general relational knowledge in concept net 5",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer and Catherine Havasi. 2012. Represent- ing general relational knowledge in concept net 5. In Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Interpretable question answering on knowledge bases and text",
"authors": [
{
"first": "Alona",
"middle": [],
"last": "Sydorova",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Poerner",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4943--4951",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1488"
]
},
"num": null,
"urls": [],
"raw_text": "Alona Sydorova, Nina Poerner, and Benjamin Roth. 2019. Interpretable question answering on knowl- edge bases and text. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4943-4951, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4149--4158",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1421"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A ques- tion answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Commonsense Knowledge in Machine Intelligence",
"authors": [
{
"first": "Niket",
"middle": [],
"last": "Tandon",
"suffix": ""
},
{
"first": "Aparna",
"middle": [
"S"
],
"last": "Varde",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "De Melo",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM SIGMOD Record",
"volume": "46",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3186549.3186562"
]
},
"num": null,
"urls": [],
"raw_text": "Niket Tandon, Aparna S. Varde, and Gerard de Melo. 2018. Commonsense Knowledge in Machine Intel- ligence. ACM SIGMOD Record, 46(4).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Human evaluation of automatically generated text: Current trends and best practice guidelines",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "van der Lee",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "van Miltenburg",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2021,
"venue": "Computer Speech & Language",
"volume": "67",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.csl.2020.101151"
]
},
"num": null,
"urls": [],
"raw_text": "Chris van der Lee, Albert Gatt, Emiel van Miltenburg, and Emiel Krahmer. 2021. Human evaluation of au- tomatically generated text: Current trends and best practice guidelines. Computer Speech & Language, 67:101151.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A Neural Conversational Model Oriol Vinyals. ICML Deep Learning Workshop",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals and Quoc V. Le. 2015. A Neural Conver- sational Model Oriol Vinyals. ICML Deep Learning Workshop, 37.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Connecting the dots: A knowledgeable path generator for commonsense question answering",
"authors": [
{
"first": "Peifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ilievski",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Szekely",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4129--4140",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.369"
]
},
"num": null,
"urls": [],
"raw_text": "Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro Szekely, and Xiang Ren. 2020a. Connecting the dots: A knowledgeable path generator for common- sense question answering. In Findings of the Associ- ation for Computational Linguistics: EMNLP 2020, pages 4129-4140, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Daxin Jiang, and Ming Zhou. 2020b. K-ADAPTER: Infusing knowledge",
"authors": [
{
"first": "Ruize",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Zhongyu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jianshu",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Guihong",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xu- anjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2020b. K-ADAPTER: Infusing knowledge into pre-trained models with adapters.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Commonsense justification for action explanation",
"authors": [
{
"first": "Shaohua",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Qiaozi",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Sari",
"middle": [],
"last": "Sadiya",
"suffix": ""
},
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2627--2637",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1283"
]
},
"num": null,
"urls": [],
"raw_text": "Shaohua Yang, Qiaozi Gao, Sari Sadiya, and Joyce Chai. 2018. Commonsense justification for action explanation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 2627-2637, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Grounded conversation generation as guided traverses in commonsense knowledge graphs",
"authors": [
{
"first": "Houyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2031--2043",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.184"
]
},
"num": null,
"urls": [],
"raw_text": "Houyu Zhang, Zhenghao Liu, Chenyan Xiong, and Zhiyuan Liu. 2020. Grounded conversation genera- tion as guided traverses in commonsense knowledge graphs. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 2031-2043, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Commonsense knowledge aware conversation generation with graph attention",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jingfang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "IJCAI International Joint Conference on Artificial Intelligence",
"volume": "2018",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.24963/ijcai.2018/643"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Com- monsense knowledge aware conversation generation with graph attention. In IJCAI International Joint Conference on Artificial Intelligence, volume 2018- July.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Distribution of publication venues across the commonsense paper dataset.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Frequency graph of external knowledge mentions in the commonsense dataset.",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": ").",
"content": "<table><tr><td>ATTRIBUTES</td><td>IAA Test</td></tr><tr><td>System Input</td><td>0.70</td></tr><tr><td>External Knowledge</td><td>0.15</td></tr><tr><td>System Output</td><td>1.00</td></tr><tr><td>System task</td><td>0.37</td></tr><tr><td colspan=\"2\">Knowledge Evaluation 0.18</td></tr><tr><td>Paraphrase</td><td>0.39</td></tr><tr><td>Elicit form</td><td>0.05</td></tr><tr><td>Data type</td><td>0.25</td></tr><tr><td>Instrument type</td><td>0.07</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF1": {
"text": "Krippendorff's alpha using Jaccard distance for closed class attributes.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF2": {
"text": "",
"content": "<table><tr><td>NORMALISED CRITERION NAME</td><td>Count</td></tr><tr><td>text property</td><td>7</td></tr><tr><td>fluency</td><td>4</td></tr><tr><td>goodness of outputs relative to input</td><td>4</td></tr><tr><td>goodness of outputs relative to input (content)</td><td>4</td></tr><tr><td>coherence</td><td>4</td></tr><tr><td>information content of outputs</td><td>4</td></tr><tr><td>grammaticality</td><td>3</td></tr><tr><td>correctness of outputs in their own right</td><td>2</td></tr><tr><td>correctness of outputs relative to input (both</td><td>2</td></tr><tr><td>form and content)</td><td/></tr><tr><td>correctness of outputs relative to input (content)</td><td>2</td></tr><tr><td>naturalness (form)</td><td>2</td></tr><tr><td>appropriateness (content)</td><td>2</td></tr><tr><td>Goodness of outputs in their own right</td><td>1</td></tr><tr><td>Appropriateness</td><td>1</td></tr><tr><td>Appropriateness (both form and content)</td><td>1</td></tr><tr><td>Quality of outputs</td><td>1</td></tr><tr><td>Correctness of outputs relative to external frame</td><td>1</td></tr><tr><td>of reference (content)</td><td/></tr><tr><td>Goodness of outputs in their own right (both</td><td>1</td></tr><tr><td>form and content)</td><td/></tr><tr><td>Correctness of outputs relative to input</td><td>1</td></tr><tr><td>35a. Naturalness (both form and content)</td><td>1</td></tr><tr><td>Goodness of outputs relative to system use</td><td>1</td></tr><tr><td>Multiple (list all)</td><td>1</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF4": {
"text": "Counts of values selected for form of response elicitation.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "Definitions of Commonsense extracted from literature.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF6": {
"text": "Summary of the commonsense evaluation card (CEC).",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF8": {
"text": "Publication venues for commonsense papers.",
"content": "<table><tr><td>B System Input</td><td/></tr><tr><td>INPUT TYPE</td><td>Total</td></tr><tr><td>text:sentence</td><td>9</td></tr><tr><td>text:multiple sentences</td><td>6</td></tr><tr><td>raw/structured data</td><td>6</td></tr><tr><td>text: subsentential units of text</td><td>3</td></tr><tr><td>visual</td><td>2</td></tr><tr><td>Others (8 Input Types)</td><td>8</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF9": {
"text": "Types of system inputs for commonsense papers.",
"content": "<table><tr><td>C System Output</td><td/></tr><tr><td>OUTPUT TYPE</td><td>Total</td></tr><tr><td>text:sentence</td><td>17</td></tr><tr><td>text: subsentential units of text</td><td>4</td></tr><tr><td>text:multiple sentences</td><td>3</td></tr><tr><td>raw/structured data</td><td>2</td></tr><tr><td>text: variable-length</td><td>2</td></tr><tr><td>Others (6 Output Types)</td><td>6</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF10": {
"text": "Types of system outputs for commonsense papers.",
"content": "<table><tr><td>D System Task</td><td/></tr><tr><td>TASK TYPE</td><td>Total</td></tr><tr><td>Question Answering</td><td>12</td></tr><tr><td>Dialogue Turn Generation</td><td>7</td></tr><tr><td>End-to-End Generation</td><td>3</td></tr><tr><td>Other: Story Ending Generation</td><td>2</td></tr><tr><td>Content Selection/Determination</td><td>2</td></tr><tr><td>Feature-Controlled Generation</td><td>2</td></tr><tr><td>Others (6 Task Types)</td><td>6</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF11": {
"text": "Types of system tasks for commonsense papers.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}