{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:35:14.328108Z" }, "title": "Materialized Knowledge Bases from Commonsense Transformers", "authors": [ { "first": "Tuan-Phong", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": {}, "email": "tuanphong@mpi-inf.mpg.de" }, { "first": "Simon", "middle": [], "last": "Razniewski", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Starting from the COMET methodology by Bosselut et al. (2019), generating commonsense knowledge from commonsense transformers has recently received significant attention. Surprisingly, up to now, no materialized resource of commonsense knowledge generated this way is publicly available. This paper fills this gap, and uses the materialized resources to perform a detailed analysis of the potential of this approach in terms of precision and recall. Furthermore, we identify common problem cases, and outline use cases enabled by materialized resources. We posit that the availability of these resources is important for the advancement of the field, as it enables an off-the-shelf use of the resulting knowledge, as well as further analyses of its strengths and weaknesses.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Starting from the COMET methodology by Bosselut et al. (2019), generating commonsense knowledge from commonsense transformers has recently received significant attention. Surprisingly, up to now, no materialized resource of commonsense knowledge generated this way is publicly available. This paper fills this gap, and uses the materialized resources to perform a detailed analysis of the potential of this approach in terms of precision and recall. Furthermore, we identify common problem cases, and outline use cases enabled by materialized resources. 
We posit that the availability of these resources is important for the advancement of the field, as it enables an off-the-shelf use of the resulting knowledge, as well as further analyses of its strengths and weaknesses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Compiling comprehensive collections of commonsense knowledge (CSK) is an old dream of AI. Besides attempts at manual compilation (Liu and Singh, 2004; Lenat, 1995; Sap et al., 2018) and text extraction (Schubert, 2002; Tandon et al., 2014; Mishra et al., 2017; Romero et al., 2019; Nguyen et al., 2021a), commonsense knowledge compilation from pretrained language models (Bosselut et al., 2019; Hwang et al., 2021; West et al., 2021) has recently emerged. In 2019, Bosselut et al. introduced Commonsense Transformers (COMET), an approach for fine-tuning language models on existing corpora of commonsense assertions. These models have shown promising performance in generating commonsense assertions after being trained on established human-authored commonsense resources such as ATOMIC (Sap et al., 2018) and ATOMIC-2020 (Hwang et al., 2021). More recently, West et al. (2021) extracted commonsense assertions from a general language model, GPT-3 (Brown et al., 2020), using simple prompting techniques. 
Surprisingly, fine-tuning COMET on this machine-authored commonsense corpus enables it to outperform GPT-3, a model 100x its size, in terms of commonsense capabilities.", "cite_spans": [ { "start": 129, "end": 150, "text": "(Liu and Singh, 2004;", "ref_id": "BIBREF12" }, { "start": 151, "end": 163, "text": "Lenat, 1995;", "ref_id": "BIBREF10" }, { "start": 164, "end": 180, "text": "Sap et al., 2018", "ref_id": "BIBREF20" }, { "start": 203, "end": 219, "text": "(Schubert, 2002;", "ref_id": "BIBREF21" }, { "start": 220, "end": 240, "text": "Tandon et al., 2014;", "ref_id": null }, { "start": 241, "end": 261, "text": "Mishra et al., 2017;", "ref_id": "BIBREF13" }, { "start": 262, "end": 282, "text": "Romero et al., 2019;", "ref_id": "BIBREF19" }, { "start": 283, "end": 304, "text": "Nguyen et al., 2021a)", "ref_id": null }, { "start": 373, "end": 396, "text": "(Bosselut et al., 2019;", "ref_id": "BIBREF2" }, { "start": 397, "end": 416, "text": "Hwang et al., 2021;", "ref_id": null }, { "start": 417, "end": 435, "text": "West et al., 2021)", "ref_id": null }, { "start": 467, "end": 526, "text": "Bosselut et al. introduced Commonsense Transformers (COMET)", "ref_id": null }, { "start": 790, "end": 808, "text": "(Sap et al., 2018)", "ref_id": "BIBREF20" }, { "start": 826, "end": 846, "text": "(Hwang et al., 2021)", "ref_id": null }, { "start": 864, "end": 882, "text": "West et al. (2021)", "ref_id": null }, { "start": 952, "end": 972, "text": "(Brown et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite the prominence of this approach (the seminal COMET paper (Bosselut et al., 2019) receiving over 300 citations in just two years), to date, no resource containing commonsense knowledge compiled from any COMET model is publicly available. 
As compilation of such a resource is a non-trivial endeavour, this is a major impediment to research that aims to understand the potential of the approach, or intends to employ its outputs in downstream tasks.", "cite_spans": [ { "start": 65, "end": 88, "text": "(Bosselut et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This resource paper fills this gap. We fine-tune the COMET pipeline on two established resources of concept-centric CSK assertions, CONCEPTNET (Speer et al., 2017) and ASCENT++ (Nguyen et al., 2021a), and execute the pipeline for 10K prominent subjects. Unlike the ATOMIC resources, which were used to train COMET in (Bosselut et al., 2019; Hwang et al., 2021) and have their main focus on events and social interactions, the two resources of choice are mostly about general concepts (e.g., lions can roar, or a car has four wheels). Furthermore, as those two resources were constructed using two fundamentally different methods, crowdsourcing and web text extraction, this enables us to discover the potentially different impacts they have on the COMET models.", "cite_spans": [ { "start": 143, "end": 163, "text": "(Speer et al., 2017)", "ref_id": "BIBREF22" }, { "start": 177, "end": 199, "text": "(Nguyen et al., 2021a)", "ref_id": null }, { "start": 318, "end": 341, "text": "(Bosselut et al., 2019;", "ref_id": "BIBREF2" }, { "start": 342, "end": 361, "text": "Hwang et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "By taking the top-10 inferences for each subject-predicate pair, we obtain four resources, CONCEPTNET (GPT2-XL, BART) and ASCENT++ (GPT2-XL, BART), containing 900K to 1.4M ranked assertions of CSK. 
We perform a detailed evaluation of the intrinsic quality of each resource, including fine-grained precision (typicality and saliency) and recall, derive qualitative insights into the strengths and weaknesses of the approach, and highlight extrinsic use cases enabled by the resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. The materialization of the COMET approach for two language models (GPT2-XL, BART) on two concept-centered commonsense knowledge bases (CONCEPTNET, ASCENT++);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Quantitative and qualitative evaluations of the resulting resources in terms of precision, recall and error categories, showing that, in terms of recall, COMET models outperform crowdsourced construction and are competitive with web text extraction, while exhibiting moderate precision gaps relative to both;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Illustrative use cases of the materialized resources in statement aggregation, join queries, and search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The materialized resources, as well as an interactive browsing interface, are available at https://ascentpp.mpi-inf.mpg.de/comet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Early approaches to CSK compilation relied on expert knowledge engineers (Lenat, 1995) or crowdsourcing (Liu and Singh, 2004), and the latter approach has recently been revived (Sap et al., 2018). 
To overcome the scalability limitations of manual compilation, text extraction has become a second popular paradigm. Following early attempts on linguistic corpora (Mishra et al., 2017), approaches have increasingly targeted larger text corpora like Wikipedia, book scans, or web documents (Tandon et al., 2014; Romero et al., 2019; Nguyen et al., 2021a,b), to build CSK resources of wide coverage and quality. Recently, both approaches have been complemented by knowledge extraction from pretrained language models: Language models like BERT (Devlin et al., 2019) or GPT (Radford et al., 2019; Brown et al., 2020) have seen millions of documents, and latently store associations among terms. While West et al. (2021) used prompting to extract symbolic CSK from GPT-3, Bosselut et al. (2019) proposed to tap this knowledge by supervised learning: The language models are fine-tuned on statements from existing knowledge resources, e.g., trained to predict the object Africa when given the subject-predicate pair (elephant, AtLocation), based on the ConceptNet triple (elephant, AtLocation, Africa). 
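This fine-tuning setup can be pictured as learning simple text-to-text pairs. A minimal sketch of such a linearization; the "[GEN]" separator and the exact prompt format are illustrative assumptions, not necessarily the special tokens COMET actually uses:

```python
def linearize(subject, predicate, obj):
    """Turn a (subject, predicate, object) triple into a (prompt, target)
    pair for fine-tuning a language model. The "[GEN]" separator is an
    illustrative assumption, not necessarily COMET's exact special token."""
    prompt = f"{subject} {predicate} [GEN]"
    return prompt, obj

# Training pair derived from the ConceptNet triple (elephant, AtLocation, Africa):
print(linearize("elephant", "AtLocation", "Africa"))

# At inference time, the fine-tuned model is asked to complete the prompt
# for an unseen subject-predicate pair, e.g. locations of wombats:
print(linearize("wombat", "AtLocation", "")[0])
```

The model is trained to emit the target given the prompt, so that after fine-tuning it can complete prompts for subject-predicate pairs never seen during training.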
After training, they can be used to predict objects for unseen subject-predicate pairs, e.g., locations of wombats.", "cite_spans": [ { "start": 73, "end": 86, "text": "(Lenat, 1995)", "ref_id": "BIBREF10" }, { "start": 104, "end": 125, "text": "(Liu and Singh, 2004)", "ref_id": "BIBREF12" }, { "start": 178, "end": 196, "text": "(Sap et al., 2018)", "ref_id": "BIBREF20" }, { "start": 477, "end": 498, "text": "(Tandon et al., 2014;", "ref_id": null }, { "start": 499, "end": 519, "text": "Romero et al., 2019;", "ref_id": "BIBREF19" }, { "start": 520, "end": 543, "text": "Nguyen et al., 2021a,b)", "ref_id": null }, { "start": 731, "end": 752, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 760, "end": 782, "text": "(Radford et al., 2019;", "ref_id": "BIBREF17" }, { "start": 783, "end": 802, "text": "Brown et al., 2020)", "ref_id": "BIBREF3" }, { "start": 887, "end": 905, "text": "West et al. (2021)", "ref_id": null }, { "start": 957, "end": 979, "text": "Bosselut et al. (2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The approach gained significant attention, and variants are employed in a range of downstream tasks, e.g., commonsense question answering (Bosselut and Choi, 2020), commonsense explanation (Wang et al., 2020), story generation (Guan et al., 2020), or video captioning (Fang et al., 2020).", "cite_spans": [ { "start": 138, "end": 163, "text": "(Bosselut and Choi, 2020)", "ref_id": "BIBREF1" }, { "start": 190, "end": 209, "text": "(Wang et al., 2020)", "ref_id": "BIBREF24" }, { "start": 229, "end": 248, "text": "(Guan et al., 2020)", "ref_id": "BIBREF8" }, { "start": 271, "end": 289, "text": "(Fang et al., 2020", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Yet, to date, no materialized knowledge resource produced by any COMET model is available (AUTOTOMIC from (West et al., 2021) being based 
on prompting GPT-3). The closest to this is a web interface hosted by the AllenAI institute at https://mosaickg.apps.allenai.org/model_comet2020_entities. However, this visualizes only predictions for a single subject, making, e.g., aggregation or counting impossible; moreover, it shows only the top-5 predictions, without scores.", "cite_spans": [ { "start": 106, "end": 125, "text": "(West et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "We follow the implementations in the official code repository 1 of the COMET-ATOMIC-2020 project (Hwang et al., 2021) to compute assertions and to decide on output thresholds.", "cite_spans": [ { "start": 98, "end": 118, "text": "(Hwang et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Training CSKBs. We use two established concept-centered commonsense knowledge bases (CSKBs), CONCEPTNET 5.7 (Speer et al., 2017) and ASCENT++ (Nguyen et al., 2021a), as training resources, considering 13 CSK predicates from each of them: AtLocation, CapableOf, Causes, Desires, HasA, HasPrerequisite, HasProperty, HasSubevent, MadeOf, MotivatedByGoal, PartOf, UsedFor, and ReceivesAction.", "cite_spans": [ { "start": 108, "end": 128, "text": "(Speer et al., 2017)", "ref_id": "BIBREF22" }, { "start": 142, "end": 164, "text": "(Nguyen et al., 2021a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "1. CONCEPTNET (Speer et al., 2017) is arguably the most widely used CSKB, built by crowdsourcing. CONCEPTNET 5.7 is its latest version 2, consisting of 21 million multilingual assertions, spanning CSK as well as general linguistic and taxonomic knowledge. 
We retain English assertions only, resulting in 207,210 training assertions for the above-mentioned predicates.", "cite_spans": [ { "start": 14, "end": 34, "text": "(Speer et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "2. ASCENT++ (Nguyen et al., 2021a) is built by web text extraction, combined with cleaning and ranking approaches. The ASCENT++ KB consists of 2 million English CSK assertions for the 13 mentioned predicates.", "cite_spans": [ { "start": 12, "end": 33, "text": "(Nguyen et al., 2021a", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Language models. We consider two autoregressive language models (LMs) that were also used in the original COMET paper, GPT2-XL (Radford et al., 2019) and BART.", "cite_spans": [ { "start": 127, "end": 149, "text": "(Radford et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Materialization process. We query the fine-tuned COMET models for 10,926 subjects in CONCEPTNET that have at least two assertions for the 13 CSK predicates. For each subject-predicate pair, we use beam search to obtain completions, with different configurations (see Table 1) for BART and GPT2-XL, following the parameters specified in the published code repository and models. We retain the top-10 completions for each subject-predicate pair, with their beam scores (i.e., the sum of the log-softmax values of all generated tokens) returned by the generate function 3 of the Transformers library (Wolf et al., 2020).", "cite_spans": [ { "start": 583, "end": 602, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 268, "end": 275, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Output. 
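The beam score used for ranking the completions above is simply the sum of per-token log-softmax values. A minimal, self-contained sketch of that computation, using toy logits rather than real model outputs:

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over a list of raw logits."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def beam_score(step_logits, token_ids):
    """Sum of the log-softmax value of the chosen token at every generation
    step -- the per-beam quantity reported by Transformers' generate()."""
    return sum(log_softmax(logits)[tok]
               for logits, tok in zip(step_logits, token_ids))

# Toy example: two generation steps over a vocabulary of size 3.
steps = [[2.0, 1.0, 0.1], [0.5, 2.5, 0.2]]
chosen = [0, 1]  # token index picked at each step
print(beam_score(steps, chosen))  # ≈ -0.63
```

Since each term is a log-probability, scores are non-positive, and values closer to 0 indicate more confident generations.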
The resulting resources, CONCEPTNET (GPT2-XL, BART) and ASCENT++ (GPT2-XL, BART), contain 976,296, 1,420,380, 1,271,295, and 1,420,380 assertions after deduplication, respectively, as well as their corresponding beam scores. All are available for browsing, as well as for download, at https://ascentpp.mpi-inf.mpg.de/comet (see a screenshot of the browsing interface in Figure 2).", "cite_spans": [], "ref_spans": [ { "start": 389, "end": 397, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "We perform three kinds of analyses: (1) a quantitative evaluation of the intrinsic quality of the assertions, based on crowdsourcing, (2) a qualitative evaluation that outlines major strengths and weaknesses, and (3) an illustration of use cases enabled by both resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4" }, { "text": "The original paper (Bosselut et al., 2019) only evaluated the top-1 triple per subject-predicate pair. Furthermore, it solely evaluated triples by plausibility, which is a necessary, but only partially sufficient, criterion for being considered commonsense (Chalier et al., 2020).", "cite_spans": [ { "start": 19, "end": 42, "text": "(Bosselut et al., 2019)", "ref_id": "BIBREF2" }, { "start": 255, "end": 277, "text": "(Chalier et al., 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": "4.1" }, { "text": "In the following, we evaluate samples from the generated resources along two precision dimensions, typicality (top-100 assertions per subject) and saliency (top-10 assertions per subject). We also evaluate recall, by measuring the degree to which each resource covers the statements in a human-generated ground truth.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": "4.1" }, { "text": "Precision: Typicality and saliency. 
Following Romero et al. (2019) and Nguyen et al. (2021a), we assess assertions in the CSK resources along two precision dimensions: typicality and saliency, which measure the degree of truth and the degree of relevance of assertions, respectively. We use the Amazon Mechanical Turk (AMT) platform to obtain human judgements. Each dimension is evaluated on a 4-point Likert scale, with an option for no judgement if the annotator is not familiar with the concepts. Assertions are transformed into human-readable sentences using the templates introduced by Hwang et al. (2021). Each assignment is done by three different workers. Following Hwang et al. (2021), any CSK assertion that receives one of the two higher scores on the Likert scale is labelled as Typical or Salient, and one of the two lower scores as Untypical or Unsalient. The final judgement is based on a majority vote.", "cite_spans": [ { "start": 588, "end": 607, "text": "Hwang et al. (2021)", "ref_id": null }, { "start": 672, "end": 691, "text": "Hwang et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": "4.1" }, { "text": "As for the sampling process, for typicality, we draw 500 assertions from each resource when restricting to the top-100 assertions per subject. For saliency, we pick 500 random samples from the pool of top-10 assertions per subject.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": "4.1" }, { "text": "Results are reported in the left part of Table 2. We see a significant drop in the quality of assertions in the LM-based generations compared to the training resources. In terms of the neural models, for both training CSKBs, the BART models demonstrate better typicality than the GPT2-XL ones. Assertions in BART-ASCENT++ also have significantly better saliency than in GPT2-XL-ASCENT++. 
Interestingly, BART-CONCEPTNET is nearly on par with ASCENT++ on both metrics.", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 48, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": "4.1" }, { "text": "Recall. We reuse the CSLB dataset (Devereux et al., 2014) that was processed by Nguyen et al. (2021a) as ground truth for recall evaluation. The CSLB dataset consists of 22.6K human-written sentences about property norms of 638 concepts. To account for minor reformulations, following Nguyen et al. (2021a), we also use embedding-based similarity to match ground-truth sentences with statements in the CSK resources. We specifically rely on precomputed SentenceTransformers embeddings (Reimers and Gurevych, 2019). We also restrict all CSK resources to the top-100 assertions per subject.", "cite_spans": [ { "start": 80, "end": 101, "text": "Nguyen et al. (2021a)", "ref_id": null }, { "start": 285, "end": 306, "text": "Nguyen et al. (2021a)", "ref_id": null }, { "start": 485, "end": 513, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": "4.1" }, { "text": "The evaluation results are shown in the right part of Table 2, where we report recall at similarity thresholds of 0.96, 0.98, and 1.0, as well as resource size. We also plot the recall values at different top-N assertions per subject in Figure 1, with similarity threshold t = 0.98. As one can see, ASCENT++ outperforms both COMET models trained on it even though it is significantly smaller. We see the opposite with the CONCEPTNET-based resources, where the COMET models generate resources of better coverage than their training data. Our presumption is that the LMs profit more from manually curated resources like CONCEPTNET, but hardly add value to resources that were extracted from the web, as the LMs have not seen fundamentally different text. 
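The threshold-based matching behind this recall evaluation can be sketched as follows; a toy bag-of-words vector stands in for the SentenceTransformers embeddings actually used, so only the matching logic, not the embedding quality, is illustrated:

```python
import math
from collections import Counter

def embed(sentence):
    """Toy bag-of-words 'embedding'; the actual evaluation relies on
    precomputed SentenceTransformers sentence embeddings instead."""
    return Counter(sentence.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recall_at_threshold(ground_truth, kb_statements, t):
    """Fraction of ground-truth sentences matched by at least one KB
    statement with cosine similarity >= t."""
    kb_vecs = [embed(s) for s in kb_statements]
    hits = sum(1 for g in ground_truth
               if any(cosine(embed(g), v) >= t for v in kb_vecs))
    return hits / len(ground_truth)

truth = ["a lion can roar", "a car has four wheels"]
kb = ["a lion can roar loudly", "lions hunt at night"]
print(recall_at_threshold(truth, kb, 0.7))   # 0.5: only the lion sentence matches
print(recall_at_threshold(truth, kb, 0.98))  # 0.0: no near-exact match
```

Raising the threshold t toward 1.0 demands near-verbatim matches, which is why reported recall drops as t increases.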
Furthermore, in contrast to precision, the GPT2-XL models obtain better recall than the BART models on both input CSKBs.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 234, "end": 242, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Qualitative observations", "sec_num": "4.2" }, { "text": "LMs have the strength to generate an open-ended set of objects, even for subjects seen rarely or not at all in the training data. For example, while CONCEPTNET stores only one location for rabbit: \"a meadow\", both BART- and GPT2-XL-CONCEPTNET can generalize to other correct locations, such as wilderness, zoo, cage, pet store, etc. In the recall evaluation, we pointed out that CONCEPTNET, a manually built CSK resource of relatively small size, considerably benefits from LM generations, as they improve the coverage of the resource substantially.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative observations", "sec_num": "4.2" }, { "text": "However, as indicated in the precision evaluation, LM generations are generally of lower precision than those in the training data. Common error categories we observe are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative observations", "sec_num": "4.2" }, { "text": "\u2022 Co-occurrence misreadings: LMs frequently predict objects that merely co-occur frequently with the subject, e.g., (locomotive, AtLocation, bus stop), (running, CapableOf, put on shoes), (war, Desires, kill people), (supermarket, CapableOf, buy milk).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative observations", "sec_num": "4.2" }, { "text": "\u2022 Subject-object-copying: LMs too often repeat the given subject in predictions. 
For instance, 45 of 130 objects generated by BART-CONCEPTNET for the subject chicken also contain chicken, such as (chicken, CapableOf, kill/eat/cook chicken) or (chicken, UsedFor, feed chicken).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative observations", "sec_num": "4.2" }, { "text": "\u2022 Quantity confusion: LMs struggle to distinguish quantities. For example, GPT2-XL-CONCEPTNET generates that bike has four wheels (top-1 prediction), and then also two wheels (rank 3), three wheels (rank 4) and twelve wheels (rank 5). This weakness in dealing with numbers is a known issue of embedding-based approaches (Berg-Kirkpatrick and Spokoyny, 2020).", "cite_spans": [ { "start": 330, "end": 366, "text": "(Berg-Kirkpatrick and Spokoyny, 2020", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Qualitative observations", "sec_num": "4.2" }, { "text": "\u2022 Redundancy: Generated objects often overlap, bloating the output with redundancies. Most common are repetitions of singular/plural nouns, e.g., the top-2 generations by BART-CONCEPTNET for (doctor, CapableOf): \"visit patient\" and \"visit patients\". Redundancies also include paraphrases, e.g., (doctor, CapableOf, visit patients / see patients); or (doctor, CapableOf, prescribe medication / prescribe drug / prescribe medicine) (GPT2-XL-ASCENT++ generations). Clustering might alleviate this issue (Nguyen et al., 2021a).", "cite_spans": [ { "start": 495, "end": 517, "text": "(Nguyen et al., 2021a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Qualitative observations", "sec_num": "4.2" }, { "text": "Beyond systematic evaluation, materialized resources enable a wide set of downstream use cases, for example context-enriched zero-shot question answering (Petroni et al., 2020), or KB-based commonsense explanation (Wang et al., 2020). 
As examples, we illustrate four basic types of analyses enabled by materialization: (1) frequency aggregation, (2) join queries, (3) ranking, and (4) text search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Downstream use of materialized resources", "sec_num": "4.3" }, { "text": "Frequency aggregation. Materialized resources make it possible to count frequencies. In Table 3, we show the three most common objects for each predicate in the GPT2-XL-CONCEPTNET resource. Interestingly, the third most common location of items in the KB is \"sock drawer\", which is only ranked as the 190th most common location in CONCEPTNET. Similarly, the top-3 objects for CapableOf in the generated KB rarely occur in the training data.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Downstream use of materialized resources", "sec_num": "4.3" }, { "text": "Join queries. One level further, materialized knowledge enables the construction of join queries. 
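Over a materialized triple table, such joins are ordinary relational operations. A sketch over a small hypothetical triple list (illustrative data, not actual COMET output):

```python
# Hypothetical mini-KB of materialized (subject, predicate, object) triples.
triples = [
    ("chicken", "CapableOf", "eat chicken"),
    ("lion", "CapableOf", "eat zebra"),
    ("eat breakfast", "HasSubevent", "swallow"),
    ("swallow", "HasSubevent", "breathe"),
]

# "Animals that eat themselves": self-join of the subject against the object.
self_eaters = sorted(
    s for s, p, o in triples
    if p == "CapableOf" and o == f"eat {s}"
)

# "Subevents of subevents": join HasSubevent with itself on object = subject.
sub_subevents = sorted(
    o2 for s1, p1, o1 in triples
    for s2, p2, o2 in triples
    if p1 == p2 == "HasSubevent" and o1 == s2
)

print(self_eaters)    # ['chicken']
print(sub_subevents)  # ['breathe']
```

The same joins could of course be expressed in SQL once the triples are loaded into a database table.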
For example, we can formulate conjunctive queries like:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Downstream use of materialized resources", "sec_num": "4.3" }, { "text": "\u2022 Animals that eat themselves include chicken, flies, grasshopper, mice, penguin, worm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Downstream use of materialized resources", "sec_num": "4.3" }, { "text": "\u2022 The most frequent subevents of subevents are: breathe, swallow, hold breath, think, smile.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Downstream use of materialized resources", "sec_num": "4.3" }, { "text": "\u2022 The most common parts of locations are: beaches, seeds, lot of trees, peel, more than one meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Downstream use of materialized resources", "sec_num": "4.3" }, { "text": "Ranking. Since statements in our materialized resources come with scores, it becomes possible to rank assertions locally and globally, or to compare statements pairwise. For example, in GPT2-XL-CONCEPTNET, the triple (librarian, AtLocation, library), which is at rank 140, has a score of \u22120.048, which is much higher than that of (elephant, CapableOf, climb tree) (score = \u22120.839, ranked 638,048 globally).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Downstream use of materialized resources", "sec_num": "4.3" }, { "text": "Text search. Finally, we can use materialized resources for text search. 
For example, we can search in GPT2-XL-CONCEPTNET for all assertions that include the term \"airplane\", finding expected matches like (airplane, AtLocation, airport) and (flight attendant, CapableOf, travel on airplane), as well as surprising ones like (scrap paper, UsedFor, making paper airplane) and (traveling, HasSubevent, sleeping on airplane).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Downstream use of materialized resources", "sec_num": "4.3" }, { "text": "We introduced four CSKBs computed using two COMET models (BART and GPT2-XL) trained on two existing CSK resources (CONCEPTNET and ASCENT++). Our findings are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "1. The COMET methodology produces better results on modest manually curated resources (CONCEPTNET) than on larger web-extracted resources (ASCENT++).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "2. COMET's recall can significantly outperform that of modest manually curated resources (CONCEPTNET), and reach that of large web-extracted ones (ASCENT++).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "3. In terms of precision, a significant gap remains to manual curation, both in typicality and saliency. 
Relative to web extraction, a moderate gap remains in terms of statement typicality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We also identified common problems of the COMET generations, such as co-occurrence misreadings, subject copying, and redundancies, which may be the subject of further research regarding post-filtering and clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://github.com/allenai/comet-atomic-2020/ 2 https://github.com/commonsense/conceptnet5/wiki/Downloads", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An empirical investigation of contextualized number prediction", "authors": [ { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Spokoyny", "suffix": "" } ], "year": 2020, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taylor Berg-Kirkpatrick and Daniel Spokoyny. 2020. An empirical investigation of contextualized number prediction. 
In EMNLP.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dynamic knowledge graph construction for zero-shot commonsense question answering", "authors": [ { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bosselut and Yejin Choi. 2020. Dynamic knowledge graph construction for zero-shot commonsense question answering. In AAAI.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "COMET: commonsense transformers for automatic knowledge graph construction", "authors": [ { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Chaitanya", "middle": [], "last": "Malaviya", "suffix": "" }, { "first": "Asli", "middle": [], "last": "\u00c7elikyilmaz", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli \u00c7elikyilmaz, and Yejin Choi. 2019. COMET: commonsense transformers for automatic knowledge graph construction. In ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Language models are few-shot learners", "authors": [ { "first": "Tom", "middle": [ "B" ], "last": "Brown", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom B. Brown et al. 2020. Language models are few-shot learners. 
In NeurIPS.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Joint reasoning for multi-faceted commonsense knowledge", "authors": [ { "first": "Yohan", "middle": [], "last": "Chalier", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Razniewski", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2020, "venue": "AKBC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yohan Chalier, Simon Razniewski, and Gerhard Weikum. 2020. Joint reasoning for multi-faceted commonsense knowledge. In AKBC.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The centre for speech, language and the brain (CSLB) concept property norms", "authors": [ { "first": "Barry", "middle": [ "J" ], "last": "Devereux", "suffix": "" }, { "first": "Lorraine", "middle": [ "K" ], "last": "Tyler", "suffix": "" }, { "first": "Jeroen", "middle": [], "last": "Geertzen", "suffix": "" }, { "first": "Billi", "middle": [], "last": "Randall", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barry J Devereux, Lorraine K Tyler, Jeroen Geertzen, and Billi Randall. 2014. The centre for speech, language and the brain (CSLB) concept property norms.
Behavior research methods.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Video2commonsense: Generating commonsense descriptions to enrich video captioning", "authors": [ { "first": "Zhiyuan", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Tejas", "middle": [], "last": "Gokhale", "suffix": "" }, { "first": "Pratyay", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Chitta", "middle": [], "last": "Baral", "suffix": "" }, { "first": "Yezhou", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2020, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiyuan Fang, Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2020. Video2commonsense: Generating commonsense descriptions to enrich video captioning.
In EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A knowledge-enhanced pretraining model for commonsense story generation", "authors": [ { "first": "Jian", "middle": [], "last": "Guan", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zhihao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A knowledge-enhanced pre-training model for commonsense story generation. TACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "(Comet-)Atomic 2020: On symbolic and neural commonsense knowledge graphs", "authors": [ { "first": "Jena", "middle": [ "D" ], "last": "Hwang", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Le Bras", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Da", "suffix": "" }, { "first": "Keisuke", "middle": [], "last": "Sakaguchi", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2021, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. (Comet-)Atomic 2020: On symbolic and neural commonsense knowledge graphs. In AAAI.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Cyc: A large-scale investment in knowledge infrastructure", "authors": [ { "first": "Douglas", "middle": [ "B" ], "last": "Lenat", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas B Lenat. 1995. Cyc: A large-scale investment in knowledge infrastructure.
CACM.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ves", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Conceptnet: a practical commonsense reasoning tool-kit", "authors": [ { "first": "Hugo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Push", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2004, "venue": "BT technology journal", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Liu and Push Singh. 2004. Conceptnet: a practical commonsense reasoning tool-kit.
BT technology journal.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Domain-targeted, high precision knowledge extraction", "authors": [ { "first": "Bhavana", "middle": [], "last": "Dalvi Mishra", "suffix": "" }, { "first": "Niket", "middle": [], "last": "Tandon", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhavana Dalvi Mishra, Niket Tandon, and Peter Clark. 2017. Domain-targeted, high precision knowledge extraction. TACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Advanced semantics for commonsense knowledge extraction", "authors": [ { "first": "Tuan-Phong", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Razniewski", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tuan-Phong Nguyen, Simon Razniewski, and Gerhard Weikum. 2021b. Advanced semantics for commonsense knowledge extraction.
In WWW.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "How context affects language models' factual predictions", "authors": [ { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Aleksandra", "middle": [], "last": "Piktus", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [ "H" ], "last": "Miller", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2020, "venue": "AKBC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rockt\u00e4schel, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2020. How context affects language models' factual predictions. In AKBC.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners.
OpenAI blog, 1(8):9.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Sentence-BERT: Sentence embeddings using siamese BERT-networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In EMNLP.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Commonsense properties from query logs and question answering forums", "authors": [ { "first": "Julien", "middle": [], "last": "Romero", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Razniewski", "suffix": "" }, { "first": "Koninika", "middle": [], "last": "Pal", "suffix": "" }, { "first": "Jeff", "middle": [ "Z" ], "last": "Pan", "suffix": "" }, { "first": "Archit", "middle": [], "last": "Sakhadeo", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2019, "venue": "CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julien Romero, Simon Razniewski, Koninika Pal, Jeff Z. Pan, Archit Sakhadeo, and Gerhard Weikum. 2019. Commonsense properties from query logs and question answering forums.
In CIKM.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Atomic: An atlas of machine commonsense for if-then reasoning", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "LeBras", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Allaway", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Lourie", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Roof", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2018. Atomic: An atlas of machine commonsense for if-then reasoning. In AAAI.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Can we derive general world knowledge from texts?", "authors": [ { "first": "Lenhart", "middle": [], "last": "Schubert", "suffix": "" } ], "year": 2002, "venue": "HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lenhart Schubert. 2002. Can we derive general world knowledge from texts?
In HLT.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "authors": [ { "first": "Robyn", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chin", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" } ], "year": 2017, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "WebChild: harvesting and organizing commonsense knowledge from the web", "authors": [ { "first": "Niket", "middle": [], "last": "Tandon", "suffix": "" }, { "first": "Gerard", "middle": [], "last": "De Melo", "suffix": "" }, { "first": "Fabian", "middle": [ "M" ], "last": "Suchanek", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2014, "venue": "WSDM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niket Tandon, Gerard de Melo, Fabian M. Suchanek, and Gerhard Weikum. 2014. WebChild: harvesting and organizing commonsense knowledge from the web.
In WSDM.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Semeval-2020 task 4: Commonsense validation and explanation", "authors": [ { "first": "Cunxiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shuailong", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Yili", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Yilong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu, and Yue Zhang. 2020. Semeval-2020 task 4: Commonsense validation and explanation. In SemEval.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Symbolic knowledge distillation: from general language models to commonsense models", "authors": [ { "first": "Peter", "middle": [], "last": "West", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Hessel", "suffix": "" }, { "first": "Jena", "middle": [ "D" ], "last": "Hwang", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Le Bras", "suffix": "" }, { "first": "Ximing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Welleck", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2110.07178" ] }, "num": null, "urls": [], "raw_text": "Peter West, Chandra Bhagavatula, Jack Hessel, Jena D Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2021. Symbolic knowledge distillation: from general language models to commonsense models.
arXiv preprint arXiv:2110.07178.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [], "last": "Le Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "EMNLP: System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020.
Transformers: State-of-the-art natural language processing. In EMNLP: System Demonstrations.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Resource recall in relation to resource size, at similarity threshold t = 0.98.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Web interface showing top-10 assertions per predicate in six CSK resources. The number in grey next to a CSKB indicates the total number of assertions for the corresponding subject-predicate pair in the KB.", "uris": null, "type_str": "figure" }, "TABREF1": { "text": "Configurations for beam-search decoders.", "type_str": "table", "html": null, "num": null, "content": "