{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:57:36.846895Z" }, "title": "How Can We Know What Language Models Know?", "authors": [ { "first": "Zhengbao", "middle": [], "last": "Jiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "zhengbaj@cs.cmu.edu" }, { "first": "Frank", "middle": [ "F" ], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "" }, { "first": "Jun", "middle": [], "last": "Araki", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bosch Research North America", "location": {} }, "email": "jun.araki@us.bosch.com" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "gneubig@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as ''Obama is a by profession''. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as ''Obama worked as a '' may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as ''Obama is a by profession''. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as ''Obama worked as a '' may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. 
Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recent years have seen the primary role of language models (LMs) transition from generating or evaluating the fluency of natural text (Mikolov and Zweig, 2012; Merity et al., 2018; Melis et al., 2018; Gamon et al., 2005) to being a powerful tool for text understanding. This understanding has mainly been achieved through the use of language modeling as a pre-training task for feature extractors, where the hidden vectors learned through a language modeling objective are then used in * The first two authors contributed equally. down-stream language understanding systems (Dai and Le, 2015; Melamud et al., 2016; Peters et al., 2018; Devlin et al., 2019) .", "cite_spans": [ { "start": 134, "end": 159, "text": "(Mikolov and Zweig, 2012;", "ref_id": "BIBREF32" }, { "start": 160, "end": 180, "text": "Merity et al., 2018;", "ref_id": "BIBREF31" }, { "start": 181, "end": 200, "text": "Melis et al., 2018;", "ref_id": "BIBREF30" }, { "start": 201, "end": 220, "text": "Gamon et al., 2005)", "ref_id": "BIBREF14" }, { "start": 574, "end": 592, "text": "(Dai and Le, 2015;", "ref_id": "BIBREF9" }, { "start": 593, "end": 614, "text": "Melamud et al., 2016;", "ref_id": "BIBREF29" }, { "start": 615, "end": 635, "text": "Peters et al., 2018;", "ref_id": "BIBREF34" }, { "start": 636, "end": 656, "text": "Devlin et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Interestingly, it is also becoming apparent that LMs 1 themselves can be used as a tool for text understanding by formulating queries in natural language and either generating textual answers directly (McCann et al., 2018; Radford et al., 2019) , or assessing multiple choices and picking the most likely one (Zweig and Burges, 2011; Rajani et al., 2019) . For example, LMs have been used to answer factoid questions (Radford et al., 2019) , answer common sense queries (Trinh and Le, 2018; Sap et al., 2019) , or extract factual knowledge about relations between entities (Petroni et al., 2019; Baldini Soares et al., 2019) . 
Regardless of the end task, the knowledge contained in LMs is probed by providing a prompt, and letting the LM either generate the continuation of a prefix (e.g., ''Barack Obama was born in ''), or predict missing words in a cloze-style template (e.g., ''Barack Obama is a by profession'').", "cite_spans": [ { "start": 201, "end": 222, "text": "(McCann et al., 2018;", "ref_id": "BIBREF28" }, { "start": 223, "end": 244, "text": "Radford et al., 2019)", "ref_id": "BIBREF38" }, { "start": 309, "end": 333, "text": "(Zweig and Burges, 2011;", "ref_id": "BIBREF54" }, { "start": 334, "end": 354, "text": "Rajani et al., 2019)", "ref_id": "BIBREF39" }, { "start": 417, "end": 439, "text": "(Radford et al., 2019)", "ref_id": "BIBREF38" }, { "start": 470, "end": 490, "text": "(Trinh and Le, 2018;", "ref_id": "BIBREF50" }, { "start": 491, "end": 508, "text": "Sap et al., 2019)", "ref_id": "BIBREF42" }, { "start": 573, "end": 595, "text": "(Petroni et al., 2019;", "ref_id": "BIBREF36" }, { "start": 596, "end": 624, "text": "Baldini Soares et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, while this paradigm has been used to achieve a number of intriguing results regarding the knowledge expressed by LMs, they usually rely on prompts that were manually created based on the intuition of the experimenter. These manually created prompts (e.g., ''Barack Obama was born in '') might be sub-optimal because LMs might have learned target knowledge from substantially different contexts (e.g., ''The birth place of Barack Obama is Honolulu, Hawaii.'') during their training. Thus it is quite possible that a fact that the LM does know cannot be retrieved due to the prompts not being effective queries for the fact. Thus, existing results are simply a lower bound on the extent of knowledge contained Figure 1 : Top-5 predictions and their log probabilities using different prompts (manual, mined, and paraphrased) to query BERT. Correct answer is underlined.", "cite_spans": [ { "start": 447, "end": 467, "text": "Honolulu, Hawaii.'')", "ref_id": null }, { "start": 798, "end": 830, "text": "(manual, mined, and paraphrased)", "ref_id": null } ], "ref_spans": [ { "start": 717, "end": 725, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "in LMs, and in fact, LMs may be even more knowledgeable than these initial results indicate. In this paper we ask the question: ''How can we tighten this lower bound and get a more accurate estimate of the knowledge contained in state-of-the-art LMs?'' This is interesting both scientifically, as a probe of the knowledge that LMs contain, and from an engineering perspective, as it will result in higher recall when using LMs as part of a knowledge extraction system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In particular, we focus on the setting of Petroni et al. (2019) who examine extracting knowledge regarding the relations between entities (definitions in \u00a7 2). We propose two automatic methods to systematically improve the breadth and quality of the prompts used to query the existence of a relation ( \u00a7 3). 
Specifically, as shown in Figure 1 , these are mining-based methods inspired by previous relation extraction methods (Ravichandran and Hovy, 2002) , and paraphrasing-based methods that take a seed prompt (either manually created or automatically mined), and paraphrase it into several other semantically similar expressions. Further, because different prompts may work better when querying for different subjectobject pairs, we also investigate lightweight ensemble methods to combine the answers from different prompts together ( \u00a7 4).", "cite_spans": [ { "start": 57, "end": 63, "text": "(2019)", "ref_id": "BIBREF21" }, { "start": 425, "end": 454, "text": "(Ravichandran and Hovy, 2002)", "ref_id": "BIBREF40" } ], "ref_spans": [ { "start": 334, "end": 342, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We experiment on the LAMA benchmark (Petroniet al., 2019) , which is an English-language benchmark devised to test the ability of LMs to retrieve relations between entities ( \u00a7 5). We first demonstrate that improved prompts significantly improve accuracy on this task, with the one-best prompt extracted by our method raising accuracy from 31.1% to 34.1% on BERT-base (Devlin et al., 2019) , with similar gains being obtained with BERT-large as well. We further demonstrate that using a diversity of prompts through ensembling further improves accuracy to 39.6%. We perform extensive analysis and ablations, gleaning insights both about how to best query the knowledge stored in LMs and about potential directions for incorporating knowledge into LMs themselves. Finally, we have released the resulting LM Prompt And Query Archive (LPAQA) to facilitate future experiments on probing knowledge contained in LMs.", "cite_spans": [ { "start": 36, "end": 57, "text": "(Petroniet al., 2019)", "ref_id": null }, { "start": 368, "end": 389, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Retrieving factual knowledge from LMs is quite different from querying standard declarative knowledge bases (KBs). In standard KBs, users formulate their information needs as a structured query defined by the KB schema and query language. For example, SELECT ?y WHERE {wd:Q76 wdt:P19 ?y} is a SPARQL query to search the birth place of Barack Obama. In contrast, LMs must be queried by natural language prompts, such as ''Barack Obama was born in '', and the word assigned the highest probability in the blank will be returned as the answer. Unlike deterministic queries on KBs, this provides no guarantees of correctness or success.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval from LMs", "sec_num": "2" }, { "text": "While the idea of prompts is common to methods for extracting many varieties of knowledge from LMs, in this paper we specifically follow the formulation of Petroni et al. (2019) , where factual knowledge is in the form of triples x, r, y . Here x indicates the subject, y indicates the object, and r is their corresponding relation. To query the LM, r is associated with a cloze-style prompt t r consisting of a sequence of tokens, two of which are placeholders for subjects and objects (e.g., ''x plays at y position''). 
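As an illustration (a minimal sketch assuming the HuggingFace transformers library and a BERT-base checkpoint, with [X]/[Y] marking the subject and object slots; this is not the exact implementation used in our experiments), the cloze-style querying can be written as:

import torch
from transformers import BertForMaskedLM, BertTokenizer

# Assumed setup: any masked-LM checkpoint works; bert-base-cased is illustrative.
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def query(template, subject, top_k=5):
    # Fill the subject into a cloze template such as "[X] plays at [Y] position"
    # and return the top-k single-token candidates for the object slot.
    prompt = template.replace("[X]", subject).replace("[Y]", tokenizer.mask_token)
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    log_probs = torch.log_softmax(logits, dim=-1)
    values, indices = torch.topk(log_probs, top_k)
    return [(tokenizer.convert_ids_to_tokens(int(i)), float(v))
            for v, i in zip(values, indices)]

# Example: query("[X] plays at [Y] position", "LeBron James")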
The existence of the fact in the LM is assessed by replacing x with the surface form of the subject, and letting the model predict the missing object (e.g., ''LeBron James plays at ___ position''): 2", "cite_spans": [ { "start": 156, "end": 177, "text": "Petroni et al. (2019)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval from LMs", "sec_num": "2" }, { "text": "\u0177 = arg max_{y\u2032 \u2208 V} P_LM(y\u2032 | x, t_r),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval from LMs", "sec_num": "2" }, { "text": "where V is the vocabulary, and P_LM(y\u2032 | x, t_r) is the LM probability of predicting y\u2032 in the blank conditioned on the other tokens (i.e., the subject and the prompt). 3 We say that an LM has knowledge of a fact if \u0177 is the same as the ground-truth y. Because we would like our prompts to most effectively elicit any knowledge contained in the LM itself, a ''good'' prompt should trigger the LM to predict the ground-truth objects as often as possible.", "cite_spans": [ { "start": 172, "end": 173, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval from LMs", "sec_num": "2" }, { "text": "In previous work (McCann et al., 2018; Radford et al., 2019; Petroni et al., 2019) , t r has been a single manually defined prompt based on the intuition of the experimenter. As noted in the introduction, this method has no guarantee of being optimal, and thus we propose methods that learn effective prompts from a small set of training data consisting of gold subject-object pairs for each relation.", "cite_spans": [ { "start": 17, "end": 38, "text": "(McCann et al., 2018;", "ref_id": "BIBREF28" }, { "start": 39, "end": 60, "text": "Radford et al., 2019;", "ref_id": "BIBREF38" }, { "start": 61, "end": 82, "text": "Petroni et al., 2019)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Retrieval from LMs", "sec_num": "2" }, { "text": "First, we tackle prompt generation: the task of generating a set of prompts {t_{r,i}}_{i=1}^{T} for each relation r, where at least some of the prompts effectively trigger LMs to predict ground-truth objects. We employ two practical methods to either mine prompt candidates from a large corpus ( \u00a7 3.1) or diversify a seed prompt through paraphrasing ( \u00a7 3.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prompt Generation", "sec_num": "3" }, { "text": "Our first method is inspired by template-based relation extraction methods (Agichtein and Gravano, 2000; Ravichandran and Hovy, 2002) , which are based on the observation that words in the vicinity of the subject x and object y in a large corpus often describe the relation r. Based on this intuition, we first identify all the Wikipedia sentences that contain both subjects and objects of a specific relation r using the assumption of distant supervision, then propose two methods to extract prompts.", "cite_spans": [ { "start": 74, "end": 103, "text": "(Agichtein and Gravano, 2000;", "ref_id": "BIBREF0" }, { "start": 104, "end": 132, "text": "Ravichandran and Hovy, 2002)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Mining-based Generation", "sec_num": "3.1" }, { "text": "Middle-word Prompts Following the observation that words in the middle of the subject and object are often indicative of the relation, we directly use those words as prompts. 
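As a rough sketch of this conversion (assuming the subject appears before the object and each appears exactly once in the sentence; the helper name is illustrative):

def middle_word_prompt(sentence, subject, obj):
    # Turn a distant-supervision sentence into a cloze template by keeping the
    # words between the subject and the object and replacing both with placeholders.
    # Returns None when the simple "subject ... object" pattern does not apply.
    s_pos, o_pos = sentence.find(subject), sentence.find(obj)
    if s_pos == -1 or o_pos == -1 or s_pos >= o_pos:
        return None
    middle = sentence[s_pos + len(subject):o_pos]
    return "[X]" + middle + "[Y]"

# middle_word_prompt("Barack Obama was born in Hawaii", "Barack Obama", "Hawaii")
# -> "[X] was born in [Y]"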
For example, ''Barack Obama was born in Hawaii'' is converted into a prompt ''x was born in y'' by replacing the subject and the object with placeholders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining-based Generation", "sec_num": "3.1" }, { "text": "Dependency-based Prompts Toutanova et al. (2015) note that in cases of templates where words do not appear in the middle (e.g., ''The capital of France is Paris''), templates based on syntactic analysis of the sentence can be more effective for relation extraction. We follow this insight in our second strategy for prompt creation, which parses sentences with a dependency parser to identify the shortest dependency path between the subject and object, then uses the phrase spanning from the leftmost word to the rightmost word in the dependency path as a prompt. For instance, the dependency path in the above example is ''France", "cite_spans": [ { "start": 25, "end": 48, "text": "Toutanova et al. (2015)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Mining-based Generation", "sec_num": "3.1" }, { "text": "pobj \u2190 \u2212 \u2212 of prep \u2190 \u2212 \u2212 capital nsubj \u2190\u2212\u2212 is attr \u2212 \u2212 \u2192 Paris'',", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining-based Generation", "sec_num": "3.1" }, { "text": "where the leftmost and rightmost words are ''capital'' and ''Paris'', giving a prompt of ''capital of x is y''.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining-based Generation", "sec_num": "3.1" }, { "text": "Notably, these mining-based methods do not rely on any manually created prompts, and can thus be flexibly applied to any relation where we can obtain a set of subject-object pairs. This will result in diverse prompts, covering a wide variety of ways that the relation may be expressed in text. However, it may also be prone to noise, as many prompts acquired in this way may not be very indicative of the relation (e.g., ''x, y''), even if they are frequent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mining-based Generation", "sec_num": "3.1" }, { "text": "Our second method for generating prompts is more targeted-it aims to improve lexical diversity while remaining relatively faithful to the original prompt. Specifically, we do so by performing paraphrasing over the original prompt into other semantically similar or identical expressions. For example, if our original prompt is ''x shares a border with y'', it may be paraphrased into ''x has a common border with y'' and ''x adjoins y''. This is conceptually similar to query expansion techniques used in information retrieval that reformulate a given query to improve retrieval performance (Carpineto and Romano, 2012) .", "cite_spans": [ { "start": 591, "end": 619, "text": "(Carpineto and Romano, 2012)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing-based Generation", "sec_num": "3.2" }, { "text": "Although many methods could be used for paraphrasing (Romano et al., 2006; Bhagat and Ravichandran, 2008) , we follow the simple method of using back-translation (Sennrich et al., 2016; Mallinson et al., 2017) to first translate the initial prompt into B candidates in another language, each of which is then back-translated into B candidates in the original language. 
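A schematic sketch of this expansion step (translate_fwd and translate_bwd are assumed callables returning the B most probable translations of a sentence together with their log probabilities, e.g., thin wrappers around pre-trained translation models; this is not the exact pipeline code):

def paraphrase_candidates(seed_prompt, translate_fwd, translate_bwd, B=7):
    # Expand a seed prompt into up to B*B round-trip candidates, each paired with
    # its round-trip score (forward + backward log probability).
    candidates = []
    for pivot, fwd_logp in translate_fwd(seed_prompt, num_outputs=B):
        for paraphrase, bwd_logp in translate_bwd(pivot, num_outputs=B):
            candidates.append((paraphrase, fwd_logp + bwd_logp))
    return candidates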
We then rank the B^2 candidates based on their round-trip probability (i.e., P_forward(t\u2032 | t) \u2022 P_backward(t\u2033 | t\u2032), where t is the initial prompt, t\u2032 is the translated prompt in the other language, and t\u2033 is the final prompt), and keep the top T prompts.", "cite_spans": [ { "start": 53, "end": 74, "text": "(Romano et al., 2006;", "ref_id": "BIBREF41" }, { "start": 75, "end": 105, "text": "Bhagat and Ravichandran, 2008)", "ref_id": "BIBREF6" }, { "start": 162, "end": 185, "text": "(Sennrich et al., 2016;", "ref_id": "BIBREF43" }, { "start": 186, "end": 209, "text": "Mallinson et al., 2017)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing-based Generation", "sec_num": "3.2" }, { "text": "In the previous section, we described methods to generate a set of candidate prompts {t_{r,i}}_{i=1}^{T} for a particular relation r. Each of these prompts may be more or less effective at eliciting knowledge from the LM, and thus it is necessary to decide how to use these generated prompts at test time. In this section, we describe three methods to do so.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prompt Selection and Ensembling", "sec_num": "4" }, { "text": "For each prompt, we can measure its accuracy of predicting the ground-truth objects (on a training dataset) using:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-1 Prompt Selection", "sec_num": "4.1" }, { "text": "A(t_{r,i}) = \u2211_{(x,y) \u2208 R} \u03b4(y = arg max_{y\u2032} P_LM(y\u2032 | x, t_{r,i})) / |R|,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-1 Prompt Selection", "sec_num": "4.1" }, { "text": "where R is a set of subject-object pairs with relation r, and \u03b4(\u2022) is Kronecker's delta function, returning 1 if the internal condition is true and 0 otherwise. In the simplest method for querying the LM, we choose the prompt with the highest accuracy and query using only this prompt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-1 Prompt Selection", "sec_num": "4.1" }, { "text": "Next we examine methods that use not only the top-1 prompt, but combine together multiple prompts. The advantage to this is that the LM may have observed different entity pairs in different contexts within its training data, and having a variety of prompts may allow for elicitation of knowledge that appeared in these different contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rank-based Ensemble", "sec_num": "4.2" }, { "text": "Our first method for ensembling is a parameter-free method that averages the predictions of the top-ranked prompts. 
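Both Top-1 selection and the rank-based ensemble described next rely on ranking candidate prompts by their training-set accuracy A(t_{r,i}); a minimal sketch, where predict(prompt, subject) is an assumed helper returning the LM's single highest-probability object (e.g., built from the querying sketch in Section 2):

def prompt_accuracy(prompt, train_pairs, predict):
    # Fraction of training (subject, object) pairs for which this prompt makes
    # the LM rank the gold object first (the A(t_{r,i}) statistic).
    correct = sum(1 for subj, obj in train_pairs if predict(prompt, subj) == obj)
    return correct / len(train_pairs)

def rank_prompts(prompts, train_pairs, predict):
    # Sort candidate prompts by training-set accuracy; the first element is the
    # Top-1 prompt, and the first K are the ones combined by the rank-based ensemble.
    return sorted(prompts, key=lambda p: prompt_accuracy(p, train_pairs, predict),
                  reverse=True)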
We rank all the prompts based on their accuracy of predicting the objects on the training set, and use the average log probabilities 4 from the top K prompts to calculate the probability of the object:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rank-based Ensemble", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s(y|x, r) = K i=1 1 K log P LM (y|x, t r,i ),", "eq_num": "(1)" } ], "section": "Rank-based Ensemble", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (y|x, r) = softmax(s(\u2022|x, r)) y ,", "eq_num": "(2)" } ], "section": "Rank-based Ensemble", "sec_num": "4.2" }, { "text": "where t r,i is the prompt ranked at the i-th position.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rank-based Ensemble", "sec_num": "4.2" }, { "text": "Here, K is a hyper-parameter, where a small K focuses on the few most accurate prompts, and a large K increases diversity of the prompts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rank-based Ensemble", "sec_num": "4.2" }, { "text": "The above method treats the top K prompts equally, which is sub-optimal given some prompts are more reliable than others. Thus, we also propose a method that directly optimizes prompt weights. Formally, we re-define the score in Equation 1 as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimized Ensemble", "sec_num": "4.3" }, { "text": "s(y|x, r) = T i=1 P \u03b8 r (t r,i |r) log P LM (y|x, t r,i ), (3) where P \u03b8 r (t r,i |r) = softmax(\u03b8 r )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimized Ensemble", "sec_num": "4.3" }, { "text": "is a distribution over prompts parameterized by \u03b8 r , a T -sized realvalue vector. For every relation, we learn to score a different set of T candidate prompts, so the total number of parameters is T times the number of relations. The parameter \u03b8 r is optimized to maximize the probability of the gold-standard objects P (y|x, r) over training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimized Ensemble", "sec_num": "4.3" }, { "text": "In this section, we assess the extent to which our prompts can improve fact prediction performance, raising the lower bound on the knowledge we discern is contained in LMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.1" }, { "text": "Dataset As data, we use the T-REx subset (ElSahar et al., 2018) of the LAMA benchmark (Petroni et al., 2019) , which has a broader set of 41 relations (compared with the Google-RE subset, which only covers 3). Each relation is associated with at most 1000 subject-object pairs from Wikidata, and a single manually designed prompt. To learn to mine prompts ( \u00a7 3.1), rank prompts ( \u00a7 4.2), or learn ensemble weights ( \u00a7 4.3), we create a separate training set of subject-object pairs also from Wikidata for each relation that has no overlap with the T-REx dataset. We denote the training set as T-REx-train. For consistency with the T-REx dataset in LAMA, T-REx-train also is chosen to contain only single-token objects. 
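For concreteness, the two ensemble scoring rules of Sections 4.2 and 4.3 (Equations 1-3) can be sketched as follows; log_prob_lm(y, x, t) is an assumed helper returning log P_LM(y | x, t), and theta_r is the learned weight vector for the relation:

import numpy as np

def rank_based_scores(x, ranked_prompts, K, candidates, log_prob_lm):
    # Equation 1: uniform average of log probabilities over the top-K prompts;
    # a softmax over these scores then gives Equation 2.
    top_k = ranked_prompts[:K]
    return {y: sum(log_prob_lm(y, x, t) for t in top_k) / K for y in candidates}

def optimized_scores(x, prompts, theta_r, candidates, log_prob_lm):
    # Equation 3: the same combination, but with learned weights softmax(theta_r)
    # over all T prompts instead of a uniform 1/K over the top K.
    theta_r = np.asarray(theta_r, dtype=float)
    w = np.exp(theta_r - np.max(theta_r))
    w = w / w.sum()  # softmax over the T prompt weights
    return {y: sum(wi * log_prob_lm(y, x, t) for wi, t in zip(w, prompts))
            for y in candidates}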
To investigate the generality of our method, we also report the performance of our methods on the Google-RE subset, 5 which takes a similar form to T-REx but is relatively small and only covers three relations.", "cite_spans": [ { "start": 86, "end": 108, "text": "(Petroni et al., 2019)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.1" }, { "text": "P\u00f6rner et al. (2019) note that some facts in LAMA can be recalled solely based on surface forms of entities, without memorizing facts. They filter out those easy-to-guess facts and create a more difficult benchmark, denoted as LAMA-UHN. We also conduct experiments on the T-REx subset of LAMA-UHN (i.e., T-REx-UHN) to investigate whether our methods can still obtain improvements on this harder benchmark. Dataset statistics are summarized in Table 1 .", "cite_spans": [ { "start": 14, "end": 20, "text": "(2019)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 443, "end": 450, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.1" }, { "text": "Models As for the models to probe, in our main experiments we use the standard BERT-base and BERT-large models (Devlin et al., 2019) . We also perform some experiments with other pretrained models enhanced with external entity representations, namely, ERNIE (Zhang et al., 2019) and KnowBert (Peters et al., 2019) , which we believe may do better on recall of entities.", "cite_spans": [ { "start": 111, "end": 132, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF10" }, { "start": 258, "end": 278, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF53" }, { "start": 292, "end": 313, "text": "(Peters et al., 2019)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.1" }, { "text": "We use two metrics to evaluate the success of prompts in probing LMs. The first evaluation metric, micro-averaged accuracy, follows the LAMA benchmark 6 in calculating the accuracy of all subject-object pairs for relation r:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "1 |R| x,y \u2208R \u03b4(\u0177 = y),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "where\u0177 is the prediction and y is the ground truth. Then we average across all relations. However, we found that the object distributions of some relations are extremely skewed (e.g., more than half of the objects in relation native language are French). This can lead to deceptively high scores, even for a majorityclass baseline that picks the most common object for each relation, which achieves a score of 22.0%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "To mitigate this problem, we also report macroaveraged accuracy, which computes accuracy for each unique object separately, then averages them together to get the relation-level accuracy:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "1 |uni obj(R)| y \u2032 \u2208uni obj(R) x,y \u2208R,y = y \u2032 \u03b4(\u0177 = y) |{y| x, y \u2208 R, y = y \u2032 }| ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "where uni obj(R) returns a set of unique objects from relation r. 
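Concretely, the two metrics correspond to the following sketch, where predictions and gold are parallel lists of predicted and ground-truth objects for one relation (names are illustrative):

from collections import defaultdict

def micro_accuracy(predictions, gold):
    # Accuracy over all subject-object pairs of a relation.
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

def macro_accuracy(predictions, gold):
    # Per-object accuracy, averaged over the unique gold objects, so that
    # frequent objects (e.g., "French" for native language) cannot dominate.
    per_object = defaultdict(list)
    for p, g in zip(predictions, gold):
        per_object[g].append(p == g)
    return sum(sum(v) / len(v) for v in per_object.values()) / len(per_object)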
This is a much stricter metric, with the majority-class baseline only achieving a score of 2.2%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "Methods We attempted different methods for prompt generation and selection/ensembling, and compare them with the manually designed prompts used in Petroni et al. (2019) . Majority refers to predicting the majority object for each relation, as mentioned above. Man is the baseline from Petroni et al. (2019) that only uses the manually designed prompts for retrieval. Mine ( \u00a7 3.1) uses the prompts mined from Wikipedia through both middle words and dependency paths, and Mine+Man combines them with the manual prompts. Mine+Para ( \u00a7 3.2) paraphrases the highest-ranked mined prompt for each relation, while Man+Para uses the manual one instead. The prompts are combined either by averaging the log probabilities from the TopK highestranked prompts ( \u00a7 4.2) or the weights after optimization ( \u00a7 4.3; Opti.). Oracle represents the upper bound of the performance of the generated prompts, where a fact is judged as correct if any one of the prompts allows the LM to successfully predict the object.", "cite_spans": [ { "start": 162, "end": 168, "text": "(2019)", "ref_id": "BIBREF21" }, { "start": 300, "end": 306, "text": "(2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "Implementation Details We use T = 40 most frequent prompts either generated through mining or paraphrasing in all experiments, and the number of candidates in back-translation is set to B = 7. We remove prompts only containing stopwords/ punctuations or longer than 10 words to reduce noise. We use the round-trip English-German neural machine translation models pre-trained on WMT'19 (Ng et al., 2019) for back-translation, as English-German is one of the most highly resourced language pairs. 7 When optimizing ensemble parameters, we use Adam (Kingma and Ba, 2015) with default parameters and batch size of 32.", "cite_spans": [ { "start": 385, "end": 402, "text": "(Ng et al., 2019)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "Micro-and macro-averaged accuracy of different methods are reported in Tables 2 and 3 , respectively.", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 85, "text": "Tables 2 and 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Single Prompt Experiments When only one prompt is used (in the first Top1 column in both tables), the best of the proposed prompt generation methods increases micro-averaged accuracy from 31.1% to 34.1% on BERT-base, and from 32.3% to 39.4% on BERT-large. This demonstrates that the manually created prompts are a somewhat weak lower bound; there are other prompts that further improve the ability to query knowledge from LMs. Table 4 shows some of the mined prompts that resulted in a large performance gain compared with the manual ones. For the relation religion, ''x who converted to y'' improved 60.0% over the manually defined prompt of ''x is affiliated with the y religion'', and for the relation subclass of, ''x is a type of y'' raised the accuracy by 22.7% over ''x is a subclass of y''. 
It can be seen that the largest gains from using mined prompts seem to occur in cases where the manually defined prompt is more complicated syntactically (e.g., the former), or when it uses less common wording (e.g., the latter) than the mined prompt.", "cite_spans": [], "ref_spans": [ { "start": 427, "end": 434, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Prompt Ensembling Next we turn to experiments that use multiple prompts to query the LM. Comparing the single-prompt results in column 1 to the ensembled results in the following three columns, we can see that ensembling multiple prompts almost always leads to better performance. The simple average used in Top3 and Top5 outperforms Top1 across different prompt generation methods. The optimized ensemble further raises micro-averaged accuracy to 38.9% and 43.7% on BERT-base and BERT-large respectively, outperforming the rank-based ensemble by a large margin. These two sets of results demonstrate that diverse prompts can indeed query the LM in different ways, and that the optimizationbased method is able to find weights that effectively combine different prompts together. We list the learned weights of top-3 mined prompts and accuracy gain over only using the top-1 prompt in Table 5 . Weights tend to concentrate on one particular prompt, and the other prompts serve as complements. We also depict the performance of the rank-based ensemble method", "cite_spans": [], "ref_spans": [ { "start": 885, "end": 892, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Manual", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relations", "sec_num": null }, { "text": "Prompts Mined Prompts Acc. Gain", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relations", "sec_num": null }, { "text": "x is affiliated with the y religion x who converted to y +60.0 P159 headquarters location The headquarter of x is in y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "x is based in y +4.9 P20 place of death", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "x died in y x died at his home in y +4.6 P264 record label", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "x is represented by music label y x recorded for y +17.2 P279 subclass of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "x is a subclass of y x is a type of y +22.7 P39 position held", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "x has the position of y x is elected y +7.9 +7.0 Table 5 : Weights of top-3 mined prompts, and the micro-averaged accuracy gain (%) over using the top-1 prompt.", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 56, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "with respect to the number of prompts in Figure 2 . For mined prompts, top-2 or top-3 usually gives us the best results, while for paraphrased prompts, top-5 is the best. Incorporating more prompts does not always improve accuracy, a finding consistent with the rapidly decreasing weights learned by the optimization-based method. The gap between Oracle and Opti. 
indicates that there is still space for improvement using better ensemble methods.", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 49, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "Mining vs. Paraphrasing For the rank-based ensembles (Top1, 3, 5), prompts generated by paraphrasing usually perform better than mined prompts, while for the optimization-based ensemble (Opti.), mined prompts perform better. We conjecture this is because mined prompts exhibit more variation compared to paraphrases, and proper weighting is of central importance. This difference in the variation can be observed in the average edit distance between the prompts of each class, which is 3.27 and 2.73 for mined and paraphrased prompts respectively. However, the improvement led by ensembling paraphrases is still significant over just using one prompt (Top1 vs. Opti.), raising microaveraged accuracy from 32.7% to 36.2% on BERT-base, and from 37.8% to 40.1% on BERTlarge. This indicates that even small modifications to prompts can result in relatively large changes in predictions. Table 6 demonstrates cases where modification of one word (either function or content word) leads to significant accuracy BERT-base 9.8 10.0 10.4 9.6 10.0 BERT-large 10.5 10.6 11.3", "cite_spans": [], "ref_spans": [ { "start": 883, "end": 890, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "10.4 10.7 Table 10 : Micro-averaged accuracy (%) on Google-RE.", "cite_spans": [], "ref_spans": [ { "start": 10, "end": 18, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "indicates that if LMs are queried effectively, the differences between highly performant models may become more clear. KnowBert underperforms BERT on LAMA, which is opposite to the observation made in Peters et al. (2019) . This is probably because that multi token subjects/objects are used to evaluate KnowBert in Peters et al. 2019, while LAMA contains only single-token objects.", "cite_spans": [ { "start": 201, "end": 221, "text": "Peters et al. (2019)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "LAMA-UHN Evaluation The performances on LAMA-UHN benchmark are reported in Table 9 . Although the overall performances drop dramatically compared to the performances on the original LAMA benchmark (Table 2) , optimized ensembles can still outperform manual prompts by a large margin, indicating that our methods are effective in retrieving knowledge that cannot be inferred based on surface forms.", "cite_spans": [], "ref_spans": [ { "start": 75, "end": 82, "text": "Table 9", "ref_id": "TABREF10" }, { "start": 197, "end": 206, "text": "(Table 2)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "P140 religion", "sec_num": null }, { "text": "Next, we perform further analysis to better understand what type of prompts proved most suitable for facilitating retrieval of knowledge from LMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5.3" }, { "text": "We first analyze the conditions under which prompts will yield different predictions. 
We define the divergence between predictions of two prompts t r,i and t r,j using the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction Consistency by Prompt", "sec_num": null }, { "text": "Div(t r,i , t r,j ) = x,y \u2208R \u03b4(C(x, y, t r,i ) = C(x, y, t r,j )) |R| ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction Consistency by Prompt", "sec_num": null }, { "text": "where C(x, y, t r,i ) = 1 if prompt t r,i can successfully predict y and 0 otherwise, and \u03b4(\u2022) is Figure 3 : Correlation of edit distance between prompts and their prediction divergence. Kronecker's delta. For each relation, we normalize the edit distance of two prompts into [0, 1] and bucket the normalized distance into five bins with intervals of 0.2. We plot a box chart for each bin to visualize the distribution of prediction divergence in Figure 3 , with the green triangles representing mean values and the green bars in the box representing median values. As the edit distance becomes larger, the divergence increases, which confirms our intuition that very different prompts tend to cause different prediction results. The Pearson correlation coefficient is 0.25, which shows that there is a weak correlation between these two quantities.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 106, "text": "Figure 3", "ref_id": null }, { "start": 447, "end": 455, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Prediction Consistency by Prompt", "sec_num": null }, { "text": "x/y V y/x | x/y V P y/x | x/y V W* P y/x V = verb particle? adv? W = (noun | adj | adv | pron | det) P = (prep | particle | inf. marker)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction Consistency by Prompt", "sec_num": null }, { "text": "Performance on Google-RE We also report the performance of optimized ensemble on the Google-RE subset in Table 10 . Again, ensembling diverse prompts improves accuracies for both the BERT-base and BERT-large models. The gains are somewhat smaller than those on the T-REx subset, which might be caused by the fact that there are only three relations and one of them (predicting the birth date of a person) is particularly hard to the extent that only one prompt yields non-zero accuracy. POS-based Analysis Next, we try to examine which types of prompts tend to be effective in the abstract by examining the part-of-speech (POS) patterns of prompts that successfully extract knowledge from LMs. In open information extraction systems (Banko et al., 2007) , manually defined patterns are often leveraged to filter out noisy relational phrases. For example, ReVerb (Fader et al., 2011) incorporates three syntactic constraints listed in Table 11 to improve the coherence and informativeness of the mined relational phrases. To test whether these patterns are also indicative of the ability of a prompt to retrieve knowledge from LMs, we use these three patterns to group prompts generated by our methods into four clusters, where the ''other'' cluster contains prompts that do not match any pattern. We then calculate the rank of each prompt within the extracted prompts, and plot the distribution of rank using box plots in Figure 4 . 8 We can see that the average rank of prompts matching these patterns is better than those in the ''other'' group, confirming our intuitions that good prompts should conform with those patterns. 
Some of the best performing prompts' POS signatures are ''x VBD VBN IN y'' (e.g., ''x was born in y'') and ''x VBZ DT NN IN y'' (e.g., ''x is the capital of y'').", "cite_spans": [ { "start": 733, "end": 753, "text": "(Banko et al., 2007)", "ref_id": "BIBREF3" }, { "start": 862, "end": 882, "text": "(Fader et al., 2011)", "ref_id": "BIBREF13" }, { "start": 1433, "end": 1434, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 105, "end": 113, "text": "Table 10", "ref_id": "TABREF1" }, { "start": 934, "end": 942, "text": "Table 11", "ref_id": "TABREF1" }, { "start": 1422, "end": 1430, "text": "Figure 4", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Prediction Consistency by Prompt", "sec_num": null }, { "text": "Cross-model Consistency Finally, it is of interest to know whether the prompts that we are extracting are highly tailored to a Table 12 : Cross-model micro-averaged accuracy (%). The first row is the model to test, and the second row is the model on which prompt weights are learned.", "cite_spans": [], "ref_spans": [ { "start": 127, "end": 135, "text": "Table 12", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Prediction Consistency by Prompt", "sec_num": null }, { "text": "specific model, or whether they can generalize across models. To do so, we use two settings: One compares BERT-base and BERT-large, the same model architecture with different sizes; the other compares BERT-base and ERNIE, different model architectures with a comparable size. In each setting, we compare when the optimization-based ensembles are trained on the same model, or when they are trained on one model and tested on the other. As shown in Tables 12 and 13, we found that in general there is usually some drop in performance in the cross-model scenario (third and fifth columns), but the losses tend to be small, and the highest performance when querying BERTbase is actually achieved by the weights optimized on BERT-large. Notably, the best accuracies of 40.1% and 42.2% (Table 12 ) and 39.5% and 40.5% (Table 13 ) with the weights optimized on the other model are still much higher than those obtained by the manual prompts, indicating that optimized prompts still afford large gains across models. Another interesting observation is that the drop in performance on ERNIE (last two columns in Table 13 ) is larger than that on BERT-large (last two columns in Table 12 ) using weights optimized on BERT-base, indicating that models sharing the same architecture benefit more from the same prompts.", "cite_spans": [], "ref_spans": [ { "start": 781, "end": 790, "text": "(Table 12", "ref_id": "TABREF1" }, { "start": 813, "end": 822, "text": "(Table 13", "ref_id": "TABREF1" }, { "start": 1104, "end": 1112, "text": "Table 13", "ref_id": "TABREF1" }, { "start": 1170, "end": 1178, "text": "Table 12", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Prediction Consistency by Prompt", "sec_num": null }, { "text": "Linear vs. Log-linear Combination As mentioned in \u00a7 4.2, we use log-linear combination of probabilities in our main experiments. However, it is also possible to calculate probabilities through regular linear interpolation: Table 13 : Cross-model micro-averaged accuracy (%). The first row is the model to test, and the second row is the model on which prompt weights are learned. We compare these two ways to combine predictions from multiple mined prompts in Figure 5 ( \u00a7 4.2). 
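The two combination rules differ only in whether averaging is done over log probabilities or over probabilities; a small sketch (log_prob_lm(y, x, t) is again an assumed helper returning log P_LM(y | x, t)):

import math

def log_linear_score(y, x, prompts, log_prob_lm):
    # Average the log probabilities (the combination used in Section 4.2).
    return sum(log_prob_lm(y, x, t) for t in prompts) / len(prompts)

def linear_score(y, x, prompts, log_prob_lm):
    # Equation 4: average the probabilities themselves.
    return sum(math.exp(log_prob_lm(y, x, t)) for t in prompts) / len(prompts)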
We assume that log-linear combination outperforms linear combination because log probabilities make it possible to penalize objects that are very unlikely given any certain prompt.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 231, "text": "Table 13", "ref_id": "TABREF1" }, { "start": 460, "end": 468, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Prediction Consistency by Prompt", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (y|x, r) = K i=1 1 K P LM (y|x, t r,i )", "eq_num": "(4)" } ], "section": "Prediction Consistency by Prompt", "sec_num": null }, { "text": "Finally, in addition to the elements of our main proposed methodology in \u00a7 3 and \u00a7 4, we experimented with a few additional methods that did not prove highly effective, and thus were omitted from our final design. We briefly describe these below, along with cursory experimental results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Omitted Design Elements", "sec_num": "6" }, { "text": "We examined methods to generate prompts by solving an optimization problem that maximizes the probability of producing the ground-truth objects with respect to the prompts: Table 14 : Micro-averaged accuracy (%) before and after LM-aware prompt fine-tuning.", "cite_spans": [], "ref_spans": [ { "start": 173, "end": 181, "text": "Table 14", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "LM-aware Prompt Generation", "sec_num": "6.1" }, { "text": "Solving this problem of finding text sequences that optimize some continuous objective has been studied both in the context of end-to-end sequence generation (Hoang et al., 2017) , and in the context of making small changes to an existing input for adversarial attacks (Ebrahimi et al., 2018; Wallace et al., 2019) . However, we found that directly optimizing prompts guided by gradients was unstable and often yielded prompts in unnatural English in our preliminary experiments. Thus, we instead resorted to a more straightforward hillclimbing method that starts with an initial prompt, then masks out one token at a time and replaces it with the most probable token conditioned on the other tokens, inspired by the mask-predict decoding algorithm used in non-autoregressive machine translation (Ghazvininejad et al., 2019) : 9", "cite_spans": [ { "start": 158, "end": 178, "text": "(Hoang et al., 2017)", "ref_id": "BIBREF19" }, { "start": 269, "end": 292, "text": "(Ebrahimi et al., 2018;", "ref_id": "BIBREF11" }, { "start": 293, "end": 314, "text": "Wallace et al., 2019)", "ref_id": "BIBREF51" }, { "start": 796, "end": 824, "text": "(Ghazvininejad et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "LM-aware Prompt Generation", "sec_num": "6.1" }, { "text": "P LM (w i |t r \\ i) = x,y \u2208R P LM (w i |x, t r \\ i, y) |R| ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-aware Prompt Generation", "sec_num": "6.1" }, { "text": "where w i is the i-th token in the prompt and t r \\ i is the prompt with the i-th token masked out. 
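A sketch of the resulting hill-climbing loop (mask_fill(tokens, i, pairs) is an assumed helper implementing the equation above, i.e., returning the replacement token with the highest masked-LM probability averaged over the training pairs):

def refine_prompt(prompt_tokens, train_pairs, mask_fill, max_rounds=10):
    # Left-to-right mask-and-replace refinement: mask one prompt token at a time,
    # substitute the most probable replacement, and repeat until nothing changes.
    for _ in range(max_rounds):
        changed = False
        for i, tok in enumerate(prompt_tokens):
            if tok in ("[X]", "[Y]"):  # keep the subject/object slots fixed
                continue
            best = mask_fill(prompt_tokens, i, train_pairs)
            if best != prompt_tokens[i]:
                prompt_tokens[i] = best
                changed = True
        if not changed:  # converged: a full pass changed nothing
            break
    return prompt_tokens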
We followed a simple rule that modifies a prompt from left to right, and this is repeated until convergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-aware Prompt Generation", "sec_num": "6.1" }, { "text": "We used this method to refine all the mined and manual prompts on the T-REx-train dataset, and display theirperformance on the T-REx dataset in Table 14 . After fine-tuning, the oracle performance increased significantly, while the ensemble performances (both rank-based and optimizationbased) dropped slightly. This indicates that LM-aware fine-tuning has the potential to discover better prompts, but some portion of the refined prompts may have over-fit to the training set upon which they were optimized. Table 15 : Performance (%) of using forward and backward features with BERT-base.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 152, "text": "Table 14", "ref_id": "TABREF1" }, { "start": 509, "end": 517, "text": "Table 15", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "LM-aware Prompt Generation", "sec_num": "6.1" }, { "text": "Finally, given class imbalance and the propensity of the model to over-predict the majority object, we examine a method to encourage the model to predict subject-object pairs that are more strongly aligned. Inspired by the maximum mutual information objective used in Li et al. (2016a) , we add the backward log probability log P LM (x|y, t r,i ) of each prompt to our optimization-based scoring function in Equation 3. Due to the large search space for objects, we turn to an approximation approach that only computes backward probability for the most probable B objects given by the forward probability at both training and test time. As shown in Table 15 , the improvement resulting from backward probability is small, indicating that a diversity-promoting scoring function might not be necessary for knowledge retrieval from LMs.", "cite_spans": [ { "start": 268, "end": 285, "text": "Li et al. (2016a)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 649, "end": 657, "text": "Table 15", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Forward and Backward Probabilities", "sec_num": "6.2" }, { "text": "Much work has focused on understanding the internal representations in neural NLP models (Belinkov and Glass, 2019) , either by using extrinsic probing tasks to examine whether certain linguistic properties can be predicted from those representations (Shi et al., 2016; Linzen et al., 2016; Belinkov et al., 2017) , or by ablations to the models to investigate how behavior varies (Li et al., 2016b; Smith et al., 2017) . For contextualized representations in particular, a broad suite of NLP tasks are used to analyze both syntactic and semantic properties, providing evidence that contextualized representations encode linguistic knowledge in different layers (Hewitt and Manning, 2019; Tenney et al., 2019a; Tenney et al., 2019b; Jawahar et al., 2019; Goldberg, 2019) . Different from analyses probing the representations themselves, our work follows Petroni et al. (2019) ; P\u00f6rner et al. (2019) in probing for factual knowledge. They use manually defined prompts, which may be under-estimating the true performance obtainable by LMs. Concurrently to this work, Bouraoui et al. 
(2020) made a similar observation that using different prompts can help better extract relational knowledge from LMs, but they use models explicitly trained for relation extraction whereas our methods examine the knowledge included in LMs without any additional training.", "cite_spans": [ { "start": 89, "end": 115, "text": "(Belinkov and Glass, 2019)", "ref_id": "BIBREF5" }, { "start": 251, "end": 269, "text": "(Shi et al., 2016;", "ref_id": "BIBREF44" }, { "start": 270, "end": 290, "text": "Linzen et al., 2016;", "ref_id": "BIBREF26" }, { "start": 291, "end": 313, "text": "Belinkov et al., 2017)", "ref_id": "BIBREF4" }, { "start": 381, "end": 399, "text": "(Li et al., 2016b;", "ref_id": "BIBREF25" }, { "start": 400, "end": 419, "text": "Smith et al., 2017)", "ref_id": "BIBREF45" }, { "start": 662, "end": 688, "text": "(Hewitt and Manning, 2019;", "ref_id": "BIBREF18" }, { "start": 689, "end": 710, "text": "Tenney et al., 2019a;", "ref_id": "BIBREF46" }, { "start": 711, "end": 732, "text": "Tenney et al., 2019b;", "ref_id": null }, { "start": 733, "end": 754, "text": "Jawahar et al., 2019;", "ref_id": "BIBREF22" }, { "start": 755, "end": 770, "text": "Goldberg, 2019)", "ref_id": "BIBREF16" }, { "start": 854, "end": 875, "text": "Petroni et al. (2019)", "ref_id": "BIBREF36" }, { "start": 892, "end": 898, "text": "(2019)", "ref_id": "BIBREF21" }, { "start": 1065, "end": 1087, "text": "Bouraoui et al. (2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Orthogonally, some previous works integrate external knowledge bases so that the language generation process is explicitly conditioned on symbolic knowledge (Ahn et al., 2016; Yang et al., 2017; Logan et al., 2019; Hayashi et al., 2020) . Similar extensions have been applied to pre-trained LMs like BERT, where contextualized representations are enhanced with entity embeddings (Zhang et al., 2019; Peters et al., 2019; P\u00f6rner et al., 2019) . In contrast, we focus on better knowledge retrieval through prompts from LMs as-is, without modifying them.", "cite_spans": [ { "start": 157, "end": 175, "text": "(Ahn et al., 2016;", "ref_id": "BIBREF1" }, { "start": 176, "end": 194, "text": "Yang et al., 2017;", "ref_id": "BIBREF52" }, { "start": 195, "end": 214, "text": "Logan et al., 2019;", "ref_id": "BIBREF35" }, { "start": 215, "end": 236, "text": "Hayashi et al., 2020)", "ref_id": "BIBREF17" }, { "start": 379, "end": 399, "text": "(Zhang et al., 2019;", "ref_id": "BIBREF53" }, { "start": 400, "end": 420, "text": "Peters et al., 2019;", "ref_id": "BIBREF35" }, { "start": 421, "end": 441, "text": "P\u00f6rner et al., 2019)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "In this paper, we examined the importance of the prompts used in retrieving factual knowledge from language models. We propose mining-based and paraphrasing-based methods to systematically generate diverse prompts to query specific pieces of relational knowledge. Those prompts, when combined together, improve factual knowledge retrieval accuracy by 8%, outperforming manually designed prompts by a large margin. Our analysis indicates that LMs are indeed more knowledgeable than initially indicated by previous results, but they are also quite sensitive to how we query them. 
This indicates potential future directions such as (1) more robust LMs that can be queried in different ways but still return similar results, (2) methods to incorporate factual knowledge in LMs, and (3) further improvements in optimizing methods to query LMs for knowledge. Finally, we have released all our learned prompts to the community as the LM Prompt and Query Archive (LPAQA), available at: https://github.com/jzbjyb/LPAQA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Some models we use in this paper, e.g., BERT(Devlin et al., 2019), are bi-directional, and do not directly define probability distribution over text, which is the underlying definition of an LM. Nonetheless, we call them LMs for simplicity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We can also go the other way around by filling in the objects and predicting the missing subjects. Since our focus is on improving prompts, we choose to be consistent withPetroni et al. (2019) to make a fair comparison, and leave exploring other settings to future work. Also notably, Petroni et al.(2019) only use objects consisting of a single token, so we only need to predict one word for the missing slot.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We restrict to masked LMs in this paper because the missing slot might not be the last token in the sentence and computing this probability in traditional left-to-right LMs using Bayes' theorem is not tractable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Intuitively, because we are combining together scores in the log space, this has the effect of penalizing objects that are very unlikely given any certain prompt in the collection. We also compare with linear combination in ablations in \u00a7 5.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://code.google.com/archive/p/ relation-extraction-corpus/.6 In LAMA, it is called ''P@1.'' There might be multiple correct answers for some cases, e.g., a person speaking multiple languages, but we only use one ground truth. We will leave exploring more advanced evaluation methods to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/pytorch/fairseq/tree/ master/examples/wmt19.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the ranking position of a prompt to represent its quality instead of its accuracy because accuracy distributions of different relations might span different ranges, making accuracy not directly comparable across relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "t * r = arg max t r P LM (y|x, t r ),whereP LM (y|x, t r )is parameterized with a pretrained LM. In other words, this method directly searches for a prompt that causes the LM to assign ground-truth objects the highest probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In theory, this algorithm can be applied to both masked LMs like BERT and traditional left-to-right LMs, since the masked probability can be computed using Bayes' theorem for traditional LMs. 
However, in practice, due to the large size of vocabulary, it can only be approximated with beam search, or computed with more complicated continuous optimization algorithms(Hoang et al., 2017).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by a gift from Bosch Research and NSF award no. 1815287. We would like to thank Paul Michel, Hiroaki Hayashi, Pengcheng Yin, and Shuyan Zhou for their insightful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Snowball: Extracting relations from large plaintext collections", "authors": [ { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Gravano", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Fifth ACM Conference on Digital Libraries", "volume": "", "issue": "", "pages": "85--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain- text collections. In Proceedings of the Fifth ACM Conference on Digital Libraries, June 2-7, 2000, San Antonio, TX, USA, pages 85-94. ACM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A neural knowledge language model", "authors": [ { "first": "Heeyoul", "middle": [], "last": "Sungjin Ahn", "suffix": "" }, { "first": "Tanel", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "P\u00e4rnamaa", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sungjin Ahn, Heeyoul Choi, Tanel P\u00e4rnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. CoRR, abs/1608.00318v2.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Matching the blanks: Distributional similarity for relation learning", "authors": [ { "first": "", "middle": [], "last": "Livio Baldini", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Soares", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Fitzgerald", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Ling", "suffix": "" }, { "first": "", "middle": [], "last": "Kwiatkowski", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2895--2905", "other_ids": {}, "num": null, "urls": [], "raw_text": "Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895-2905, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Open information extraction from the web", "authors": [ { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Cafarella", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Broadhead", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2007, "venue": "IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2670--2676", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007, pages 2670-2676.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "What do neural machine translation models learn about morphology?", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "861--872", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861-872, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Analysis methods in neural language processing: A survey", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Glass", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "49--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and James R. Glass. 2019. Analysis methods in neural language process- ing: A survey. Transactions of the Association for Computational Linguistics, 7:49-72.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Large scale acquisition of paraphrases for learning surface patterns", "authors": [ { "first": "Rahul", "middle": [], "last": "Bhagat", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Ravichandran", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "674--682", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rahul Bhagat and Deepak Ravichandran. 2008. Large scale acquisition of paraphrases for learning surface patterns. In Proceedings of ACL-08: HLT, pages 674-682, Columbus, Ohio. 
Association for Com- putational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Inducing relational knowledge from BERT", "authors": [ { "first": "Zied", "middle": [], "last": "Bouraoui", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Schockaert", "suffix": "" } ], "year": 2020, "venue": "Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zied Bouraoui, Jose Camacho-Collados, and Steven Schockaert. 2020. Inducing relational knowledge from BERT. In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, USA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A survey of automatic query expansion in information retrieval", "authors": [ { "first": "Claudio", "middle": [], "last": "Carpineto", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Romano", "suffix": "" } ], "year": 2012, "venue": "ACM, Computing Surveys", "volume": "44", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudio Carpineto and Giovanni Romano. 2012. A survey of automatic query expansion in information retrieval. ACM, Computing Surveys, 44(1):1:1-1:50.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semi-supervised sequence learning", "authors": [ { "first": "M", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "", "middle": [], "last": "Dai", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12", "volume": "", "issue": "", "pages": "3079--3087", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew M. Dai and Quoc V. Le. 2015. Semi-supervised sequence learning. In Ad- vances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, Decem- ber 7-12, 2015, Montreal, Quebec, Canada, pages 3079-3087.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "BERT: Pretraining of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre- training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "HotFlip: White-box adversarial examples for text classification", "authors": [ { "first": "Javid", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Anyi", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Lowd", "suffix": "" }, { "first": "Dejing", "middle": [], "last": "Dou", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "31--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adver- sarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguis- tics (Volume 2: Short Papers), pages 31-36, Melbourne, Australia, Association for Compu- tational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "T-REx: A large scale alignment of natural language with knowledge base triples", "authors": [ { "first": "Hady", "middle": [], "last": "Elsahar", "suffix": "" }, { "first": "Pavlos", "middle": [], "last": "Vougiouklis", "suffix": "" }, { "first": "Arslen", "middle": [], "last": "Remaci", "suffix": "" }, { "first": "Christophe", "middle": [], "last": "Gravier", "suffix": "" }, { "first": "Jonathon", "middle": [ "S" ], "last": "Hare", "suffix": "" }, { "first": "Fr\u00e9d\u00e9rique", "middle": [], "last": "Laforest", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Simperl", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hady ElSahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon S. Hare, Fr\u00e9d\u00e9rique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Identifying relations for open information extraction", "authors": [ { "first": "Anthony", "middle": [], "last": "Fader", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "2011", "issue": "", "pages": "1535--1545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. 
In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1535-1545.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Sentence-level MT evaluation without reference translations: Beyond language modeling", "authors": [ { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Aue", "suffix": "" }, { "first": "Martine", "middle": [], "last": "Smets", "suffix": "" } ], "year": 2005, "venue": "Proceedings of EAMT", "volume": "", "issue": "", "pages": "103--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gamon, Anthony Aue, and Martine Smets. 2005. Sentence-level MT evaluation without reference translations: Beyond lan- guage modeling. In Proceedings of EAMT, pages 103-111.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mask-predict: Parallel decoding of conditional masked language models", "authors": [ { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "6114--6123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6114-6123, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Assessing BERT's syntactic abilities. CoRR, abs", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 1901, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. CoRR, abs/1901.05287v1.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Latent relation language models", "authors": [ { "first": "Hiroaki", "middle": [], "last": "Hayashi", "suffix": "" }, { "first": "Zecong", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Chenyan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2020, "venue": "Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, and Graham Neubig. 2020. Latent relation language models. 
In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, USA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4129-4138.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Towards decoding as continuous optimisation in neural machine translation", "authors": [ { "first": "Cong Duy Vu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "146--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cong Duy Vu Hoang, Gholamreza Haffari, and Trevor Cohn. 2017. Towards decoding as continuous optimisation in neural machine translation. In Proceedings of the 2017 Con- ference on Empirical Methods in Natu- ral Language Processing, pages 146-156, Copenhagen, Denmark. Association for Com- putational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Using knowledge graphs for fact-aware language modeling", "authors": [ { "first": "Hillary", "middle": [], "last": "Barack's Wife", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "5962--5971", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barack's wife Hillary: Using knowledge graphs for fact-aware language modeling. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5962-5971.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "What does BERT learn about the structure of language?", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "3651--3657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? 
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 3651-3657.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Repre- sentations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A diversitypromoting objective function for neural conversation models", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2016, "venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "110--119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity- promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 110-119.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Understanding neural networks through representation erasure", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding neural networks through repre- sentation erasure. CoRR, abs/1612.08220v3.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "521--535", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. 
Trans- actions of the Association for Computational Linguistics, 4:521-535.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Paraphrasing revisited with neural machine translation", "authors": [ { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "881--893", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881-893, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The natural language decathlon: Multitask learning as question answering", "authors": [ { "first": "Bryan", "middle": [], "last": "Mccann", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Shirish Keskar", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. CoRR, abs/1806.08730v1.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "context2vec: Learning generic context embedding with bidirectional LSTM", "authors": [ { "first": "Oren", "middle": [], "last": "Melamud", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Goldberger", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "51--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 51-61.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "On the state of the art of evaluation in neural language models", "authors": [ { "first": "G\u00e1bor", "middle": [], "last": "Melis", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00e1bor Melis, Chris Dyer, and Phil Blunsom. 2018. On the state of the art of evaluation in neural language models. 
In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Regularizing and optimizing LSTM language models", "authors": [ { "first": "Stephen", "middle": [], "last": "Merity", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Shirish Keskar", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and opti- mizing LSTM language models. In 6th International Conference on Learning Rep- resentations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Context dependent recurrent neural network language model", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2012, "venue": "2012 IEEE Spoken Language Technology Workshop (SLT)", "volume": "", "issue": "", "pages": "234--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 234-239. IEEE.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Facebook FAIR's WMT19 news translation task submission", "authors": [ { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Kyra", "middle": [], "last": "Yee", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation, WMT 2019", "volume": "2", "issue": "", "pages": "314--319", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation, WMT 2019, Florence, Italy, August 1-2, 2019 - Volume 2: Shared Task Papers, Day 1, pages 314-319.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. 
Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep con- textualized word representations. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227-2237.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Knowledge enhanced contextual word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Logan", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Vidur", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "43--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowl- edge enhanced contextual word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Lan- guage Processing and the 9th International Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 43-54, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Language models as knowledge bases?", "authors": [ { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Bakhtin", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2463--2473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "BERT is not a knowledge base (yet): Factual knowledge vs. 
Namebased reasoning in unsupervised QA", "authors": [ { "first": "Nina", "middle": [], "last": "P\u00f6rner", "suffix": "" }, { "first": "Ulli", "middle": [], "last": "Waltinger", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2019, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nina P\u00f6rner, Ulli Waltinger, and Hinrich Sch\u00fctze. 2019. BERT is not a knowledge base (yet): Factual knowledge vs. Name- based reasoning in unsupervised QA. CoRR, abs/1911.03681v1.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Explain yourself! Leveraging language models for commonsense reasoning", "authors": [ { "first": "Bryan", "middle": [], "last": "Nazneen Fatema Rajani", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Mccann", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4932--4942", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! Leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932-4942, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Learning surface text patterns for a question answering system", "authors": [ { "first": "Deepak", "middle": [], "last": "Ravichandran", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "41--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a ques- tion answering system. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 41-47. 
Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Investigating a generic paraphrasebased approach for relation extraction", "authors": [ { "first": "Lorenza", "middle": [], "last": "Romano", "suffix": "" }, { "first": "Milen", "middle": [], "last": "Kouylekov", "suffix": "" }, { "first": "Idan", "middle": [], "last": "Szpektor", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Lavelli", "suffix": "" } ], "year": 2006, "venue": "11th Conference of the European Chapter", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lorenza Romano, Milen Kouylekov, Idan Szpektor, Ido Dagan, and Alberto Lavelli. 2006. Investigating a generic paraphrase- based approach for relation extraction. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Atomic: An atlas of machine commonsense for if-then reasoning", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Allaway", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Lourie", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Roof", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "3027--3035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027-3035.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Does string-based neural MT learn source syntax?", "authors": [ { "first": "Xing", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Inkit", "middle": [], "last": "Padhi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1526--1534", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526-1534, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "What do recurrent neural network grammars learn about syntax?", "authors": [ { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Lingpeng", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "1", "issue": "", "pages": "1249--1258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noah A. Smith, Chris Dyer, Miguel Ballesteros, Graham Neubig, Lingpeng Kong, and Adhiguna Kuncoro. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 1249-1258.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "4593--4601", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 4593-4601.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "What do you learn from context? Probing for sentence structure in contextualized word representations", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Das", "suffix": "" }, { "first": "", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? 
Probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Representing text for joint embedding of text and knowledge bases", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Pallavi", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1499--1509", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1499-1509.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "A simple method for commonsense reasoning", "authors": [ { "first": "H", "middle": [], "last": "Trieu", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Trinh", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning. CoRR, abs/1806.02847v2.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Universal adversarial triggers for attacking and analyzing NLP", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Shi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Kandpal", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2153--2162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153-2162, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Reference-aware language models", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "2017", "issue": "", "pages": "1850--1859", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copen- hagen, Denmark, September 9-11, 2017, pages 1850-1859.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "ERNIE: Enhanced language representation with informative entities", "authors": [ { "first": "Zhengyan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "1441--1451", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 1441-1451.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "The Microsoft Research sentence completion challenge", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Burges", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey Zweig and Christopher J. C. Burges. 2011. The Microsoft Research sentence completion challenge. Microsoft Research, Redmond, WA, USA, Technical Report MSR- TR-2011-129.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Ranking position distribution of prompts with different patterns. Lower is better.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "Performance of two interpolation methods.", "uris": null }, "TABREF1": { "num": null, "type_str": "table", "content": "", "html": null, "text": "Dataset statistics. All the values are averaged across 41 relations." }, "TABREF3": { "num": null, "type_str": "table", "content": "
: Micro-averaged accuracy of different
methods (%). Majority gives us 22.0%. Italic
indicates best single-prompt accuracy, and bold
indicates the best non-oracle accuracy overall.
Prompts    Top1  Top3  Top5  Opti.  Oracle
BERT-base (Man=22.8)
Mine       20.7  22.7  23.9  25.7   36.2
Mine+Man   21.3  23.8  24.8  26.6   38.0
Mine+Para  21.2  22.4  23.0  23.6   34.1
Man+Para   22.8  23.8  24.6  25.0   34.9
BERT-large (Man=25.7)
Mine       26.4  26.3  25.9  30.1   40.7
Mine+Man   28.1  28.3  27.3  30.7   42.2
Mine+Para  26.2  27.1  27.0  27.1   38.3
Man+Para   25.9  27.8  28.3  28.0   39.3
", "html": null, "text": "" }, "TABREF4": { "num": null, "type_str": "table", "content": "
: Macro-averaged accuracy of different
methods (%). Majority gives us 2.2%. Italic
indicates best single-prompt accuracy, and bold
indicates the best non-oracle accuracy overall.
", "html": null, "text": "" }, "TABREF5": { "num": null, "type_str": "table", "content": "
ID    Relations     Prompts and Weights                                                             Acc. Gain
P127  owned by      x is owned by y .485 ; x was acquired by y .151 ; x division of y .151          +7.0
P140  religion      x who converted to y .615 ; y tirthankara x .190 ; y dedicated to x .110        +12.2
P176  manufacturer  y introduced the x .594 ; y announced the x .286 ; x attributed to the y .111
", "html": null, "text": "Micro-averaged accuracy gain (%) of the mined prompts over the manual prompts." }, "TABREF7": { "num": null, "type_str": "table", "content": "
: Small modifications (update, insert,
and delete) in paraphrases lead to large accuracy
gains (%).
improvements, indicating that large-scale LMs
are still brittle to small changes in the ways they
are queried.
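As a concrete illustration of this brittleness, the following is a minimal sketch (not the code used in this paper) that queries a masked LM with two slightly different prompts for the same fact and compares the probability assigned to the same candidate object; the model name, prompts, and target word are illustrative assumptions.

```python
# Minimal sketch: probe how a small change to a prompt shifts the probability a
# masked LM assigns to the same candidate object. Prompts and target are
# illustrative, not the paper's mined prompts.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "Obama is a [MASK] by profession.",
    "Obama worked as a [MASK].",
]
for prompt in prompts:
    # restrict scoring to the candidate object so the two prompts are comparable
    result = fill(prompt, targets=["lawyer"])[0]
    print(f"{prompt:<35} P(lawyer) = {result['score']:.4f}")
```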
Middle-word vs. Dependency-based We compare
the performance of only using middle-word
prompts and concatenating them with
dependency-based prompts in Table 7. The
", "html": null, "text": "" }, "TABREF8": { "num": null, "type_str": "table", "content": "
Model     Man   Mine  Mine+Man  Mine+Para  Man+Para
BERT      31.1  38.9  39.6      36.2       37.3
ERNIE     32.1  42.3  43.8      40.1       41.1
KnowBert  26.2  34.1  34.6      31.9       32.1
", "html": null, "text": "Ablation study of middle-word and dependency-based prompts on BERT-base." }, "TABREF9": { "num": null, "type_str": "table", "content": "
Model       Man   Mine  Mine+Man  Mine+Para  Man+Para
BERT-base   21.3  28.7  29.4      26.8       27.0
BERT-large  24.2  34.5  34.5      31.6       29.8
: Micro-averaged accuracy (%) of various
LMs.
improvements confirm our intuition that words
belonging to the dependency path but not in the
middle of the subject and object are also indicative
of the relation.
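To make the distinction concrete, the following is a minimal sketch (not the paper's mining pipeline) that turns the dependency path between a subject and an object mention into a prompt by keeping the words on the path and replacing the two mentions with the placeholders x and y; the sentence, entity picks, and spaCy model name are illustrative assumptions.

```python
# Minimal sketch: build a dependency-path prompt between two entity mentions.
# The sentence, entities, and spaCy model are illustrative assumptions.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Pixar is a subsidiary owned by Disney.")
subj = next(t for t in doc if t.text == "Pixar")
obj = next(t for t in doc if t.text == "Disney")

def dependency_path(a, b):
    # climb from a to the root, then from b upward until we meet a's chain
    a_chain = [a]
    while a_chain[-1].head is not a_chain[-1]:
        a_chain.append(a_chain[-1].head)
    depth = {t.i: k for k, t in enumerate(a_chain)}
    b_chain = [b]
    while b_chain[-1].i not in depth:
        b_chain.append(b_chain[-1].head)
    lca = depth[b_chain[-1].i]
    return a_chain[:lca + 1] + list(reversed(b_chain[:-1]))

path = sorted(dependency_path(subj, obj), key=lambda t: t.i)
prompt = " ".join("x" if t is subj else "y" if t is obj else t.text for t in path)
print(prompt)  # only words on the path are kept (off-path words are dropped)
```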
Micro vs. Macro Comparing Tables 2 and
3, we can see that macro-averaged accuracy is
much lower than micro-averaged accuracy,
indicating that macro-averaged accuracy is a
more challenging metric that evaluates how many
unique objects LMs know. Our optimization-
based method improves macro-averaged accuracy
from 22.8% to 25.7% on BERT-base, and
from 25.7% to 30.1% on BERT-large. This
again confirms the effectiveness of ensembling
multiple prompts, but the gains are somewhat
smaller. Notably, in our optimization-based
methods, the ensemble weights are optimized
on each example in the training set, which is
more conducive to optimizing micro-averaged
accuracy. Optimization to improve macro-
averaged accuracy is potentially an interesting
direction for future work that may result in
prompts more generally applicable to different
types of objects.
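To make the metric difference above concrete, here is a minimal sketch (not the evaluation code released with this paper) of one plausible way to compute the two scores for a single relation; per-relation scores would then be averaged across relations as in the tables. The field names and the exact macro definition are assumptions for illustration.

```python
# Minimal sketch: micro- vs. macro-averaged accuracy for one relation.
# "examples" holds gold objects and model predictions; names are assumptions.
from collections import defaultdict

def micro_accuracy(examples):
    # every (subject, object) fact counts once
    return sum(e["pred"] == e["gold"] for e in examples) / len(examples)

def macro_accuracy(examples):
    # every unique gold object counts once, so frequent objects cannot dominate
    by_object = defaultdict(list)
    for e in examples:
        by_object[e["gold"]].append(e["pred"] == e["gold"])
    return sum(sum(v) / len(v) for v in by_object.values()) / len(by_object)

examples = [
    {"gold": "lawyer", "pred": "lawyer"},
    {"gold": "lawyer", "pred": "lawyer"},
    {"gold": "actor",  "pred": "lawyer"},
]
print(micro_accuracy(examples))  # 2/3: dominated by the frequent object
print(macro_accuracy(examples))  # 1/2: "lawyer" and "actor" weighted equally
```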
Performance of Different LMs In Table 8,
we compare BERT with ERNIE and KnowBert,
which are enhanced with external knowledge
by explicitly incorporating entity embeddings.
ERNIE outperforms BERT by 1 point even
with the manually defined prompts, but our
prompt generation methods further emphasize
the difference between the two methods, with
the highest accuracy numbers differing by 4.2
points using the Mine+Man method. This
", "html": null, "text": "" }, "TABREF10": { "num": null, "type_str": "table", "content": "
Model       Man   Mine  Mine+Man  Mine+Para  Man+Para
", "html": null, "text": "Micro-averaged accuracy (%) on LAMA-UHN." }, "TABREF11": { "num": null, "type_str": "table", "content": "", "html": null, "text": "Three part-of-speech-based regular expressions used in ReVerb to identify relational phrases." } } } }