{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:05:32.097872Z"
},
"title": "GOT: Testing for Originality in Natural Language Generation",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Brooks",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {
"settlement": "Washington",
"region": "DC"
}
},
"email": "jtbrooks@gwu.edu"
},
{
"first": "Abdou",
"middle": [],
"last": "Youssef",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {
"settlement": "Washington",
"region": "DC"
}
},
"email": "ayoussef@gwu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose an approach to automatically test for originality in generation tasks where no standard automatic measures exist. Our proposal addresses original uses of language, not necessarily original ideas. We provide an algorithm for our approach and a run-time analysis. The algorithm, which finds all of the original fragments in a ground-truth corpus and can reveal whether a generated fragment copies an original without attribution, has a run-time complexity of \u03b8(n log n) where n is the number of sentences in the ground truth.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose an approach to automatically test for originality in generation tasks where no standard automatic measures exist. Our proposal addresses original uses of language, not necessarily original ideas. We provide an algorithm for our approach and a run-time analysis. The algorithm, which finds all of the original fragments in a ground-truth corpus and can reveal whether a generated fragment copies an original without attribution, has a run-time complexity of \u03b8(n log n) where n is the number of sentences in the ground truth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This research addresses an ethical consideration for Natural Language Generation, namely, plagiarism. The Oxford English Dictionary defines original (adjective) as \"present or existing from the beginning; first or earliest\" and \"created directly and personally by a particular artist; not a copy or imitation\". But if we apply these definitions of \"original\" to language, then there are two ways in which a piece of generated text may be original. For one, the text may express an \"original idea\", as Einstein did in 1905 with \"E = mc\u00b2\". On the other hand, a non-original idea may be expressed in an original way, via, for example, figurative language. Our proposed approach addresses original uses of language; it does not necessarily address original ideas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "How do we protect intellectual property when it comes to language generators that are trained on a world-wide-web of data? Our language generators have to be held accountable. They should also be protected. What if a language generator generates an original analogy? What if it writes a poem that is so great that it ends up in the history books? Multiple language generators may be trained on the same ground truth (e.g., Wikipedia) with the same embedding vectors (e.g., BERT (Devlin et al., 2018) and GPT (Vaswani et al., 2017; Radford et al., 2018) ) and the same technologies (deep neural networks, LSTM cells (Hochreiter and Schmidhuber, 1997) , transformers (Vaswani et al., 2017) ). It will become a question of \"Whose generator said it first?\" With automatic language generation, we need a way to automatically measure, store, and reference original ideas and language. We propose one possible solution to these originality-related problems.",
"cite_spans": [
{
"start": 478,
"end": 499,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 508,
"end": 530,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 531,
"end": 552,
"text": "Radford et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 615,
"end": 649,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 665,
"end": 687,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the purposes of our analyses, we define ground truth as the set of sentences that are compared with the generated sentences. The ground truth may be larger than the training set, but should include the training set. The ground truth would also, ideally, grow. For example, the ground truth could start out as the training set, and as new sentences are generated with a trained model, the new sentences may be added to the ground truth. We also claim that generated sentences should only be added to the ground truth if they are original or include citations where appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our criteria and basis for evaluating measurements of originality are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "1. Can we tell whether a generated sentence is an original use of language?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "2. Can we tell whether the sentence contains a fragment from the ground truth that is a candidate for protection as intellectual property?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Therefore, when measuring generation originality by comparing the generated sentence with the sentences in the ground truth, the answers to questions 1 and 2 above are binary. Either the generated sentence is an original use of language or it is not. Either the generation is at risk of plagiarism or it is not. However, if we consider that the ground truth may not be representative of all the sentences that have ever been generated, then a measure of uncertainty may be added to the binary outcome.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "There are no standard automatic measures for novelty and originality in stylized language generation (Mou and Vechtomova, 2020) . High perplexity (PPL) and a low BLEU (Papineni et al., 2002) score may suggest novelty, but they are not sufficient for testing for originality. High PPL and a low BLEU score may be achieved when there is little overlap between the generated language and the ground truth, but nonsense and off-topic sentences are rewarded. While nonsense sentences may be novel, they may be grammatically incorrect, and sentences that are grammatically correct will likely have some overlap with fragments (n-grams) in the ground truth, such as using phrases like \"she said that\". So, we want a generation originality test that doesn't penalize n-gram overlap. (An original use of language may combine common n-grams in a new way.) We also want a generation originality test that flags potential plagiarism of original fragments in the ground truth, which neither BLEU nor PPL does.",
"cite_spans": [
{
"start": 101,
"end": 127,
"text": "(Mou and Vechtomova, 2020)",
"ref_id": "BIBREF6"
},
{
"start": 167,
"end": 190,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "We propose a generation originality test (GOT) that addresses original uses of language. It does not necessarily address original ideas. GOT is equally appropriate for stylized text generation, where novelty is desirable, and for other generation tasks where there is not an imposed style but the generation is open-ended, including summarization tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Our proposed generation originality test (GOT) determines whether:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "1. any fragment in a generated sentence equals an \"original\" fragment in the ground truth, in which case the generation may be in violation of a copyright law, if no citation of the original source is included; or, 2. the generated sentence is \"original\", per Definition 1, below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "Definition 1 (Original Sentence). A sentence of n tokens, whether generated or in the ground truth, is original if there exists an original k-gram within the sentence for some k \u2264 n. The originality of k-grams is defined next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "The definition of originality of a fragment (or k-gram) depends on whether we are referring to a generated fragment or to a fragment in the ground truth. Generated fragments are tested against the ground truth. If the generated fragment does not appear in the ground truth, then the generated fragment is considered original. If it appears once in the ground truth, then it is considered not original and so a citation may be needed. See Table 1 for a summary of the criterion for each type of fragment to be true. In Table 1 , C equals the number of times that fragment appears in the ground truth.",
"cite_spans": [],
"ref_spans": [
{
"start": 438,
"end": 445,
"text": "Table 1",
"ref_id": null
},
{
"start": 518,
"end": 525,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "Table 1: Criterion per fragment type, where C is the number of times the fragment appears in the ground truth. Note, C is always with respect to counts in the ground truth, even when evaluating generated fragments.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Type Criterion",
"sec_num": null
},
{
"text": "Ground Truth Fragment: Original (C = 1); Not Original (C \u2265 2). Generated Fragment: Original (C = 0); Not Original, Citation Needed (C = 1); Not Original, No Citation Needed (C \u2265 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Criterion",
"sec_num": null
},
{
"text": "Ground-truth fragments that appear once and only once in the ground truth are considered original. 1 Conversely, fragments that appear more than once in the ground truth are considered \"not original\". For example, \"lengthened shadow\" appeared twice in our ground truth, so it is not considered an original phrase in the ground truth. Combining non-original fragments to generate a new idea or analogy, however, could be considered an original use of language. For example, \"the writer is the lengthened shadow of a man\" contains the fragments \"the writer is\", \"the lengthened shadow\", and \"of a man\", which are not original fragments in our ground truth. However, the way in which they are combined in this example creates an original use of language: in this case, a metaphor. (Examples of fragments that appeared many times in our training set are \"it is\" and \"human life\".)",
"cite_spans": [
{
"start": 99,
"end": 100,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Type Criterion",
"sec_num": null
},
{
"text": "Here is one possible use of GOT. If a generated sentence contains a fragment that appears once and only once in the ground truth (after duplicate sentences are removed from the ground truth), then the generated sentence may be discarded because it contains a fragment from the ground truth that is a candidate for protection as intellectual property. In other words, the sentence may be in violation of a copyright law. Otherwise, the sentence could include a citation of the source for the original fragment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Criterion",
"sec_num": null
},
{
"text": "The definition of ground-truth original fragments calls for more nuance, which we elaborate next. We maintain a count per fragment that is incremented each time the fragment appears in a new sentence in a new document or by a different author (if the author can be determined in both instances) in the ground truth. In other words, if a fragment in the ground truth is repeated in the same document, or by the same author across documents, then the count for that fragment is incremented only once. (Therefore, an author, if known, should also be stored for each fragment, at least until the count for that fragment is greater than 1. When the count for a fragment is greater than 1, it has already been determined that the fragment was seen a second time in a different document by a different known, or unknown, author.) The count for a fragment will be 1 if it occurs just once in the ground truth, or if all of its occurrences are in the same document or by the same author; otherwise, the count will be greater than 1. Now, a ground-truth fragment is said to be original if and only if its count is 1. See Algorithm 1 for pseudo-code to test for originality and find all original fragments in a dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Criterion",
"sec_num": null
},
{
"text": "To examine fragments, we use a window of length wl varying between 2 and the sentence length, where wl is the number of words in the fragment. If the first or last word in the window is a determiner (e.g., 'a' or 'the'), a form of the verbs to be or to have ('is', 'are', 'am', 'was', 'were', 'has', 'had', 'have'), a punctuation mark, or a preposition/subordinating conjunction (e.g., 'to', 'of', or 'from'), the window is moved one step to the right. (Shortening the window to get rid of the determiner, special verb, special character, or preposition would result in a window size already covered in the previous step.) All words and characters are allowed in the other positions of the window, so, for example, a comma or preposition may appear in the middle of a window of size 3 or more.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Criterion",
"sec_num": null
},
{
"text": "The following complexity analysis is with respect to Algorithm 1. We are representing F and O with balanced binary search trees (e.g., red-black tree (Guibas and Sedgewick, 1978; Okasaki, 1999)) where the comparator is lexicographic ordering. Searching, insertion, and deletion in such trees take \u03b8(log n) comparisons. Since the length of fragments is assumed to be constant on average, each comparison takes constant time, implying that each search/insert/delete operation in O and F takes \u03b8(log n) time.",
"cite_spans": [
{
"start": 150,
"end": 178,
"text": "(Guibas and Sedgewick, 1978;",
"ref_id": "BIBREF4"
},
{
"start": 179,
"end": 194,
"text": "Okasaki, 1999))",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "Given our representation of F and O with balanced binary search trees, consider the following time complexity analysis:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "\u2022 Let n = number of sentences in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "The first for-loop (line 1) iterates n times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "\u2022 Let c = the average length (i.e., number of tokens) of a sentence in our ground truth. We found that c = 25, a fairly small constant. Therefore, the two for-loops in Steps 4 and 5 iterate on average a constant number of times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "\u2022 The binary search in F (line 10) has a runtime complexity of \u03b8(log n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "\u2022 Depending on the result of the binary search of F (line 10) there may be an insertion to F (line 14) which has a runtime complexity of \u03b8(log n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "\u2022 Then the number of calculations in lines 1-20 is the following function of n: 2c\u00b2n log n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "\u2022 The code segment of lines 21-26 takes \u03b8(n) time because the number of wl-token fragments in the ground truth dataset (of n sentences where each sentence consists of c tokens on average) is at most cn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "\u2022 Therefore, the runtime complexity is: \u03b8(n log n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "This algorithm would be executed before generation tasks, but it may also be executed whenever the reference set changes or is updated (for example, based on generated language).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "Algorithm 1: Find Original Fragments in the Ground Truth. Require: Input S, the sentences in the ground truth to evaluate. Require: Input F, the list of fragments already discovered (may be the empty set). Require: Input CountPerFrag(f), for all f \u2208 F. Require: O, the list of original fragments; the count per o \u2208 O should always be 1. 1: for each s \u2208 S do 2: l = number of tokens in sentence s 3: sentParts = sequence of tokens in s 4: for each wl in range 2 to l do (wl = length of window) 5: for each i in range 0 to l \u2212 wl + 1 do (assume zero-based indexing) 6: if sentParts[i] or sentParts[i + wl \u2212 1] = special token (see footnote 2) then 7: Continue to next i 8: else 9: frag = sentParts[i : i + wl] 10: if frag \u2208 F then (binary search of F) 11: CountPerFrag[frag] = CountPerFrag[frag] + 1 12: Break from for-loop in line 5 13: else (frag was not found in F) 14: Add frag to F 15: CountPerFrag[frag] = 1 16: end if 17: end if 18: end for 19: end for 20: end for 21: Set O to the empty set 22: for each frag in F do 23: if CountPerFrag[frag] == 1 then 24: Add frag to O 25: end if 26: end for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Complexity",
"sec_num": "3.1"
},
{
"text": "To see how GOT performed on a generation task, we applied it to a metaphor generator that we built, based on an RNN (Elman, 1990) architecture with LSTM cells (Hochreiter and Schmidhuber, 1997) for training a language model on the language of metaphors, using only metaphors and their topics as input. (A topic was inserted at the beginning of each input sentence.)",
"cite_spans": [
{
"start": 116,
"end": 129,
"text": "(Elman, 1990)",
"ref_id": "BIBREF2"
},
{
"start": 159,
"end": 193,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example: Results on One Application",
"sec_num": "4"
},
{
"text": "The model was trained to predict the next word in the sentences from our ground truth-a set of 22,113 quotes, where each quote contains at least one metaphor and is labeled with a topic. There are 1,684 unique topics (e.g., \"animals\", \"fear\", \"fishing\", \"grandparents\", \"happiness\", \"motives\", \"politics\", and more examples listed in Table 2 ) and the dataset is currently available to the public online as part of \"Dr. Mardy's Dictionary of Metaphorical Quotations\" (Grothe, 2008) .",
"cite_spans": [
{
"start": 467,
"end": 481,
"text": "(Grothe, 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Example: Results on One Application",
"sec_num": "4"
},
{
"text": "To the trained language model, we apply an inference engine that uses weighted random choice with a \"constraining factor\" to encourage language coherence and originality in the output, and patterns of metaphors to encourage the generation of grammatically correct metaphors (Brooks and Youssef, 2020) . The constraining factor, c (for c \u2265 1), causes the inference engine to select, with a probability of 1/c, the most likely word to appear next. Otherwise, and with a probability of 1 \u2212 1/c, the inference engine will make a weighted random selection. Selecting the most likely next word encourages language coherence in the output, while weighted random selection encourages originality. (We found that a constraining factor of 3 or 4 worked best with our model.) A generated sentence failed the GOT if a fragment of at least 2 words appeared as an \"original\" fragment in the training set; that is, if the fragment appeared just once in the ground truth. Using our metaphor generator, we generated 500 metaphors from randomly chosen topics. Applying GOT on each of the 500 generated metaphors, we found that only 32 repeated an \"original\" fragment from the training set. From this experiment, we conclude that out of the 500 generated metaphors, 468 of them, or just over 93%, can be considered original. (Table 2 provides examples from our metaphor generator on randomly generated topics.)",
"cite_spans": [
{
"start": 274,
"end": 300,
"text": "(Brooks and Youssef, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1306,
"end": 1314,
"text": "(Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Example: Results on One Application",
"sec_num": "4"
},
{
"text": "Table 2 lists examples of generated metaphors, given here as topic: generated metaphor. tears: The arrested waters shone and danced. fathers: Expectations are premeditated resentments. character: Today is the companion of genius. friends: Assumptions are the termites of relationships. writers: The writer is the lengthened shadow of a man. world: This world is the rainbow of us. truth: The brain is the eden of a star. innocence: The cure for silence is the salt of speech. imagination: Success is the only deadline.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Example: Results on One Application",
"sec_num": "4"
},
{
"text": "Our approach to originality testing includes two contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "\u2022 An automatic test, where no standard existed, for originality in generated language",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "\u2022 An automatic test, where no standard existed, for identifying where generators are in violation of copying an original use of language without attribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The first contribution tells us whether a generation is an original use of language. The second contribution tells us whether a generation is, at least, not at risk of committing plagiarism. For example, the sentence \"A bird built a nest\" is not an original use of language; however, it is probably not plagiarism, since it does not contain a fragment that is so rare that it should be protected as an original use of language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "For simplicity of explanation, we qualify a fragment as \"original\", and therefore a candidate for protection of intellectual property, if it appears \"once and only once\" in the ground truth. However, with very large datasets, it may be necessary to relax the criterion from \"once and only once\" to a relatively small number of occurrences in order to consider a fragment a candidate for protection of intellectual property.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If the first or last word in the window is a determiner (e.g., 'a' or 'the'), a form of the verbs to be or to have ('is', 'are', 'am', 'was', 'were', 'has', 'had', 'have'), a punctuation mark, or a preposition/subordinating conjunction (e.g., 'to', 'of', or 'from'), the window is moved one step to the right.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Discriminative pattern mining for natural language metaphor generation",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Brooks",
"suffix": ""
},
{
"first": "Abdou",
"middle": [],
"last": "Youssef",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Discriminative Pattern Mining Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Brooks and Abdou Youssef. 2020. Discriminative pattern mining for natural language metaphor generation. In Proceedings of the Discriminative Pattern Mining Workshop.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Finding structure in time",
"authors": [
{
"first": "Jeffrey",
"middle": [
"L"
],
"last": "Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "COGNITIVE SCIENCE",
"volume": "14",
"issue": "2",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. COGNITIVE SCIENCE, 14(2):179-211.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "I Never Metaphor I Didn't Like: A Comprehensive Compilation of History's Greatest Analogies, Metaphors, and Similes",
"authors": [
{
"first": "Mardy",
"middle": [],
"last": "Grothe",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mardy Grothe. 2008. I Never Metaphor I Didn't Like: A Comprehensive Compilation of History's Greatest Analogies, Metaphors, and Similes. Harper Collins.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A dichromatic framework for balanced trees",
"authors": [
{
"first": "Leo",
"middle": [
"J"
],
"last": "Guibas",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Sedgewick",
"suffix": ""
}
],
"year": 1978,
"venue": "19th Annual Symposium on Foundations of Computer Science (sfcs 1978)",
"volume": "",
"issue": "",
"pages": "8--21",
"other_ids": {
"DOI": [
"10.1109/SFCS.1978.3"
]
},
"num": null,
"urls": [],
"raw_text": "Leo J. Guibas and Robert Sedgewick. 1978. A dichromatic framework for balanced trees. In 19th Annual Symposium on Foundations of Computer Science (sfcs 1978), pages 8-21.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long shortterm memory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Hochreiter and J. Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735-1780.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Stylized text generation: Approaches and applications",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Vechtomova",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "19--22",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-tutorials.5"
]
},
"num": null,
"urls": [],
"raw_text": "Lili Mou and Olga Vechtomova. 2020. Stylized text generation: Approaches and applications. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 19-22, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Red-black trees in a functional setting",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Okasaki",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Functional Programming",
"volume": "9",
"issue": "4",
"pages": "471--477",
"other_ids": {
"DOI": [
"10.1017/S0956796899003494"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Okasaki. 1999. Red-black trees in a functional setting. Journal of Functional Programming, 9(4):471-477.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "Examples of Generated Metaphors",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
}
}
}
}