{ "paper_id": "P17-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:20:02.896582Z" }, "title": "Automatically Generating Rhythmic Verse with Neural Networks", "authors": [ { "first": "Jack", "middle": [], "last": "Hopkins", "suffix": "", "affiliation": { "laboratory": "", "institution": "Computer Laboratory University of Cambridge", "location": {} }, "email": "jack.hopkins@me.com" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "", "affiliation": { "laboratory": "", "institution": "Computer Laboratory University of Cambridge", "location": {} }, "email": "dkiela@fb.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose two novel methodologies for the automatic generation of rhythmic poetry in a variety of forms. The first approach uses a neural language model trained on a phonetic encoding to learn an implicit representation of both the form and content of English poetry. This model can effectively learn common poetic devices such as rhyme, rhythm and alliteration. The second approach considers poetry generation as a constraint satisfaction problem where a generative neural language model is tasked with learning a representation of content, and a discriminative weighted finite state machine constrains it on the basis of form. By manipulating the constraints of the latter model, we can generate coherent poetry with arbitrary forms and themes. A large-scale extrinsic evaluation demonstrated that participants consider machine-generated poems to be written by humans 54% of the time. In addition, participants rated a machinegenerated poem to be the most human-like amongst all evaluated.", "pdf_parse": { "paper_id": "P17-1016", "_pdf_hash": "", "abstract": [ { "text": "We propose two novel methodologies for the automatic generation of rhythmic poetry in a variety of forms. The first approach uses a neural language model trained on a phonetic encoding to learn an implicit representation of both the form and content of English poetry. This model can effectively learn common poetic devices such as rhyme, rhythm and alliteration. The second approach considers poetry generation as a constraint satisfaction problem where a generative neural language model is tasked with learning a representation of content, and a discriminative weighted finite state machine constrains it on the basis of form. By manipulating the constraints of the latter model, we can generate coherent poetry with arbitrary forms and themes. A large-scale extrinsic evaluation demonstrated that participants consider machine-generated poems to be written by humans 54% of the time. In addition, participants rated a machinegenerated poem to be the most human-like amongst all evaluated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Poetry is an advanced form of linguistic communication, in which a message is conveyed that satisfies both aesthetic and semantic constraints. As poetry is one of the most expressive forms of language, the automatic creation of texts recognisable as poetry is difficult. 
In addition to requiring an understanding of many aspects of language including phonetic patterns such as rhyme, rhythm and alliteration, poetry composition also requires a deep understanding of the meaning of language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Poetry generation can be divided into two subtasks, namely the problem of content, which is concerned with a poem's semantics, and the problem of form, which is concerned with the aesthetic rules that a poem follows. These rules may describe aspects of the literary devices used, and are usually highly prescriptive. Examples of different forms of poetry are limericks, ballads and sonnets. Limericks, for example, are characterised by their strict rhyme scheme (AABBA), their rhythm (two unstressed syllables followed by one stressed syllable) and their shorter third and fourth lines. Creating such poetry requires not only an understanding of the language itself, but also of how it sounds when spoken aloud.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Statistical text generation usually requires the construction of a generative language model that explicitly learns the probability of any given word given previous context. Neural language models (Schwenk and Gauvain, 2005; Bengio et al., 2006) have garnered signficant research interest for their ability to learn complex syntactic and semantic representations of natural language (Mikolov et al., 2010; Sutskever et al., 2014; Cho et al., 2014; Kim et al., 2015) . Poetry generation is an interesting application, since performing this task automatically requires the creation of models that not only focus on what is being written (content), but also on how it is being written (form).", "cite_spans": [ { "start": 197, "end": 224, "text": "(Schwenk and Gauvain, 2005;", "ref_id": "BIBREF25" }, { "start": 225, "end": 245, "text": "Bengio et al., 2006)", "ref_id": "BIBREF2" }, { "start": 383, "end": 405, "text": "(Mikolov et al., 2010;", "ref_id": "BIBREF20" }, { "start": 406, "end": 429, "text": "Sutskever et al., 2014;", "ref_id": "BIBREF27" }, { "start": 430, "end": 447, "text": "Cho et al., 2014;", "ref_id": "BIBREF4" }, { "start": 448, "end": 465, "text": "Kim et al., 2015)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We experiment with two novel methodologies for solving this task. The first involves training a model to learn an implicit representation of content and form through the use of a phonological encoding. The second involves training a generative language model to represent content, which is then constrained by a discriminative pronunciation model, representing form. This second model is of particular interest because poetry with arbitrary rhyme, rhythm, repetition and themes can be generated by tuning the pronunciation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Automatic poetry generation is an important task due to the significant challenges involved. 
Most systems that have been proposed can loosely be categorised as rule-based expert systems, or statistical approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Rule-based poetry generation attempts include case-based reasoning (Gerv\u00e1s, 2000) , templatebased generation (Colton et al., 2012) , constraint satisfaction (Toivanen et al., 2013; Barbieri et al., 2012 ) and text mining (Netzer et al., 2009) . These approaches are often inspired by how humans might generate poetry.", "cite_spans": [ { "start": 67, "end": 81, "text": "(Gerv\u00e1s, 2000)", "ref_id": "BIBREF8" }, { "start": 109, "end": 130, "text": "(Colton et al., 2012)", "ref_id": "BIBREF5" }, { "start": 157, "end": 180, "text": "(Toivanen et al., 2013;", "ref_id": "BIBREF28" }, { "start": 181, "end": 202, "text": "Barbieri et al., 2012", "ref_id": "BIBREF1" }, { "start": 221, "end": 242, "text": "(Netzer et al., 2009)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Statistical approaches, conversely, make no assumptions about the creative process. Instead, they attempt to extract statistical patterns from existing poetry corpora in order to construct a language model, which can then be used to generate new poetic variants (Yi et al., 2016; Greene et al., 2010) . Neural language models have been increasingly applied to the task of poetry generation. The work of Zhang and Lapata 2014is one such example, where they were able to outperform all other classical Chinese poetry generation systems with both manual and automatic evaluation. Ghazvininejad et al. (2016) and Goyal et al. (2016) apply neural language models with regularising finite state machines. However, in the former case the rhythm of the output cannot be defined at sample time, and in the latter case the finite state machine is not trained on rhythm at all, as it is trained on dialogue acts. McGregor et al. (2016) construct a phonological model for generating prosodic texts, however there is no attempt to embed semantics into this model.", "cite_spans": [ { "start": 262, "end": 279, "text": "(Yi et al., 2016;", "ref_id": "BIBREF32" }, { "start": 280, "end": 300, "text": "Greene et al., 2010)", "ref_id": "BIBREF11" }, { "start": 577, "end": 604, "text": "Ghazvininejad et al. (2016)", "ref_id": "BIBREF9" }, { "start": 609, "end": 628, "text": "Goyal et al. (2016)", "ref_id": "BIBREF10" }, { "start": 902, "end": 924, "text": "McGregor et al. (2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our first model is a pure neural language model, trained on a phonetic encoding of poetry in order to represent both form and content. Phonetic encodings of language represent information as sequences of around 40 basic acoustic symbols. Training on phonetic symbols allows the model to learn effective representations of pronunciation, including rhyme and rhythm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonetic-level Model", "sec_num": "3" }, { "text": "However, just training on a large corpus of poetry data is not enough. Specifically, two problems need to be overcome. 1) Phonetic encoding results in information loss: words that have the same pronunciation (homophones) cannot be perfectly reconstructed from the corresponding phonemes. 
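To make this information loss concrete, the sketch below (illustrative only, not the system described here) shows how a CMU-style pronunciation lookup collapses homophones onto a single phoneme sequence, so that inverting the encoding is ambiguous:

```python
# Minimal illustration of why a phonetic encoding is lossy: homophones map to
# the same phoneme sequence. Entries are abbreviated CMU-dict-style (ARPAbet).
CMU_SUBSET = {
    "rain":   ["R", "EY1", "N"],
    "reign":  ["R", "EY1", "N"],
    "rein":   ["R", "EY1", "N"],
    "night":  ["N", "AY1", "T"],
    "knight": ["N", "AY1", "T"],
}

def encode(words):
    """Map an orthographic word sequence to one flat phoneme sequence."""
    phones = []
    for w in words:
        phones.extend(CMU_SUBSET[w])
        phones.append("|")          # explicit word-break symbol, as used below
    return phones

def candidate_words(phones):
    """All words whose pronunciation matches the given phoneme sequence."""
    return [w for w, p in CMU_SUBSET.items() if p == phones]

if __name__ == "__main__":
    print(encode(["rain", "night"]))           # ['R','EY1','N','|','N','AY1','T','|']
    print(candidate_words(["R", "EY1", "N"]))  # ['rain', 'reign', 'rein'] -- ambiguous
```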
This means that we require an additional probabilistic model in order to determine the most likely word given a sequence of phonemes. 2) The variety of poetry and poetic devices one can usee.g., rhyme, rhythm, repetition-means that poems sampled from a model trained on all poetry would be unlikely to maintain internal consistency of meter and rhyme. It is therefore important to train the model on poetry which has its own internal consistency. Thus, the model comprises three steps: transliterating an orthographic sequence to its phonetic representation, training a neural language model on the phonetic encoding, and decoding the generated sequence back from phonemes to orthographic symbols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonetic-level Model", "sec_num": "3" }, { "text": "Phonetic encoding To solve the first step, we apply a combination of word lookups from the CMU pronunciation dictionary (Weide, 2005) with letter-to-sound rules for handling out-ofvocabulary words. These rules are based on the CART techniques described by Black et al. (1998) , and are represented with a simple Finite State Transducer 1 . The number of letters and number of phones in a word are rarely a one-to-one match: letters may match with up to three phones. In addition, virtually all letters can, in some contexts, map to zero phones, which is known as 'wild' or epsilon. Expectation Maximisation is used to compute the probability of a single letter matching a single phone, which is maximised through the application of Dynamic Time Warping (Myers et al., 1980) to determine the most likely position of epsilon characters.", "cite_spans": [ { "start": 120, "end": 133, "text": "(Weide, 2005)", "ref_id": "BIBREF30" }, { "start": 256, "end": 275, "text": "Black et al. (1998)", "ref_id": "BIBREF3" }, { "start": 753, "end": 773, "text": "(Myers et al., 1980)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Phonetic-level Model", "sec_num": "3" }, { "text": "Although this approach offers full coverage over the training corpus-even for abbreviated words like ask'd and archaic words like renewest-it has several limitations. Irregularities in the English language result in difficulty determining general letter-to-sound rules that can manage words with unusual pronunciations such as \"colonel\" and \"receipt\" 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonetic-level Model", "sec_num": "3" }, { "text": "In addition to transliterating words into phoneme sequences, we also represent word break characters as a specific symbol. This makes decipherment, when converting back into an orthographic representation, much easier. Phonetic transliteration allows us to construct a phonetic poetry corpus comprising 1,046,536 phonemes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonetic-level Model", "sec_num": "3" }, { "text": "We train a Long-Short Term Memory network (Hochreiter and Schmidhuber, 1997) on the phonetic representation of our poetry corpus. The model is trained using stochastic gradient descent to predict the next phoneme given a sequence of phonemes. Specifically, we maximize a multinomial logistic regression objective over the final softmax prediction. Each phoneme is represented as a 256-dimensional embedding, and the model consists of two hidden layers of size 256. We apply backpropagationthrough-time (Werbos, 1990) for 150 timesteps, which roughly equates to four lines of poetry in sonnet form. 
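As an illustration of this architecture, the following PyTorch sketch assumes a vocabulary of roughly 40 phoneme symbols plus the word-break marker and pre-batched index tensors; it is not the authors' implementation, and the learning rate and batching are placeholders.

```python
import torch
import torch.nn as nn

class PhonemeLM(nn.Module):
    """Next-phoneme predictor: 256-d embeddings, two 256-d LSTM layers, softmax."""
    def __init__(self, vocab_size, dim=256, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, num_layers=layers, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

VOCAB_SIZE = 41   # assumption: ~40 phoneme symbols plus the word-break marker
model = PhonemeLM(VOCAB_SIZE)

# Assumed setup: `batches` yields (input, target) index tensors 150 steps long,
# matching the 150-step truncated backpropagation-through-time window.
def train(model, batches, epochs=25, lr=1.0):   # lr is an illustrative placeholder
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()             # multinomial logistic objective
    for _ in range(epochs):
        for x, y in batches:
            logits, _ = model(x)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), y.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
```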
This allows the network to learn features like rhyme even when spread over multiple lines. Training is preemptively stopped at 25 epochs to prevent overfitting.", "cite_spans": [ { "start": 42, "end": 76, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Neural language model", "sec_num": null }, { "text": "Orthographic decoding When decoding from phonemes back to orthographic symbols, the goal is to compute the most likely word corresponding to a sequence of phonemes. That is, we compute the most probable hypothesis word W given a phoneme sequence \u03c1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural language model", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "arg max i P ( W i | \u03c1 )", "eq_num": "(1)" } ], "section": "Neural language model", "sec_num": null }, { "text": "We can consider the phonetic encoding of plaintext to be a homophonic cipher; that is, a cipher in which each symbol can correspond to one or more possible decodings. The problem of homophonic decipherment has received significant research attention in the past; with approaches utilising Expectation Maximisation (Knight et al., 2006) , Integer Programming (Ravi and Knight, 2009) and A* search (Corlett and Penn, 2010) . Transliteration from phonetic to an orthographic representation is done by constructing a Hidden Markov Model using the CMU pronunciation dictionary (Weide, 2005) and an n-gram language model. We calculate the transition probabilities (using the n-gram model) and the emission matrix (using the CMU pronunciation dictionary) to determine pronunciations that correspond to a single word. All pronunciations are naively considered equiprobable. We perform Viterbi decoding to find the most likely sequence of words. This means finding the most likely word w t+1 given a previous word sequence (w t\u2212n , ..., w t ).", "cite_spans": [ { "start": 314, "end": 335, "text": "(Knight et al., 2006)", "ref_id": "BIBREF14" }, { "start": 358, "end": 381, "text": "(Ravi and Knight, 2009)", "ref_id": "BIBREF24" }, { "start": 396, "end": 420, "text": "(Corlett and Penn, 2010)", "ref_id": "BIBREF6" }, { "start": 572, "end": 585, "text": "(Weide, 2005)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Neural language model", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "arg max w t+1 P ( w t+1 | w 1 , ... , w t )", "eq_num": "(2)" } ], "section": "Neural language model", "sec_num": null }, { "text": "If a phonetic sequence does not map to any word, we apply the heuristic of artificially breaking the sequence up into two subsequences at index n, such that n maximises the n-gram frequency of the subsequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural language model", "sec_num": null }, { "text": "Output A popular form of poetry with strict internal structure is the sonnet. 
Popularised in English by Shakespeare, the sonnet is characterised by a strict rhyme scheme and exactly fourteen lines of Iambic Pentameter (Greene et al., 2010) .", "cite_spans": [ { "start": 218, "end": 239, "text": "(Greene et al., 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Neural language model", "sec_num": null }, { "text": "Since the 17,134 word tokens in Shakespeare's 153 sonnets are insufficient to train an effective model, we augment this corpus with poetry taken from the website sonnets.org, yielding a training set of 288,326 words and 1,563,457 characters. An example of the output when training on this sonnets corpus is provided in Figure 1 . Not only is it mostly in strict Iambic Pentameter, but the grammar of the output is mostly correct and the poetry contains rhyme.", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 327, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Neural language model", "sec_num": null }, { "text": "As the example shows, phonetic-level language models are effective at learning poetic form, despite small training sets and relatively few parameters. However, the fact that they require training data with internal poetic consistency implies that they do not generalise to other forms of poetry. That is, in order to generate poetry in Dactylic Hexameter (for example), a phonetic model must be trained on a corpus of Dactylic poetry. Not only is this impractical, but in many cases no corpus of adequate size even exists. Even when such poetic corpora are available, a new model must be trained for each type of poetry. This precludes tweaking the form of the output, which is important when generating poetry automatically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "We now explore an alternative approach. Instead of attempting to represent both form and content in a single model, we construct a pipeline containing a generative language model representing content, and a discriminative model representing form. This allows us to represent the problem of creating poetry as a constraint satisfaction problem, where we can modify constraints to restrict the types of poetry we generate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "Character Language Model Rather than train a model on data representing features of both content and form, we now use a simple character-level model (Sutskever et al., 2011) focused solely on content. This approach offers several benefits over the word-level models that are prevalent in the literature. Namely, their more compact vocabulary allows for more efficient training; they can learn common prefixes and suffixes to allow us to sample words that are not present in the training corpus and can learn effective language representations from relatively small corpora; and they can handle archaic and incorrect spellings of words.", "cite_spans": [ { "start": 149, "end": 173, "text": "(Sutskever et al., 2011)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "As we no longer need the model to explicitly represent the form of generated poetry, we can loosen our constraints when choosing a training corpus. 
Instead of relying on poetry only in sonnet form, we can instead construct a generic corpus of poetry taken from online sources. This corpus is composed of 7.56 million words and 34.34 million characters, taken largely from 20 th Century poetry books found online. The increase in corpus size facilitates a corresponding increase in the number of permissible model parameters. This allows us to train a 3-layer LSTM model with 2048dimensional hidden layers, with embeddings in 128 dimensions. The model was trained to predict the next character given a sequence of characters, using stochastic gradient descent. We attenuate the learning rate over time, and by 20 epochs the model converges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "Rhythm Modeling Although a character-level language model trained on a corpus of generic poetry allows us to generate interesting text, internal irregularities and noise in the training data prevent the model from learning important features such as rhythm. Hence, we require an additional classifier to constrain our model by either accepting or rejecting sampled lines based on the presence or absence of these features. As the presence of meter (rhythm) is the most characteristic feature of poetry, it therefore must be our primary focus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "Pronunciation dictionaries have often been used to determine the syllabic stresses of words (Colton et al., 2012; Manurung et al., 2000; Misztal and Indurkhya, 2014) , but suffer from some limitations for constructing a classifier. All word pronunciations are considered equiprobable, including archaic and uncommon pronunciations, and pronunciations are provided context free, despite the importance of context for pronunciation 3 . Furthermore, they are constructed from American English, meaning that British English may be misclassified.", "cite_spans": [ { "start": 92, "end": 113, "text": "(Colton et al., 2012;", "ref_id": "BIBREF5" }, { "start": 114, "end": 136, "text": "Manurung et al., 2000;", "ref_id": "BIBREF17" }, { "start": 137, "end": 165, "text": "Misztal and Indurkhya, 2014)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "These issues are circumvented by applying lightly supervised learning to determine the contextual stress pattern of any word. That is, we exploit the latent structure in our corpus of sonnet poetry, namely, the fact that sonnets are composed of lines in rigid Iambic Pentameter, and are therefore exactly ten syllables long with alternating syllabic stress. This allows us to derive a syllablestress distribution. Although we use the sonnets corpus for this, it is important to note that any corpus with such a latent structure could be used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "We represent each line of poetry as a cascade of Weighted Finite State Transducers (WFST). A WFST is a finite-state automaton that maps between two sets of symbols. It is defined as an eight-tuple where \u27e8Q, \u03a3, \u03c1, I, F, \u2206, \u03bb, p\u27e9: A WFST assigns a probability (or weight, in the general case) to each path through it, going from an initial state to an end state. 
Every path corresponds to an input and output label sequence, and there can be many such paths for each sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "WFSTs are often used in a cascade, where a number of machines are executed in series, such that the output tape of one machine is the input tape for the next. Formally, a cascade is represented by the functional composition of several machines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W (x, z) = A(x|y) \u2022 B(y|z) \u2022 C(z)", "eq_num": "(3)" } ], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "Where W (x, z) is defined as the \u2295 sum of the path probabilities through the cascade, and x and z are an input sequence and output sequence respectively. In the real semiring (where the product of probabilities are taken in series, and the sum of the probabilities are taken in parallel), we can rewrite the definition of weighted composition to produce the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "W (x, z) = \u2295 y A(x | y) \u2297 B(y | z) \u2297 C(z) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "As we are dealing with probabilities, this can be rewritten as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "P (x, z) = \u2211 y P (x | y)P (y | z)P (z) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "We can perform Expectation Maximisation over the poetry corpus to obtain a probabilistic classifier which enables us to determine the most likely stress patterns for each word. Every word is represented by a single transducer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "In each cascade, a sequence of input words is mapped onto a sequence of stress patterns \u27e8\u00d7, /\u27e9 where each pattern is between 1 and 5 syllables in length 4 . We initially set all transition probabilities equally, as we make no assumptions about the stress distributions in our training set. We then iterate over each line of the sonnet corpus, using Expectation Maximisation to train the cascades. In practice, there are several de facto variations of Iambic meter which are permissible, as shown in Figure 2 . We train the rhythm classifier by converging the cascades to whatever output is the most likely given the line. Constraining the model To generate poetry using this model, we sample sequences of characters from the character-level language model. To impose rhythm constrains on the language model, we first represent these sampled characters at the word level and pool sampled characters into word tokens in an intermediary buffer. We then apply the separately trained word-level WFSTs to construct a cascade of this buffer and perform Viterbi decoding over the cascade. 
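The following is a simplified stand-in for that scoring step, assuming the per-word stress-pattern distributions have already been estimated (the values below are hard-coded placeholders for the EM-trained ones): dynamic programming over syllable positions gives the probability of the best stress assignment for the buffered words that matches a target meter.

```python
# "x" = unstressed, "/" = stressed; probabilities are illustrative placeholders.
STRESS = {
    "shall":   {"x": 0.7, "/": 0.3},
    "i":       {"x": 0.6, "/": 0.4},
    "compare": {"x/": 0.9, "/x": 0.1},
    "thee":    {"/": 0.5, "x": 0.5},
}

def best_fit(words, target):
    """Probability of the best per-word stress assignment whose concatenation
    equals `target`, found by dynamic programming over syllables consumed."""
    best = {0: 1.0}                                  # syllables consumed -> best prob
    for w in words:
        nxt = {}
        for consumed, p in best.items():
            for pattern, q in STRESS.get(w, {}).items():
                end = consumed + len(pattern)
                if target[consumed:end] == pattern:
                    nxt[end] = max(nxt.get(end, 0.0), p * q)
        best = nxt
    return best.get(len(target), 0.0)

line = ["shall", "i", "compare", "thee"]             # a partially sampled buffer
print(best_fit(line, "x/x/x"))                       # 0.126 -> accept above a threshold
```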
This defines the distribution of stress-patterns over our word tokens.", "cite_spans": [], "ref_spans": [ { "start": 499, "end": 507, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "\u00d7 / \u00d7 / \u00d7 / \u00d7 / \u00d7 / / \u00d7 \u00d7 / \u00d7 / \u00d7 / \u00d7 / \u00d7 / \u00d7 / \u00d7 / \u00d7 / \u00d7 / \u00d7 / \u00d7 \u00d7 / \u00d7 / \u00d7 / \u00d7 / \u00d7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "We can represent this cascade as a probabilistic classifier, and accept or reject the buffered output based on how closely it conforms to the desired meter. While sampling sequences of words from this model, the entire generated sequence is passed to the classifier each time a new word is sampled. The pronunciation model then returns the probability that the entire line is within the specified meter. If a new word is rejected by the classifier, the state of the network is rolled back to the last formulaically acceptable state of the line, removing the rejected word from memory. The constraint on rhythm can be controlled by adjusting the acceptability threshold of the classifier. By increasing the threshold, output focuses on form over content. Conversely, decreasing the criterion puts greater emphasis on content. Figure 3 : Two approaches for generating themed poetry.", "cite_spans": [], "ref_spans": [ { "start": 825, "end": 833, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Constrained Character-level Model", "sec_num": "4" }, { "text": "It is important for any generative poetry model to include themes and poetic devices. One way to achieve this would be by constructing a corpus that exhibits the desired themes and devices. To create a themed corpus about 'love', for instance, we would aggregate love poetry to train the model, which would thus learn an implicit representation of love. However, this forces us to generate poetry according to discrete themes and styles from pretrained models, requiring a new training corpus for each model. In other words, we would suffer from similar limitations as with the phonetic-level model, in that we require a dedicated corpus. Alternatively, we can manipulate the language model by boosting character probabilities at sample time to increase the probability of sampling thematic words like 'love'. This approach is more robust, and provides us with more control over the final output, including the capacity to vary the inclusion of poetic devices in the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Themes and Poetic devices", "sec_num": "4.1" }, { "text": "Themes In order to introduce thematic content, we heuristically boost the probability of sampling words that are semantically related to a theme word from the language model. First, we compile a list of similar words to a key theme word by retrieving its semantic neighbours from a distributional semantic model (Mikolov et al., 2013) . For example, the theme winter might include thematic words frozen, cold, snow and frosty. We represent these semantic neighbours at the character level, and heuristically boost their probability by multiplying the sampling probability of these character strings by their cosine similarity to the key word, plus a constant. 
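One way such a boost could be implemented is sketched below; the neighbour list, similarities and boost constant are illustrative, and in practice the neighbours would come from a trained embedding model (e.g. gensim's most_similar).

```python
import numpy as np

CHAR_TO_ID = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz ")}

def boost_for_themes(char_probs, prefix, theme_words, similarities, boost_const=1.0):
    """Multiply the probability of the next character by (cosine + constant)
    whenever it keeps the partially sampled word consistent with a thematic
    neighbour. `char_probs` is the character LM's softmax output (index ->
    probability), `prefix` is the word sampled so far. Illustrative only."""
    boosted = char_probs.copy()
    for word, cos in zip(theme_words, similarities):
        if word.startswith(prefix) and len(word) > len(prefix):
            next_char = word[len(prefix)]
            boosted[CHAR_TO_ID[next_char]] *= (cos + boost_const)
    return boosted / boosted.sum()                   # renormalise before sampling

# Assumed setup: neighbours = w2v.most_similar("winter", topn=20) -> [(word, cos), ...]
neighbours = [("frozen", 0.71), ("cold", 0.68), ("snow", 0.66)]
words, sims = zip(*neighbours)
probs = np.full(len(CHAR_TO_ID), 1.0 / len(CHAR_TO_ID))
print(boost_for_themes(probs, "fro", list(words), list(sims)))   # 'z' is boosted
```

The same multiplicative adjustment, applied to recently sampled word-initial characters rather than to thematic neighbours, is what drives the alliteration, assonance and consonance devices described below.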
Thus, the likelihood of sampling a thematically related word is artificially increased, while still constraining the model rhythmically.", "cite_spans": [ { "start": 312, "end": 334, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Themes and Poetic devices", "sec_num": "4.1" }, { "text": "Errors per line 1 2 3 4 Total Phonetic Model 11 2 3 1 28 Character Model + WFST 6 5 1 1 23 Character Model 3 8 7 7 68 Table 1 : Number of lines with n errors from a set of 50 lines generated by each of the three models.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 133, "text": "Phonetic Model 11 2 3 1 28 Character Model + WFST 6 5 1 1 23 Character Model 3 8 7 7 68 Table 1", "ref_id": null } ], "eq_spans": [], "section": "Themes and Poetic devices", "sec_num": "4.1" }, { "text": "Poetic devices A similar method may be used for poetic devices such as assonance, consonance and alliteration. Since these devices can be orthographically described by the repetition of identical sequences of characters, we can apply the same heuristic to boost the probability of sampling character strings that have previously been sampled. That is, to sample a line with many instances of alliteration (multiple words with the same initial sound) we record the historical frequencies of characters sampled at the beginning of each previous word. After a word break character, we boost the probability that those characters will be sampled again in the softmax. We only keep track of frequencies for a fixed number of time steps. By increasing or decreasing the size of this window, we can manipulate the prevalence of alliteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Themes and Poetic devices", "sec_num": "4.1" }, { "text": "Variations of this approach are applied to invoke consonance (by boosting intra-word consonants) and assonance (by boosting intra-word vowels). An example of two sampled lines with high degrees of alliteration, assonance and consonance is given in Figure 4c .", "cite_spans": [], "ref_spans": [ { "start": 248, "end": 257, "text": "Figure 4c", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Themes and Poetic devices", "sec_num": "4.1" }, { "text": "In order to examine how effective our methodologies for generating poetry are, we evaluate the proposed models in two ways. First, we perform an intrinsic evaluation where we examine the quality of the models and the generated poetry. Second, we perform an extrinsic evaluation where we evaluate the generated output using human annotators, and compare it to human-generated poetry.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "To evaluate the ability of both models to generate formulaic poetry that adheres to rhythmic rules, we compared sets of fifty sampled lines from each model. The first set was sampled from the phonetic-level model trained on Iambic poetry. The second set was sampled from the characterlevel model, constrained to Iambic form. 
For com-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intrinsic evaluation", "sec_num": "5.1" }, { "text": "Wikipedia 64.84% 83.35% 97.53% Sonnets 85.95% 80.32% 99.36% Table 2 : Error when transliterating text into phonemes and reconstructing back into text.", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 67, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Word Line Coverage", "sec_num": null }, { "text": "parison, and to act as a baseline, we also sampled from the unconstrained character model. We created gold-standard syllabic classifications by recording each line spoken-aloud, and marking each syllable as either stressed or unstressed. We then compared these observations to loose Iambic Pentameter (containing all four variants), to determine how many syllabic misclassifications existed on each line. This was done by speaking each line aloud, and noting where the speaker put stresses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Line Coverage", "sec_num": null }, { "text": "As Table 1 shows, the constrained character level model generated the most formulaic poetry. Results from this model show that 70% of lines had zero mistakes, with frequency obeying an inverse power-law relationship with the number of errors. We can see that the phonetic model performed similarly, but produced more subtle mistakes than the constrained character model: many of the errors were single mistakes in an otherwise correct line of poetry.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Word Line Coverage", "sec_num": null }, { "text": "In order to investigate this further, we examined to what extent these errors are due to transliteration (i.e., the phonetic encoding and orthographic decoding steps). Table 2 shows the reconstruction accuracy per word and per line when transliterating either Wikipedia or Sonnets to phonemes using the CMU pronunciation dictionary and subsequently reconstructing English text using the ngram model 5 . Word accuracy reflects the frequency of perfect reconstruction, whereas per line tri-gram similarity (Kondrak, 2005) reflects the overall reconstruction. Coverage captures the percentage of in-vocabulary items. The relatively low per-word accuracy achieved on the Wikipedia corpus is likely due to the high frequency of out-ofvocabulary words. The results show that a significant number of errors in the phonetic-level model are likely to be caused by transliteration mistakes.", "cite_spans": [ { "start": 504, "end": 519, "text": "(Kondrak, 2005)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 168, "end": 175, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Word Line Coverage", "sec_num": null }, { "text": "We conducted an indistinguishability study with a selection of automatically generated poetry and human poetry. As extrinsic evaluations are expensive and the phonetic model was unlikely to do well (as illustrated in Figure 4e : the model generates good Iambic form, but not very good English), we only evaluate on the constrained characterlevel model. 
Poetry was generated with a variety of themes and poetic devices (see supplementary material).", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 226, "text": "Figure 4e", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Extrinsic evaluation", "sec_num": "5.2" }, { "text": "The aim of the study was to determine whether participants could distinguish between human and machine-generated poetry, and if so to what extent. A set of 70 participants (of whom 61 were English native speakers) were each shown a selection of randomly chosen poetry segments, and were invited to classify them as either human or generated. Participants were recruited from friends and people within poetry communities within the University of Cambridge, with an age range of 17 to 80, and a mean age of 29. Our participants were not financially incentivised, perceiving the evaluation as an intellectual challenge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extrinsic evaluation", "sec_num": "5.2" }, { "text": "In addition to the classification task, each participant was also invited to rate each poem on a 1-5 scale with respect to three criteria, namely readability, form and evocation (how much emotion did a poem elicit). We naively consider the overall quality of a poem to be the mean of these three measures. We used a custom web-based environment, built specifically for this evaluation 6 , which is illustrated in Figure 5 . Based on human judgments, we can determine whether the models presented in this work can produce poetry of a similar quality to humans.", "cite_spans": [], "ref_spans": [ { "start": 413, "end": 421, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Extrinsic evaluation", "sec_num": "5.2" }, { "text": "To select appropriate human poetry that could be meaningfully compared with the machinegenerated poetry, we performed a comprehension test on all poems used in the evaluation, using the Dale-Chall readability formula (Dale and Chall, 1948) . This formula represents readability as a function of the complexity of the input words. We selected nine machine-generated poems with a high readability score. The generated poems produced an average score of 7.11, indicating that readers over 15 years of age should easily be able to comprehend them.", "cite_spans": [ { "start": 217, "end": 239, "text": "(Dale and Chall, 1948)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Extrinsic evaluation", "sec_num": "5.2" }, { "text": "For our human poems, we focused explicitly on poetry where greater consideration is placed on (a) The crow crooked on more beautiful and free, He journeyed off into the quarter sea. his radiant ribs girdled empty and veryleast beautiful as dignified to see.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extrinsic evaluation", "sec_num": "5.2" }, { "text": "Man with the broken blood blue glass and gold. Cheap chatter chants to be a lover do.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(c)", "sec_num": null }, { "text": "(e) The son still streams and strength and spirit. The ridden souls of which the fills of.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(c)", "sec_num": null }, { "text": "(b) Is that people like things (are the way we to figure it out) and I thought of you reading and then is your show or you know we will finish along will you play. prosodic elements like rhythm and rhyme than semantic content (known as \"nonsense verse\"). 
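For reference, the Dale-Chall screening used above can be sketched as follows; the familiar-word list is passed in rather than bundled here, and the constants are those of the published 1948 formula.

```python
import re

def dale_chall(text, familiar_words):
    """Dale-Chall (1948) readability: 0.1579 * %difficult words + 0.0496 *
    average sentence length, plus 3.6365 when more than 5% of words are
    difficult. `familiar_words` is a set drawn from the Dale-Chall list."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words or not sentences:
        return 0.0
    difficult = sum(1 for w in words if w not in familiar_words)
    pct_difficult = 100.0 * difficult / len(words)
    score = 0.1579 * pct_difficult + 0.0496 * (len(words) / len(sentences))
    if pct_difficult > 5.0:
        score += 3.6365
    return score

familiar = {"the", "sea", "and", "free", "he", "more", "into", "off"}  # toy stand-in list
print(dale_chall("He journeyed off into the quarter sea.", familiar))  # ~8.5 with this list
```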
We randomly selected 30 poems belonging to that category from the website poetrysoup.com, of which eight were selected for the final comparison based on their comparable readability score. The selected poems were segmented into passages of between four and six lines, to match the length of the generated poetry segments. An example of such a segment is shown in Figure 4d . The human poems had an average score of 7.52, requiring a similar level of English aptitude to the generated texts.", "cite_spans": [], "ref_spans": [ { "start": 618, "end": 627, "text": "Figure 4d", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "(c)", "sec_num": null }, { "text": "The performance of each human poem, alongside the aggregated scores of the generated poems, is illustrated in Table 3 . For the human poems, our group of participants guessed correctly that they were human 51.4% of the time. For the generated poems, our participants guessed correctly 46.2% of the time that they were machine generated. To determine whether our results were statistically significant, we performed a Chi 2 test. This resulted in a p-value of 0.718. This indicates that our participants were unable to tell the difference between human and generated poetry in any significant way. Although our participants generally considered the human poems to be of marginally higher quality than our generated poetry, they were unable to effectively distinguish between them. Interestingly, our results seem to suggest that our participants consider the generated poems to be more 'human-like' than those actually written by humans. In addition, the poem with the highest overall quality rating is a machine generated one. This shows that our approach was effective at generating high-quality rhythmic verse.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "(c)", "sec_num": null }, { "text": "It should be noted that the poems that were most 'human-like' and most aesthetic respectively were generated by the neural character model. Generally the set of poetry produced by the neural character model was slightly less readable and emotive than the human poetry, but had above average form. All generated poems included in this evaluation can be found in the supplementary material, and our code is made available online 7 . Table 3 : Proportion of people classifying each poem as 'human', as well as the relative qualitative scores of each poem as deviations from the mean.", "cite_spans": [], "ref_spans": [ { "start": 431, "end": 438, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "(c)", "sec_num": null }, { "text": "Our contributions are twofold. First, we developed a neural language model trained on a phonetic transliteration of poetic form and content. Although example output looked promising, this model was limited by its inability to generalise to novel forms of verse. We then proposed a more robust model trained on unformed poetic text, whose output form is constrained at sample time. This approach offers greater control over the style of the generated poetry than the earlier method, and facilitates themes and poetic devices. An indistinguishability test, where participants were asked to classify a randomly selected set of human \"nonsense verse\" and machine-generated poetry, showed generated poetry to be indistinguishable from that written by humans. 
In addition, the poems that were deemed most 'humanlike' and most aesthetic were both machinegenerated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "In future work, it would be useful to investigate models based on morphemes, rather than characters, which offers potentially superior performance for complex and rare words (Luong et al., 2013) , which are common in poetry.", "cite_spans": [ { "start": 174, "end": 194, "text": "(Luong et al., 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Implemented using FreeTTS(Walker et al., 2010) 2 An evaluation of models in American English, British English, German and French was undertaken byBlack et al. (1998), who reported an externally validated per token accuracy on British English as low as 67%. Although no experiments were carried out on corpora of early-modern English, it is likely that this accuracy would be significantly lower.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For example, the independent probability of stressing the single syllable word at is 40%, but this increases to 91% when the following word is the(Greene et al., 2010)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Words of more than 5 syllables comprise less than 0.1% of the lexicon(Aoyama and Constable, 1998).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Obviously, calculating this value for the character-level model makes no sense, since no transliteration occurs in that case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://neuralpoetry.getforge.io/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/JackHopkins/ACLPoetry", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word length frequency and distribution in english: Observations, theory, and implications for the construction of verse lines", "authors": [ { "first": "Hideaki", "middle": [], "last": "Aoyama", "suffix": "" }, { "first": "John", "middle": [], "last": "Constable", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hideaki Aoyama and John Constable. 1998. Word length frequency and distribution in english: Obser- vations, theory, and implications for the construction of verse lines. arXiv preprint cmp-lg/9808004 .", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Markov constraints for generating lyrics with style", "authors": [ { "first": "Gabriele", "middle": [], "last": "Barbieri", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Pachet", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Mirko Degli", "middle": [], "last": "Esposti", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 20th European Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "115--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabriele Barbieri, Fran\u00e7ois Pachet, Pierre Roy, and Mirko Degli Esposti. 2012. Markov constraints for generating lyrics with style. 
In Proceedings of the 20th European Conference on Artificial Intelligence. IOS Press, pages 115-120.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural probabilistic language models", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Jean-S\u00e9bastien", "middle": [], "last": "Sen\u00e9cal", "suffix": "" }, { "first": "Fr\u00e9deric", "middle": [], "last": "Morin", "suffix": "" }, { "first": "Jean-Luc", "middle": [], "last": "Gauvain", "suffix": "" } ], "year": 2006, "venue": "Innovations in Machine Learning", "volume": "", "issue": "", "pages": "137--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Holger Schwenk, Jean-S\u00e9bastien Sen\u00e9cal, Fr\u00e9deric Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, Springer, pages 137-186.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Issues in building general letter to sound rules", "authors": [ { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Lenzo", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Pagel", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan W Black, Kevin Lenzo, and Vincent Pagel. 1998. Issues in building general letter to sound rules .", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. Proceedings of ICLR .", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Full face poetry generation", "authors": [ { "first": "Simon", "middle": [], "last": "Colton", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Goodwin", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Third International Conference on Computational Creativity", "volume": "", "issue": "", "pages": "95--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Colton, Jacob Goodwin, and Tony Veale. 2012. Full face poetry generation. In Proceedings of the Third International Conference on Computational Creativity. 
pages 95-102.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An exact a* method for deciphering letter-substitution ciphers", "authors": [ { "first": "Eric", "middle": [], "last": "Corlett", "suffix": "" }, { "first": "Gerald", "middle": [], "last": "Penn", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1040--1047", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Corlett and Gerald Penn. 2010. An exact a* method for deciphering letter-substitution ciphers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Associ- ation for Computational Linguistics, pages 1040- 1047.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A formula for predicting readability: Instructions. Educational research bulletin pages", "authors": [ { "first": "Edgar", "middle": [], "last": "Dale", "suffix": "" }, { "first": "Jeanne", "middle": [ "S" ], "last": "Chall", "suffix": "" } ], "year": 1948, "venue": "", "volume": "", "issue": "", "pages": "37--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edgar Dale and Jeanne S Chall. 1948. A formula for predicting readability: Instructions. Educational re- search bulletin pages 37-54.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Wasp: Evaluation of different strategies for the automatic generation of spanish verse", "authors": [ { "first": "Pablo", "middle": [], "last": "Gerv\u00e1s", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the AISB-00 Symposium on Creative & Cultural Aspects of AI", "volume": "", "issue": "", "pages": "93--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pablo Gerv\u00e1s. 2000. Wasp: Evaluation of different strategies for the automatic generation of spanish verse. In Proceedings of the AISB-00 Symposium on Creative & Cultural Aspects of AI. pages 93-100.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Generating topical poetry", "authors": [ { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1183--1191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marjan Ghazvininejad, Xing Shi, Yejin Choi, and Kevin Knight. 2016. Generating topical poetry. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing. pages 1183-1191.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Natural language generation through character-based rnns with finite-state prior knowledge", "authors": [ { "first": "Raghav", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Dymetman", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Gaussier", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1083--1092", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raghav Goyal, Marc Dymetman, and Eric Gaussier. 2016. 
Natural language generation through character-based rnns with finite-state prior knowl- edge. In Proceedings of COLING 2016, the 26th In- ternational Conference on Computational Linguis- tics: Technical Papers. Osaka, Japan, pages 1083- 1092.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Automatic analysis of rhythmic poetry with applications to generation and translation", "authors": [ { "first": "Erica", "middle": [], "last": "Greene", "suffix": "" }, { "first": "Tugba", "middle": [], "last": "Bodrumlu", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "524--533", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erica Greene, Tugba Bodrumlu, and Kevin Knight. 2010. Automatic analysis of rhythmic poetry with applications to generation and translation. In Pro- ceedings of the 2010 conference on empirical meth- ods in natural language processing. Association for Computational Linguistics, pages 524-533.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Character-aware neural language models", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "David", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "Alexander M", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.06615" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M Rush. 2015. Character-aware neural language models. arXiv preprint arXiv:1508.06615 .", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised analysis for decipherment problems", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Anish", "middle": [], "last": "Nair", "suffix": "" }, { "first": "Nishit", "middle": [], "last": "Rathod", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COL-ING/ACL on Main conference poster sessions. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "499--506", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for deci- pherment problems. In Proceedings of the COL- ING/ACL on Main conference poster sessions. 
As- sociation for Computational Linguistics, pages 499- 506.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "N-gram similarity and distance", "authors": [ { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2005, "venue": "String processing and information retrieval", "volume": "", "issue": "", "pages": "115--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grzegorz Kondrak. 2005. N-gram similarity and dis- tance. In String processing and information re- trieval. Springer, pages 115-126.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Better word representations with recursive neural networks for morphology", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "CoNLL", "volume": "", "issue": "", "pages": "104--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL. pages 104-113.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Towards a computational model of poetry generation", "authors": [ { "first": "Hisar", "middle": [], "last": "Manurung", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Ritchie", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hisar Manurung, Graeme Ritchie, and Henry Thomp- son. 2000. Towards a computational model of po- etry generation. Technical report, The University of Edinburgh.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Process based evaluation of computer generated poetry", "authors": [ { "first": "Stephen", "middle": [], "last": "Mcgregor", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Purver", "suffix": "" } ], "year": 2016, "venue": "The INLG 2016 Workshop on Computational Creativity in Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen McGregor, Matthew Purver, and Geraint Wig- gins. 2016. Process based evaluation of computer generated poetry. In The INLG 2016 Workshop on Computational Creativity in Natural Language Gen- eration. page 51.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. 
arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Recurrent neural network based language model", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Cernock\u00fd", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2010, "venue": "INTERSPEECH", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH. volume 2, page 3.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Poetry generation system with an emotional personality", "authors": [ { "first": "Joanna", "middle": [], "last": "Misztal", "suffix": "" }, { "first": "Bipin", "middle": [], "last": "Indurkhya", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Fourth International Conference on Computational Creativity", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joanna Misztal and Bipin Indurkhya. 2014. Poetry generation system with an emotional personality. In Proceedings of the Fourth International Conference on Computational Creativity.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Performance tradeoffs in dynamic time warping algorithms for isolated word recognition", "authors": [ { "first": "Cory", "middle": [], "last": "Myers", "suffix": "" }, { "first": "Lawrence", "middle": [ "R" ], "last": "Rabiner", "suffix": "" }, { "first": "Aaron", "middle": [ "E" ], "last": "Rosenberg", "suffix": "" } ], "year": 1980, "venue": "IEEE Transactions on Acoustics, Speech and Signal Processing", "volume": "28", "issue": "6", "pages": "623--635", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cory Myers, Lawrence R Rabiner, and Aaron E Rosenberg. 1980. Performance tradeoffs in dynamic time warping algorithms for isolated word recognition. Acoustics, Speech and Signal Processing, IEEE Transactions on 28(6):623-635.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Gaiku: Generating haiku with word associations norms", "authors": [ { "first": "Yael", "middle": [], "last": "Netzer", "suffix": "" }, { "first": "David", "middle": [], "last": "Gabay", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Computational Approaches to Linguistic Creativity. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "32--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yael Netzer, David Gabay, Yoav Goldberg, and Michael Elhadad. 2009. Gaiku: Generating haiku with word associations norms. In Proceedings of the Workshop on Computational Approaches to Linguistic Creativity.
Association for Computational Linguistics, pages 32-39.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Learning phoneme mappings for transliteration without parallel data", "authors": [ { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2009, "venue": "The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "37--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sujith Ravi and Kevin Knight. 2009. Learning phoneme mappings for transliteration without parallel data. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 37-45.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Training neural network language models on very large corpora", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Jean-Luc", "middle": [], "last": "Gauvain", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "201--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk and Jean-Luc Gauvain. 2005. Training neural network language models on very large corpora. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 201-208.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Generating text with recurrent neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "James", "middle": [], "last": "Martens", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11)", "volume": "", "issue": "", "pages": "1017--1024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). pages 1017-1024.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems.
pages 3104-3112.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Harnessing constraint programming for poetry composition", "authors": [ { "first": "Jukka", "middle": [ "M" ], "last": "Toivanen", "suffix": "" }, { "first": "Matti", "middle": [], "last": "J\u00e4rvisalo", "suffix": "" }, { "first": "Hannu", "middle": [], "last": "Toivonen", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Fourth International Conference on Computational Creativity", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jukka M Toivanen, Matti J\u00e4rvisalo, Hannu Toivonen, et al. 2013. Harnessing constraint programming for poetry composition. In Proceedings of the Fourth International Conference on Computational Creativity. page 160.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Freetts 1.2: A speech synthesizer written entirely in the java programming language", "authors": [ { "first": "Willie", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Lamere", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Kwok", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Willie Walker, Paul Lamere, and Philip Kwok. 2010. Freetts 1.2: A speech synthesizer written entirely in the java programming language.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The carnegie mellon pronouncing dictionary", "authors": [ { "first": "R", "middle": [], "last": "Weide", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R Weide. 2005. The carnegie mellon pronouncing dictionary [cmudict 0.6].", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Backpropagation through time: what it does and how to do it", "authors": [ { "first": "Paul", "middle": [ "J" ], "last": "Werbos", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the IEEE", "volume": "78", "issue": "10", "pages": "1550--1560", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul J Werbos. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78(10):1550-1560.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Generating chinese classical poems with rnn encoder-decoder", "authors": [ { "first": "Xiaoyuan", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Ruoyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1604.01537" ] }, "num": null, "urls": [], "raw_text": "Xiaoyuan Yi, Ruoyu Li, and Maosong Sun. 2016. Generating chinese classical poems with rnn encoder-decoder.
arXiv preprint arXiv:1604.01537.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "And humble and their fit flees are wits size but that one made and made thy step me lies ------------- Cool light the golden dark in any way the birds a shade a laughter turn away ------------- Then adding wastes retreating white as thine She watched what eyes are breathing awe what shine ------------- But sometimes shines so covered how the beak Alone in pleasant skies no more to seek Example output of the phonetic-level model trained on Iambic Pentameter poetry (grammatical errors are emphasised)." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": ": A set of states \u03a3 : An input alphabet of symbols \u03c1 : An output alphabet of symbols I : A set of initial states F : A set of final states, or sinks \u2206 : A transition function mapping pairs of states and symbols to sets of states \u03bb : A set of weights for initial states P : A set of weights for final states" }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Permissible variations of Iambic Pentameter in Shakespeare's sonnets." }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "be somebody, How public like a frog To tell one's name the livelong day To an admiring bog." }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "Examples of automatically generated and human generated poetry. (a) Character-level model - Strict rhythm regularisation - Iambic - No Theme. (b) Character-level model - Strict rhythm regularisation - Anapest. (c) Character-level model - Boosted alliteration/assonance. (d) Emily Dickinson - I'm nobody, who are you? (e) Phonetic-level model - Nonsensical Iambic lines." }, "FIGREF5": { "type_str": "figure", "uris": null, "num": null, "text": "The experimental environment for asking participants to distinguish between automatically generated and human poetry." } } } }