{ "paper_id": "P11-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:48:56.203308Z" }, "title": "Bayesian Inference for Zodiac and Other Homophonic Ciphers", "authors": [ { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California Information Sciences Institute Marina del Rey", "location": { "postCode": "90292", "region": "California" } }, "email": "sravi@isi.edu" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California Information Sciences Institute Marina del Rey", "location": { "postCode": "90292", "region": "California" } }, "email": "knight@isi.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce a novel Bayesian approach for deciphering complex substitution ciphers. Our method uses a decipherment model which combines information from letter n-gram language models as well as word dictionaries. Bayesian inference is performed on our model using an efficient sampling technique. We evaluate the quality of the Bayesian decipherment output on simple and homophonic letter substitution ciphers and show that unlike a previous approach, our method consistently produces almost 100% accurate decipherments. The new method can be applied on more complex substitution ciphers and we demonstrate its utility by cracking the famous Zodiac-408 cipher in a fully automated fashion, which has never been done before.", "pdf_parse": { "paper_id": "P11-1025", "_pdf_hash": "", "abstract": [ { "text": "We introduce a novel Bayesian approach for deciphering complex substitution ciphers. Our method uses a decipherment model which combines information from letter n-gram language models as well as word dictionaries. Bayesian inference is performed on our model using an efficient sampling technique. 
We evaluate the quality of the Bayesian decipherment output on simple and homophonic letter substitution ciphers and show that unlike a previous approach, our method consistently produces almost 100% accurate decipherments. The new method can be applied on more complex substitution ciphers and we demonstrate its utility by cracking the famous Zodiac-408 cipher in a fully automated fashion, which has never been done before.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Substitution ciphers have been used widely in the past to encrypt secrets behind messages. These ciphers replace (English) plaintext letters with cipher symbols in order to generate the ciphertext sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There exist many published works on automatic decipherment methods for solving simple letter-substitution ciphers. Many existing methods use dictionary-based attacks employing huge word dictionaries to find plaintext patterns within the ciphertext (Peleg and Rosenfeld, 1979; Ganesan and Sherman, 1993; Jakobsen, 1995; Olson, 2007) . Most of these methods are heuristic in nature and search for the best deterministic key during decipherment. Others follow a probabilistic decipherment approach. Knight et al. (2006) use the Expectation Maximization (EM) algorithm (Dempster et al., 1977) to search for the best probabilistic key using letter n-gram models. Ravi and Knight (2008) formulate decipherment as an integer programming problem and provide an exact method to solve simple substitution ciphers by using letter n-gram models along with deterministic key constraints. Corlett and Penn (2010) work with large ciphertexts containing thousands of characters and provide another exact decipherment method using an A* search algorithm. 
Diaconis (2008) presents an analysis of Markov Chain Monte Carlo (MCMC) sampling algorithms and shows an example application for solving simple substitution ciphers.", "cite_spans": [ { "start": 247, "end": 274, "text": "(Peleg and Rosenfeld, 1979;", "ref_id": "BIBREF14" }, { "start": 275, "end": 301, "text": "Ganesan and Sherman, 1993;", "ref_id": "BIBREF6" }, { "start": 302, "end": 317, "text": "Jakobsen, 1995;", "ref_id": "BIBREF9" }, { "start": 318, "end": 330, "text": "Olson, 2007)", "ref_id": "BIBREF12" }, { "start": 496, "end": 516, "text": "Knight et al. (2006)", "ref_id": "BIBREF10" }, { "start": 565, "end": 588, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF3" }, { "start": 658, "end": 680, "text": "Ravi and Knight (2008)", "ref_id": "BIBREF15" }, { "start": 875, "end": 898, "text": "Corlett and Penn (2010)", "ref_id": "BIBREF2" }, { "start": 1038, "end": 1053, "text": "Diaconis (2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most work in this area has focused on solving simple substitution ciphers. But there are variants of substitution ciphers, such as homophonic ciphers, which display increasing levels of difficulty and present significant challenges for decipherment. The famous Zodiac serial killer used one such cipher system for communication. In 1969, the killer sent a three-part cipher message to newspapers claiming credit for recent shootings and crimes committed near the San Francisco area. The 408-character message (Zodiac-408) was decoded by hand in the 1960s. Oranchak (2008) presents a method for solving the Zodiac-408 cipher automatically with a dictionary-based attack using a genetic algorithm. 
However, his method relies on using plaintext words from the known solution to solve the cipher, which departs from a strict decipherment scenario.", "cite_spans": [ { "start": 566, "end": 581, "text": "Oranchak (2008)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce a novel method for solving substitution ciphers using Bayesian learning. Our novel contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We present a new probabilistic decipherment approach using Bayesian inference with sparse priors, which can be used to solve different types of substitution ciphers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Our new method combines information from word dictionaries along with letter n-gram models, providing a robust decipherment model which offsets the disadvantages faced by previous approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We evaluate the Bayesian decipherment output on three different types of substitution ciphers and show that unlike a previous approach, our new method solves all the ciphers completely.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Using the Bayesian decipherment, we show for the first time a truly automated system that successfully solves the Zodiac-408 cipher.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use natural language processing techniques to attack letter substitution ciphers. 
In a letter substitution cipher, every letter p in the natural language (plaintext) sequence is replaced by a cipher token c, according to some substitution key.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Letter Substitution Ciphers", "sec_num": "2" }, { "text": "For example, an English plaintext \"H E L L O W O R L D ...\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Letter Substitution Ciphers", "sec_num": "2" }, { "text": "may be enciphered as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Letter Substitution Ciphers", "sec_num": "2" }, { "text": "\"N O E E I T I M E L ...\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Letter Substitution Ciphers", "sec_num": "2" }, { "text": "according to the key:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Letter Substitution Ciphers", "sec_num": "2" }, { "text": "p: ABCDEFGHIJKLMNOPQRSTUVWXYZ c: XYZLOHANBCDEFGIJKMPQRSTUVW", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Letter Substitution Ciphers", "sec_num": "2" }, { "text": "where \" \" represents the space character (word boundary) in the English and ciphertext messages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Letter Substitution Ciphers", "sec_num": "2" }, { "text": "If the recipients of the ciphertext message have the substitution key, they can use it (in reverse) to recover the original plaintext. The sender can encrypt the message using one of many different cipher systems. The particular type of cipher system chosen determines the properties of the key. For example, the substitution key can be deterministic in both the encipherment and decipherment directions as shown in the above example-i.e., there is a 1-to-1 correspondence between the plaintext letters and ciphertext symbols. 
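The worked example above is mechanical enough to sketch in a few lines of Python; the key strings are the ones given in the text, while the function names are ours:

```python
# Simple substitution cipher: a 1-to-1 key, deterministic in both
# directions. The key is the example key from the text; the space
# character maps to itself (word boundaries are preserved).
PLAIN = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
CIPHER = "XYZLOHANBCDEFGIJKMPQRSTUVW"

ENCIPHER_KEY = dict(zip(PLAIN, CIPHER))
DECIPHER_KEY = dict(zip(CIPHER, PLAIN))  # the deterministic inverse

def encipher(plaintext: str) -> str:
    return "".join(ENCIPHER_KEY.get(ch, ch) for ch in plaintext)

def decipher(ciphertext: str) -> str:
    return "".join(DECIPHER_KEY.get(ch, ch) for ch in ciphertext)

print(encipher("HELLO WORLD"))  # NOEEI TIMEL
print(decipher("NOEEI TIMEL"))  # HELLO WORLD
```

Because the key is 1-to-1, the inverse dictionary is itself a valid key, which is exactly the determinism in both directions that this section describes.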
Other types of keys exhibit non-determinism in the encipherment direction, the decipherment direction, or both.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Letter Substitution Ciphers", "sec_num": "2" }, { "text": "The key used in a simple substitution cipher is deterministic in both the encipherment and decipherment directions, i.e., there is a 1-to-1 mapping between plaintext letters and ciphertext symbols. The example shown earlier depicts how a simple substitution cipher works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simple Substitution Ciphers", "sec_num": "2.1" }, { "text": "Data: In our experiments, we work with a 414-letter simple substitution cipher. We encrypt an original English plaintext message using a randomly generated simple substitution key to create the ciphertext. During the encipherment process, we preserve spaces between words and use this information for decipherment-i.e., plaintext character \" \" maps to ciphertext character \" \". Figure 1 (top) shows a portion of the ciphertext along with the original plaintext used to create the cipher.", "cite_spans": [], "ref_spans": [ { "start": 378, "end": 386, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Simple Substitution Ciphers", "sec_num": "2.1" }, { "text": "A homophonic cipher uses a substitution key that maps a plaintext letter to more than one cipher symbol.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers", "sec_num": "2.2" }, { "text": "For example, the English plaintext \"H E L L O W O R L D ...\" may be enciphered as a sequence of numeric cipher symbols. Here, \" \" represents the space character in both English and ciphertext. 
Notice the non-determinism involved in the enciphering direction-the English letter \"L\" is substituted using different symbols (51, 84) at different positions in the ciphertext.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers", "sec_num": "2.2" }, { "text": "\"H E L L O W O R L D ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers", "sec_num": "2.2" }, { "text": "These ciphers are more complex than simple substitution ciphers. Homophonic ciphers are generated via a non-deterministic encipherment process-the key is 1-to-many in the enciphering direction. The number of potential cipher symbol substitutes for a particular plaintext letter is often proportional to the frequency of that letter in the plaintext language; for example, the English letter \"E\" is assigned more cipher symbols than \"Z\". The objective is to flatten out the frequency distribution of ciphertext symbols, making a frequency-based cryptanalysis attack difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers", "sec_num": "2.2" }, { "text": "The substitution key is, however, deterministic in the decipherment direction-each ciphertext symbol maps to a single plaintext letter. 
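This 1-to-many/1-to-1 asymmetry is easy to see in code. In the minimal sketch below, the numeric symbol assignments are invented for illustration (they are not the key used in the experiments): frequent letters own more substitutes, encipherment picks among them at random, and decipherment is a deterministic reverse lookup:

```python
import random

# Homophonic substitution (sketch). The symbol sets below are made up
# for illustration -- they are NOT the actual experimental key.
# Frequent letters (like E) own more cipher symbols than rare ones.
HOMOPHONIC_KEY = {
    "E": [17, 19, 23, 40],
    "L": [51, 84],          # "L" can surface as 51 or 84, as in the text
    "H": [10], "O": [31, 62], "W": [73], "R": [55], "D": [8], " ": [0],
}

# Decipherment is deterministic: every symbol belongs to exactly one letter.
DECIPHER_KEY = {sym: letter
                for letter, syms in HOMOPHONIC_KEY.items()
                for sym in syms}

def encipher(plaintext):
    # Non-deterministic: choose one of the available symbols per letter.
    return [random.choice(HOMOPHONIC_KEY[ch]) for ch in plaintext]

def decipher(symbols):
    return "".join(DECIPHER_KEY[s] for s in symbols)

# Round-trips exactly, no matter which symbols encipher() happened to pick.
print(decipher(encipher("HELLO WORLD")))  # HELLO WORLD
```
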
Since the ciphertext can contain more than 26 symbol types, we need a larger alphabet system-we use a numeric substitution alphabet in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers", "sec_num": "2.2" }, { "text": "Data: For our decipherment experiments on homophonic ciphers, we use the same 414-letter English plaintext used in Section 2.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers", "sec_num": "2.2" }, { "text": "We encrypt this message using a homophonic substitution key (available from http://www.simonsingh.net/The Black Chamber/homophoniccipher.htm).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers", "sec_num": "2.2" }, { "text": "As before, we preserve spaces between words in the ciphertext. Figure 1 (middle) displays a section of the homophonic cipher (with spaces) and the original plaintext message used in our experiments.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 71, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Homophonic Ciphers", "sec_num": "2.2" }, { "text": "(Zodiac-408 cipher)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers without spaces", "sec_num": "2.3" }, { "text": "In the previous two cipher systems, the word-boundary information was preserved in the cipher. We now consider a more difficult homophonic cipher by removing space characters from the original plaintext. The English plaintext from the previous example now looks like this: \"HELLOWORLD ...\", and the corresponding ciphertext is likewise a symbol sequence containing no space symbols. Without the word-boundary information, typical dictionary-based decipherment attacks fail on such ciphers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers without spaces", "sec_num": "2.3" }, { "text": "Zodiac-408 cipher: Homophonic ciphers without spaces have been used extensively in the past to encrypt secret messages. 
One of the most famous homophonic ciphers in history was used by the infamous Zodiac serial killer in the 1960s. The killer sent a series of encrypted messages to newspapers and claimed that solving the ciphers would reveal clues to his identity. The identity of the Zodiac killer remains unknown to date. However, the mystery surrounding this has sparked much interest among cryptanalysis experts and amateur enthusiasts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers without spaces", "sec_num": "2.3" }, { "text": "The Zodiac messages include two interesting ciphers: (1) a 408-symbol homophonic cipher without spaces (which was solved by hand), and (2) a similar-looking 340-symbol cipher that has yet to be solved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers without spaces", "sec_num": "2.3" }, { "text": "Here is a sample of the Zodiac-408 cipher message:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers without spaces", "sec_num": "2.3" }, { "text": "... and the corresponding section from the original English plaintext message:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers without spaces", "sec_num": "2.3" }, { "text": "I L I K E K I L L I N G P E O P L E B E C A U S E I T I S S O M U C H F U N I T I S M O R E F U N T H A N K I L L I N G W I L D G A M E I N T H E F O R R E S T B E C A U S E M A N I S T H E M O S T D A N G E R O U E A N A M A L O F A L L T O K I L L S O M E T H I N G G I ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers without spaces", "sec_num": "2.3" }, { "text": "Besides the difficulty with missing word boundaries and non-determinism associated with the key, the Zodiac-408 cipher poses several additional challenges which make it harder to solve than any standard homophonic cipher. 
There are spelling mistakes in the original message (for example, the English word \"PARADISE\" is misspelt as \"PARADICE\") which can divert a dictionary-based attack. Also, the last 18 characters of the plaintext message do not seem to make any sense (\"EBE-ORIETEMETHHPITI\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophonic Ciphers without spaces", "sec_num": "2.3" }, { "text": "Data: Figure 1 (bottom) displays the Zodiac-408 cipher (consisting of 408 tokens, 54 symbol types) along with the original plaintext message. We run the new decipherment method (described in Section 3.1) and show that our approach can successfully solve the Zodiac-408 cipher.", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 14, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Homophonic Ciphers without spaces", "sec_num": "2.3" }, { "text": "Given a ciphertext message c_1...c_n, the goal of decipherment is to uncover the hidden plaintext message p_1...p_n. The size of the keyspace (i.e., number of possible key mappings) that we have to navigate during decipherment is huge-a simple substitution cipher has a keyspace size of 26!, whereas a homophonic cipher such as the Zodiac-408 cipher has 26^54 possible key mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decipherment", "sec_num": "3" }, { "text": "Next, we describe a new Bayesian decipherment approach for tackling substitution ciphers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decipherment", "sec_num": "3" }, { "text": "Bayesian inference methods have become popular in natural language processing (Goldwater and Griffiths, 2007; Finkel et al., 2005; Blunsom et al., 2009; Chiang et al., 2010) . Snyder et al. (2010) proposed a Bayesian approach in an archaeological decipherment scenario. 
These methods are attractive for their ability to manage uncertainty about model parameters and allow one to incorporate prior knowledge during inference. A common phenomenon observed while modeling natural language problems is sparsity. For simple letter substitution ciphers, the original substitution key exhibits a 1-to-1 correspondence between the plaintext letters and cipher types. It is not easy to model such information using conventional methods like EM. But we can easily specify priors that favor sparse distributions within the Bayesian framework.", "cite_spans": [ { "start": 78, "end": 109, "text": "(Goldwater and Griffiths, 2007;", "ref_id": "BIBREF8" }, { "start": 110, "end": 130, "text": "Finkel et al., 2005;", "ref_id": "BIBREF5" }, { "start": 131, "end": 152, "text": "Blunsom et al., 2009;", "ref_id": "BIBREF0" }, { "start": 153, "end": 173, "text": "Chiang et al., 2010)", "ref_id": "BIBREF1" }, { "start": 176, "end": 196, "text": "Snyder et al. (2010)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "Here, we propose a novel approach for deciphering substitution ciphers using Bayesian inference. Rather than enumerating all possible keys (26! for a simple substitution cipher), our Bayesian framework requires us to sample only a small number of keys during the decipherment process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "Probabilistic Decipherment: Our decipherment method follows a noisy-channel approach. 
We are faced with a ciphertext sequence c = c_1...c_n and we want to find the (English) letter sequence p = p_1...p_n that maximizes the probability P(p|c).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "We first formulate a generative story to model the process by which the ciphertext sequence is generated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "1. Generate an English plaintext sequence p = p_1...p_n, with probability P(p).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "2. Substitute each plaintext letter p_i with a ciphertext token c_i, with probability P(c_i|p_i) in order to generate the ciphertext sequence c = c_1...c_n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "We build a statistical English language model (LM) for the plaintext source model P(p), which assigns a probability to any English letter sequence. Our goal is to estimate the channel model parameters \u03b8 in order to maximize the probability of the observed ciphertext c:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "arg max_\u03b8 P(c) = arg max_\u03b8 \u2211_p P_\u03b8(p, c) (1) = arg max_\u03b8 \u2211_p P(p) \u2022 P_\u03b8(c|p) (2) = arg max_\u03b8 \u2211_p P(p) \u2022 \u220f_{i=1}^{n} P_\u03b8(c_i|p_i) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "We estimate the parameters \u03b8 using Bayesian learning. In our decipherment framework, a Chinese Restaurant Process formulation is used to model both the source and channel. 
The detailed generative story using CRPs is shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "1. i \u2190 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "2. Generate the English plaintext letter p_1, with probability P_0(p_1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "3. Substitute p_1 with cipher token c_1, with probability P_0(c_1|p_1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "4. i \u2190 i + 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "5. Generate English plaintext letter p_i, with probability (\u03b1 \u2022 P_0(p_i|p_{i-1}) + C_1^{i-1}(p_{i-1}, p_i)) / (\u03b1 + C_1^{i-1}(p_{i-1}))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "6. Substitute p_i with cipher token c_i, with probability (\u03b2 \u2022 P_0(c_i|p_i) + C_1^{i-1}(p_i, c_i)) / (\u03b2 + C_1^{i-1}(p_i))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "7. With probability P_quit, quit; else go to Step 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "[Figure 1 (excerpts): ciphertext, original plaintext, and Bayesian decipherment output for each cipher. The 414-letter ciphers encode the plaintext \"DECIPHERMENT IS THE ANALYSIS OF DOCUMENTS WRITTEN IN ANCIENT LANGUAGES WHERE THE ...\", which Bayesian decipherment recovers exactly; the Zodiac-408 decoding closely follows the killer's text, with a few residual letter errors (e.g., \"MOAT DANGERTUE ANAMAL\" for the original \"MOST DANGEROUE ANAMAL\").]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "This defines the probability of any given derivation, i.e., any plaintext hypothesis corresponding to the given ciphertext sequence. The base distribution P_0 represents prior knowledge about the model parameter distributions. For the plaintext source model, we use probabilities from an English language model and for the channel model, we specify a uniform distribution (i.e., a plaintext letter can be substituted with any given cipher type with equal probability). C_1^{i-1} represents the count of events occurring before plaintext letter p_i in the derivation (we call this the \"cache\"). \u03b1 and \u03b2 represent Dirichlet prior hyperparameters over the source and channel models respectively. 
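Steps 5 and 6 share one functional form: base-distribution mass scaled by the prior, mixed with cache counts. A toy numeric sketch of the source-side expression (the uniform base distribution here is a stand-in for the English LM the paper plugs in):

```python
from collections import Counter

# CRP-style cached probability from Step 5 of the generative story:
#   P(p_i | p_{i-1}) = (alpha * P0(p_i | p_{i-1}) + C(p_{i-1}, p_i))
#                      / (alpha + C(p_{i-1}))
# where the C counts come from events generated earlier in the
# derivation (the "cache").

def crp_prob(prev, cur, p0, bigram_cache, unigram_cache, alpha):
    numerator = alpha * p0(cur, prev) + bigram_cache[(prev, cur)]
    denominator = alpha + unigram_cache[prev]
    return numerator / denominator

# Toy base distribution: uniform over a 3-letter alphabet (an assumption
# for this sketch, not the paper's actual LM).
p0 = lambda cur, prev: 1.0 / 3

bigrams, unigrams = Counter(), Counter()
print(crp_prob("A", "B", p0, bigrams, unigrams, alpha=10.0))  # empty cache: exactly P0 = 1/3

bigrams[("A", "B")] += 5
unigrams["A"] += 5
print(crp_prob("A", "B", p0, bigrams, unigrams, alpha=10.0))  # cache pulls the estimate above P0
```

With a large alpha the ratio stays close to P_0; with a small alpha the cached counts dominate, favoring re-use of previous decisions.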
A large prior value implies that characters are generated from the base distribution P_0, whereas a smaller value biases characters to be generated with reference to previous decisions inside the cache (favoring sparser distributions).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "Efficient inference via type sampling: We use a Gibbs sampling (Geman and Geman, 1984) method for performing inference on our model. We could follow a point-wise sampling strategy, where we sample plaintext letter choices for every cipher token, one at a time. But we already know that the substitution ciphers described here exhibit determinism in the deciphering direction, 1 i.e., although we have no idea about the key mappings themselves, we do know that there exists only a single plaintext letter mapping for every cipher symbol type in the true key. So sampling plaintext choices for every cipher token separately is not an efficient strategy: our sampler may spend too much time exploring invalid keys (which map the same cipher symbol to different plaintext letters).", "cite_spans": [ { "start": 63, "end": 86, "text": "(Geman and Geman, 1984)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "Instead, we use a type sampling technique similar to the one proposed by Liang et al. (2010) . Under this scheme, we sample plaintext letter choices for each cipher symbol type. 
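A toy sketch of such a type sampler, assuming a two-letter alphabet and made-up bigram scores (stand-ins for the paper's LM and channel model): re-sampling the letter for one cipher type rescores the whole implied plaintext, so the one-letter-per-type constraint holds by construction:

```python
import math
import random

# Type sampling (sketch): the key assigns ONE plaintext letter per cipher
# symbol type, so changing a type's letter updates every position where
# that symbol occurs. The bigram scores are toy values, not the paper's LM.
LETTERS = "AB"
BIGRAM_LOGP = {("A", "B"): math.log(0.6), ("B", "A"): math.log(0.6),
               ("A", "A"): math.log(0.4), ("B", "B"): math.log(0.4)}

def score(plaintext):
    # Log-probability of a letter sequence under the toy bigram LM.
    return sum(BIGRAM_LOGP[(a, b)] for a, b in zip(plaintext, plaintext[1:]))

def sample_type(key, sym, ciphertext, rng):
    # Gibbs step: draw a new letter for cipher type `sym`, weighting each
    # candidate by the probability of the whole resulting plaintext.
    weights = [math.exp(score([dict(key, **{sym: l})[c] for c in ciphertext]))
               for l in LETTERS]
    r = rng.random() * sum(weights)
    for letter, w in zip(LETTERS, weights):
        r -= w
        if r <= 0:
            return dict(key, **{sym: letter})
    return dict(key, **{sym: LETTERS[-1]})

rng = random.Random(0)
key = {"x": "A", "y": "A"}        # initial key: both cipher types decode to "A"
for _ in range(20):               # sweeps over the cipher symbol types
    for sym in "xy":
        key = sample_type(key, sym, "xyxyxy", rng)
print(key)  # alternating letters score higher, so "x" and "y" tend to differ
```
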
In every step, we sample a new plaintext letter for a cipher type and update the entire plaintext hypothesis (i.e., plaintext letters at all corresponding positions) to reflect this change. For example, if we sample a new choice p_new for a cipher symbol which occurs at positions 4, 10, 18, then we update plaintext letters p_4, p_10 and p_18 with the new choice p_new.", "cite_spans": [ { "start": 73, "end": 92, "text": "Liang et al. (2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "Using the property of exchangeability, we derive an incremental formula for re-scoring the probability of a new derivation based on the probability of the old derivation-when sampling at position i, we pretend that the area affected (within a context window around i) in the current plaintext hypothesis occurs at the end of the corpus, so that both the old and new derivations share the same cache. 2 While we may make corpus-wide changes to a derivation in every sampling step, exchangeability allows us to perform scoring in an efficient manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "Combining letter n-gram language models with word dictionaries: Many existing probabilistic approaches use statistical letter n-gram language models of English to assign P(p) probabilities to plaintext hypotheses during decipherment. 
Other decryption techniques rely on word dictionaries (using words from an English dictionary) for attacking substitution ciphers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "Unlike previous approaches, our decipherment method combines information from both sources: letter n-grams and word dictionaries. We build an interpolated word+n-gram LM and use it to assign P(p) probabilities to any plaintext letter sequence p_1...p_n. 3 The advantage is that it helps direct the sampler towards plaintext hypotheses that resemble natural language-high probability letter sequences which form valid words such as \"H E L L O\" instead of sequences like \"T X H R T\". But in addition to this, using letter n-gram information makes our model robust against variations in the original plaintext (for example, unseen words or misspellings as in the case of the Zodiac-408 cipher) which can easily throw off dictionary-based attacks. Also, it is hard for a point-wise (or type) sampler to \"find words\" starting from a random initial sample, but easier to \"find n-grams\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "Sampling for ciphers without spaces: For ciphers without spaces, dictionaries are hard to use because we do not know where words start and end. We introduce a new sampling operator which counters this problem and allows us to perform inference using the same decipherment model described earlier. In a first sampling pass, we sample from 26 plaintext letter choices (e.g., \"A\", \"B\", \"C\", ...) 
for every cipher symbol type as before. We then run a second pass using a new sampling operator that iterates over adjacent plaintext letter pairs p_{i-1}, p_i in the current hypothesis and samples from two choices-(1) add a word boundary (space character \" \") between p_{i-1} and p_i, or (2) remove an existing space character between p_{i-1} and p_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "For example, given the English plaintext hypothesis \"...ABOY...\", there are two sampling choices for the letter pair (A, B) in the second step. If we decide to add a word boundary, our new plaintext hypothesis becomes \"...A BOY...\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "We compute the derivation probability of the new sample using the same efficient scoring procedure described earlier. The new strategy allows us to apply Bayesian decipherment even to ciphers without spaces. As a result, we now have a new decipherment method that consistently works for a range of different types of substitution ciphers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayesian Decipherment", "sec_num": "3.1" }, { "text": "Decoding the ciphertext: After the sampling run has finished, 4 we choose the final sample as our English plaintext decipherment output. 
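The second-pass boundary operator can be sketched as a local move over adjacent positions. For brevity this sketch scores hypotheses with a tiny word list and accepts greedily, where the paper scores with the word+n-gram LM and samples:

```python
# Word-boundary operator (sketch): for each adjacent letter pair, propose
# adding or removing a space and keep the higher-scoring hypothesis.
# The greedy accept and the tiny dictionary below are illustrative
# stand-ins for the paper's sampling and word+n-gram LM scoring.
WORDS = {"A", "BOY", "HELLO", "WORLD"}

def score(hypothesis: str) -> int:
    # Reward tokens that are dictionary words (very crude LM stand-in).
    return sum(1 for tok in hypothesis.split() if tok in WORDS)

def resample_boundaries(hypothesis: str) -> str:
    i = 1
    while i < len(hypothesis):
        if hypothesis[i] == " ":
            proposal = hypothesis[:i] + hypothesis[i + 1:]    # remove space
        else:
            proposal = hypothesis[:i] + " " + hypothesis[i:]  # add space
        if score(proposal) > score(hypothesis):
            hypothesis = proposal
        i += 1
    return hypothesis

print(resample_boundaries("ABOY"))        # A BOY
print(resample_boundaries("HELLOWORLD"))  # HELLO WORLD
```
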
4 For letter substitution decipherment, we want to keep the language model probabilities fixed during training, and hence we set the prior on that model to be high (\u03b1 = 10^4). We use a sparse prior for the channel (\u03b2 = 0.01). We instantiate a key which matches frequently occurring plaintext letters to frequent cipher symbols, use this to generate an initial sample for the given ciphertext, and run the sampler for 5000 iterations. We use a linear annealing schedule during sampling, decreasing the temperature from 10 \u2192 1.", "cite_spans": [ { "start": 137, "end": 138, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "E K I L L I N G P E O P L E B E C A U S E I T I S S O M U C H F U N I T I A M O R E F U N T H A N K I L L I N G W I L D G A M E I N T H E F O R R E S T B E C A U S E M A N I S T H E M O", "sec_num": null }, { "text": "We run decipherment experiments on different types of letter substitution ciphers (described in Section 2). In particular, we work with the following three ciphers:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "(a) 414-letter Simple Substitution Cipher (b) 414-letter Homophonic Cipher (with spaces) (c) Zodiac-408 Cipher. Methods: For each cipher, we run and compare the output from two different decipherment approaches:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "1. EM Method using letter n-gram LMs following the approach of Knight et al. (2006). They use the EM algorithm to estimate the channel parameters \u03b8 during decipherment training. The given ciphertext c is then decoded by using the Viterbi algorithm to choose the plaintext decoding p that maximizes P(p) \u2022 P\u03b8(c|p)^3, stretching the channel probabilities.", "cite_spans": [ { "start": 63, "end": 83, "text": "Knight et al. 
(2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "2. Bayesian Decipherment method using word+n-gram LMs (novel approach described in Section 3.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "We evaluate the quality of a particular decipherment as the percentage of cipher tokens that are decoded correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation:", "sec_num": null }, { "text": "Results: Figure 2 compares the decipherment performance of the EM method with that of Bayesian decipherment (using type sampling and sparse priors) on three different types of substitution ciphers. Results show that our new approach (Bayesian) outperforms the EM method on all three ciphers, solving them completely. Even with a 3-gram letter LM, our method yields a +63% improvement in decipherment accuracy over EM on the homophonic cipher with spaces. We observe that the word+3-gram LM proves highly effective when tackling more complex ciphers and cracks the Zodiac-408 cipher. Figure 1 shows samples from the Bayesian decipherment output for all three ciphers. For ciphers without spaces, our method automatically guesses the word boundaries for the plaintext hypothesis. 
For the Zodiac-408 cipher, we compare the performance achieved by Bayesian decipherment under different settings:", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 17, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 576, "end": 584, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Evaluation:", "sec_num": null }, { "text": "\u2022 Letter n-gram versus Word+n-gram LMs: Figure 2 shows that using a word+3-gram LM instead of a 3-gram LM results in a +75% improvement in decipherment accuracy.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 48, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Evaluation:", "sec_num": null }, { "text": "\u2022 Sparse versus Non-sparse priors: We find that using a sparse prior for the channel model (\u03b2 = 0.01 versus 1.0) helps for such problems and produces better decipherment results (97.8% versus 24.0% accuracy).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation:", "sec_num": null }, { "text": "\u2022 Type versus Point-wise sampling: Unlike point-wise sampling, type sampling quickly converges to better decipherment solutions. After 5000 sampling passes over the entire data, decipherment output from type sampling scores 97.8% accuracy compared to 14.5% for the point-wise sampling run. 5 We also perform experiments on shorter substitution ciphers. On a 98-letter simple substitution cipher, EM using a 3-gram LM achieves 41% accuracy, whereas the method from Ravi and Knight (2009) scores 84% accuracy. 
Our Bayesian method performs the best in this case, achieving 100% accuracy with the word+3-gram LM.", "cite_spans": [ { "start": 289, "end": 290, "text": "5", "ref_id": null }, { "start": 461, "end": 483, "text": "Ravi and Knight (2009)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation:", "sec_num": null }, { "text": "In this work, we presented a novel Bayesian decipherment approach that can effectively solve a variety of substitution ciphers. Unlike previous approaches, our method combines information from letter n-gram language models and word dictionaries and provides a robust decipherment model. We empirically evaluated the method on different substitution ciphers and achieved almost perfect decipherments on all of them. Using Bayesian decipherment, we can successfully solve the Zodiac-408 cipher, the first time this has been achieved by a fully automatic method in a strict decipherment scenario. 5 Both sampling runs were seeded with the same random initial sample.", "cite_spans": [ { "start": 99, "end": 100, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "For future work, there are other interesting decipherment tasks where our method can be applied. One challenge is to crack the unsolved Zodiac-340 cipher, which presents a much harder problem than the solved version.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "This assumption does not strictly apply to the Zodiac-408 cipher, where a few cipher symbols exhibit non-determinism in the decipherment direction as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The relevant context window that is affected when sampling at position i is determined by the word boundaries to the left and right of i. 3 We set the interpolation weights for the word and n-gram LM as (0.9, 0.1). 
The word-based LM is constructed from a dictionary consisting of 9,881 frequently occurring words collected from Wikipedia articles. We train the letter n-gram LM on 50 million words of English text available from the Linguistic Data Consortium.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the reviewers for their comments. This research was supported by NSF grant IIS-0904684.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Gibbs sampler for phrasal synchronous grammar induction", "authors": [ { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP)", "volume": "", "issue": "", "pages": "782--790", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phil Blunsom, Trevor Cohn, Chris Dyer, and Miles Osborne. 2009. A Gibbs sampler for phrasal synchronous grammar induction. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP), pages 782-790.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bayesian inference for finite-state transducers", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Pauls", "suffix": "" }, { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL/HLT)", "volume": "", "issue": "", "pages": "447--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang, Jonathan Graehl, Kevin Knight, Adam Pauls, and Sujith Ravi. 2010. Bayesian inference for finite-state transducers. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL/HLT), pages 447-455.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An exact A* method for deciphering letter-substitution ciphers", "authors": [ { "first": "Eric", "middle": [], "last": "Corlett", "suffix": "" }, { "first": "Gerald", "middle": [], "last": "Penn", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1040--1047", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Corlett and Gerald Penn. 2010. An exact A* method for deciphering letter-substitution ciphers. 
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1040-1047.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "Arthur", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "Nan", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "Donald", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society, Series B", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The Markov Chain Monte Carlo revolution", "authors": [ { "first": "Persi", "middle": [], "last": "Diaconis", "suffix": "" } ], "year": 2008, "venue": "Bulletin of the American Mathematical Society", "volume": "46", "issue": "2", "pages": "179--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Persi Diaconis. 2008. The Markov Chain Monte Carlo revolution. Bulletin of the American Mathematical Society, 46(2):179-205.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Incorporating non-local information into information extraction systems by Gibbs sampling", "authors": [ { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "363--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Finkel, Trond Grenager, and Christopher Manning. 2005. 
Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 363-370.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical techniques for language recognition: An introduction and guide for cryptanalysts", "authors": [ { "first": "Ravi", "middle": [], "last": "Ganesan", "suffix": "" }, { "first": "Alan", "middle": [ "T" ], "last": "Sherman", "suffix": "" } ], "year": 1993, "venue": "Cryptologia", "volume": "17", "issue": "4", "pages": "321--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ravi Ganesan and Alan T. Sherman. 1993. Statistical techniques for language recognition: An introduction and guide for cryptanalysts. Cryptologia, 17(4):321-366.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images", "authors": [ { "first": "Stuart", "middle": [], "last": "Geman", "suffix": "" }, { "first": "Donald", "middle": [], "last": "Geman", "suffix": "" } ], "year": 1984, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "6", "issue": "6", "pages": "721--741", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart Geman and Donald Geman. 1984. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721-741.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A fully Bayesian approach to unsupervised part-of-speech tagging", "authors": [ { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Griffiths", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "744--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon Goldwater and Thomas Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tagging. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 744-751.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A fast method for cryptanalysis of substitution ciphers", "authors": [ { "first": "Thomas", "middle": [], "last": "Jakobsen", "suffix": "" } ], "year": 1995, "venue": "Cryptologia", "volume": "19", "issue": "3", "pages": "265--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Jakobsen. 1995. A fast method for cryptanalysis of substitution ciphers. Cryptologia, 19(3):265-274.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Unsupervised analysis for decipherment problems", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Anish", "middle": [], "last": "Nair", "suffix": "" }, { "first": "Nishit", "middle": [], "last": "Rathod", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "499--506", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. 
Unsupervised analysis for decipherment problems. In Proceedings of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics, pages 499-506.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Type-based MCMC", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Conference on Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "573--581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2010. Type-based MCMC. In Proceedings of the Conference on Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 573-581.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Robust dictionary attack of short simple substitution ciphers", "authors": [ { "first": "Edwin", "middle": [], "last": "Olson", "suffix": "" } ], "year": 2007, "venue": "Cryptologia", "volume": "31", "issue": "4", "pages": "332--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edwin Olson. 2007. Robust dictionary attack of short simple substitution ciphers. 
Cryptologia, 31(4):332-342.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Evolutionary algorithm for decryption of monoalphabetic homophonic substitution ciphers encoded as constraint satisfaction problems", "authors": [ { "first": "David", "middle": [], "last": "Oranchak", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation", "volume": "", "issue": "", "pages": "1717--1718", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Oranchak. 2008. Evolutionary algorithm for decryption of monoalphabetic homophonic substitution ciphers encoded as constraint satisfaction problems. In Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, pages 1717-1718.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Breaking substitution ciphers using a relaxation algorithm", "authors": [ { "first": "Shmuel", "middle": [], "last": "Peleg", "suffix": "" }, { "first": "Azriel", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1979, "venue": "Comm. ACM", "volume": "22", "issue": "11", "pages": "598--605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shmuel Peleg and Azriel Rosenfeld. 1979. Breaking substitution ciphers using a relaxation algorithm. Comm. ACM, 22(11):598-605.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Attacking decipherment problems optimally with low-order n-gram models", "authors": [ { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "812--819", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sujith Ravi and Kevin Knight. 2008. Attacking decipherment problems optimally with low-order n-gram models. 
In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP), pages 812-819.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Probabilistic methods for a Japanese syllable cipher", "authors": [ { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the International Conference on the Computer Processing of Oriental Languages (ICCPOL)", "volume": "", "issue": "", "pages": "270--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sujith Ravi and Kevin Knight. 2009. Probabilistic methods for a Japanese syllable cipher. In Proceedings of the International Conference on the Computer Processing of Oriental Languages (ICCPOL), pages 270-281.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A statistical model for lost language decipherment", "authors": [ { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1048--1057", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1048-1057.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Samples from the ciphertext sequence, corresponding English plaintext message and output from Bayesian decipherment (using word+3-gram LM) for three different ciphers: (a) Simple Substitution Cipher (top), (b) Homophonic Substitution Cipher with spaces (middle), and (c) Zodiac-408 Cipher (bottom)." 
}, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "Comparison of decipherment accuracies for EM versus Bayesian method when using different language models of English on the three substitution ciphers: (a) 414-letter Simple Substitution Cipher, (b) 414-letter Homophonic Substitution Cipher (with spaces), and (c) the famous Zodiac-408 Cipher." } } } }