{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:31:06.148344Z" }, "title": "Shellcode IA32: A Dataset for Automatic Shellcode Generation", "authors": [ { "first": "Pietro", "middle": [], "last": "Liguori", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Naples Federico II", "location": { "settlement": "Naples", "country": "Italy" } }, "email": "pietro.liguori@unina.it" }, { "first": "Erfan", "middle": [], "last": "Al-Hossami", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Carolina at Charlotte", "location": { "settlement": "Charlotte", "region": "NC", "country": "USA" } }, "email": "" }, { "first": "Domenico", "middle": [], "last": "Cotroneo", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Naples Federico II", "location": { "settlement": "Naples", "country": "Italy" } }, "email": "cotroneo@unina.it" }, { "first": "Roberto", "middle": [], "last": "Natella", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Naples Federico II", "location": { "settlement": "Naples", "country": "Italy" } }, "email": "roberto.natella@unina.it" }, { "first": "Bojan", "middle": [], "last": "Cukic", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Carolina at Charlotte", "location": { "settlement": "Charlotte", "region": "NC", "country": "USA" } }, "email": "bcukic@uncc.edu" }, { "first": "Samira", "middle": [], "last": "Shaikh", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Carolina at Charlotte", "location": { "settlement": "Charlotte", "region": "NC", "country": "USA" } }, "email": "samirashaikh@uncc.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We take the first step to address the task of automatically generating shellcodes, i.e., small pieces of code used as a payload in the exploitation of a software 
vulnerability, starting from natural language comments. We assemble and release a novel dataset (Shellcode IA32), consisting of challenging but common assembly instructions with their natural language descriptions. We experiment with standard methods in neural machine translation (NMT) to establish baseline performance levels on this task.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We take the first step to address the task of automatically generating shellcodes, i.e., small pieces of code used as a payload in the exploitation of a software vulnerability, starting from natural language comments. We assemble and release a novel dataset (Shellcode IA32), consisting of challenging but common assembly instructions with their natural language descriptions. We experiment with standard methods in neural machine translation (NMT) to establish baseline performance levels on this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A growing body of research has dealt with automated code generation: given a natural language description, a code comment or intent, the task is to generate a piece of code in a programming language (Yin and Neubig, 2017; Ling et al., 2016) . 
The task of generating programming code snippets, also referred to as semantic parsing (Yin and Neubig, 2019; Xu et al., 2020) , has been previously addressed to generate executable snippets in domain-specific languages (Guu et al., 2017; Long et al., 2016) , and several programming languages, including Python (Yin and Neubig, 2017) and Java (Ling et al., 2016) .", "cite_spans": [ { "start": 199, "end": 221, "text": "(Yin and Neubig, 2017;", "ref_id": "BIBREF28" }, { "start": 222, "end": 240, "text": "Ling et al., 2016)", "ref_id": "BIBREF15" }, { "start": 330, "end": 352, "text": "(Yin and Neubig, 2019;", "ref_id": "BIBREF29" }, { "start": 353, "end": 369, "text": "Xu et al., 2020)", "ref_id": "BIBREF26" }, { "start": 463, "end": 481, "text": "(Guu et al., 2017;", "ref_id": "BIBREF9" }, { "start": 482, "end": 500, "text": "Long et al., 2016)", "ref_id": "BIBREF17" }, { "start": 555, "end": 577, "text": "(Yin and Neubig, 2017)", "ref_id": "BIBREF28" }, { "start": 587, "end": 606, "text": "(Ling et al., 2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Related Work", "sec_num": "1" }, { "text": "We consider the task of generating shellcodes, i.e., small pieces of code used as a payload to exploit software vulnerabilities. Shellcoding, in its most literal sense, means writing code that will return a remote shell when executed. It can represent any byte code that will be inserted into an exploit to accomplish the desired, malicious, task (Mason et al., 2009) . 
An example of a shellcode program in assembly language and the corresponding natural language comments are shown in Listing 1.", "cite_spans": [ { "start": 347, "end": 367, "text": "(Mason et al., 2009)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Related Work", "sec_num": "1" }, { "text": "1 global _start; Declare global _start. 2 section .text; Declare the text section. 3 _start:; Define the _start label. 4 xor eax, eax; Zero out the eax register 5 push eax; and push its contents on the stack. 6 push 0x68732f2f; Move /bin//sh 7 push 0x6e69622f; into the ebx register. 8 mov ebx, esp 9 push eax; Push the contents of eax onto the stack 10 mov edx, esp; and point edx to the stack register. 11 push ebx; Push the contents of ebx onto the stack 12 mov ecx, esp; and point ecx to the stack register. 13 mov al, 11; Put the system call 11 into the al register. 14 int 0x80; Make the kernel call.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Related Work", "sec_num": "1" }, { "text": "Listing 1: x86 assembly code used to spawn a /bin/sh shell on Linux OS. Lines 4-5, 6-7-8, 9-10, 11-12 are multi-line snippets generated by four different intents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Related Work", "sec_num": "1" }, { "text": "Shellcodes are important because they are the key element of security attacks: they represent code injected into victim software to take control of a machine, to escalate privileges, and to use the machine for malicious purposes such as DDoS attacks, data theft, and running malware (Arce, 2004) . Well-intentioned actors (security practitioners and product vendors) also develop shellcodes to run non-harmful proof-of-concept attacks, to show how security weaknesses can be exploited to identify vulnerabilities and patch systems. Thus, shellcode generation using (semi-) automated techniques has become a popular and very active research topic (Bao et al., 2017) . However, writing shellcodes is technically challenging since they are typically written in assembly language (cf. Listing 1). The most sophisticated shellcodes can reach hundreds of assembly lines of code. The task of shellcode generation has been addressed by several works and tools. Bao et al.
(2017) designed ShellSwap, a system that can modify an observed exploit and replace the original shellcode with an arbitrary replacement shellcode. The system uses symbolic tracing, with a combination of shellcode layout remediation and path kneading to achieve shellcode transplants. Pwntools (pwntools, Accessed: 2021-05-29) is a CTF framework and exploit development library written in Python. It is designed for rapid prototyping and development and intended to make exploit writing as simple as possible.", "cite_spans": [ { "start": 135, "end": 147, "text": "(Arce, 2004)", "ref_id": "BIBREF0" }, { "start": 498, "end": 516, "text": "(Bao et al., 2017)", "ref_id": "BIBREF3" }, { "start": 810, "end": 827, "text": "Bao et al. (2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Related Work", "sec_num": "1" }, { "text": "In contrast to previous work in the security literature, we approach this problem as a neural machine translation (NMT) task. We apply neural machine translation (Goodfellow et al., 2016) , which, unlike traditional phrase-based translation systems consisting of many small sub-components tuned separately, attempts to build and train a single, large neural network that reads a sentence and outputs a correct translation (Bahdanau et al., 2015) . NMT has emerged as a promising machine translation approach, showing superior performance on public benchmarks (Bojar et al., 2016) , and it is widely recognized as the premier method for the translation of different languages (Wu et al., 2016) . NMT has also been used to perform complex tasks on the UNIX operating system shell (Lin et al., 2017) (e.g., file manipulation and search) by stating goals in English (Lin et al., 2018) , to automatically generate commit messages (Liu et al., 2018) , etc.
However, NMT techniques have not previously been adopted to automatically generate software exploits from natural language comments.", "cite_spans": [ { "start": 157, "end": 182, "text": "(Goodfellow et al., 2016)", "ref_id": "BIBREF8" }, { "start": 419, "end": 442, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF2" }, { "start": 556, "end": 576, "text": "(Bojar et al., 2016)", "ref_id": "BIBREF4" }, { "start": 672, "end": 689, "text": "(Wu et al., 2016)", "ref_id": "BIBREF25" }, { "start": 851, "end": 877, "text": "English (Lin et al., 2018)", "ref_id": null }, { "start": 922, "end": 940, "text": "(Liu et al., 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Related Work", "sec_num": "1" }, { "text": "Since NMT is a data-driven approach to code generation, we need a dataset of intents in natural language and their corresponding translations (in our context, in assembly language) for shellcode generation. In this preliminary work, we address the lack of such a dataset by presenting Shellcode IA32, a dataset containing 3,200 lines of assembly code extracted from real shellcodes and described in the English language. Moreover, we present experiments on our dataset using a baseline technique, in order to establish performance levels for evaluating shellcode generation techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Related Work", "sec_num": "1" }, { "text": "We compiled a dataset, Shellcode IA32, specific to our task. This dataset consists of 3,200 examples of instructions in assembly language for IA-32 (the 32-bit version of the x86 Intel Architecture) from publicly-available security exploits.
We collected assembly programs used to generate shellcode from shell-storm (Shellcodes database for study cases, Accessed: 2021-04-22) and from Exploit Database (Exploit Database Shellcodes, Accessed: 2021-04-22), in the period between 2000 and 2020.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "Our focus is on Linux, the most common OS for security-critical network services. Accordingly, we added assembly instructions written with the Netwide Assembler (NASM) for Linux (Duntemann, 2000) . NASM is line-based. Figure 1 shows a simple example of a NASM source line. Every source line contains a combination of four fields: an optional label, used to represent either an identifier or a constant; a mnemonic or instruction, which identifies the purpose of the statement and is followed by zero or more operands specifying the data to be manipulated; and an optional comment, i.e., text ignored by the assembler. A mnemonic is not required if a line contains only a label or a comment. Each line of the Shellcode IA32 dataset represents a snippet-intent pair. The snippet is a line or a combination of multiple lines of assembly code, built by following the NASM syntax. The intent is a comment in the English language (cf. Listing 1).", "cite_spans": [ { "start": 174, "end": 191, "text": "(Duntemann, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 214, "end": 222, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "To take into account the variability of descriptions in natural language, multiple authors independently described different samples of the dataset in the English language. Where available, we used as natural language descriptions the comments written by the developers of the collected programs.
We enriched the dataset by adding examples of assembly programs for the IA-32 architecture from popular tutorials and books (Duntemann, 2011; Kusswurm, 2014; Tutorialspoint, Accessed: 2021-04-22) to understand how different authors and assembly experts describe the code and, thus, how to deal with the ambiguity of natural language in this specific context. Our dataset consists of \u223c 10% of instructions collected from books and guidelines and the rest from real shellcodes. Multi-line Snippets: To automatically generate shellcodes, we need to look beyond a one-to-one mapping between a line of code and its comment/intent. For example, a common operation in shellcodes is to save the ASCII \"/bin/sh\" into a register. This operation requires three distinct assembly instructions: push the hexadecimal values of the words \"/bin\" and \"//sh\" onto the stack register before moving the contents of the stack register into the destination register (lines 6-8 in Listing 1). It would be meaningless to consider these three instructions as separate. To address such situations, we include 510 lines (\u223c 16% of the dataset) of intents that generate multiple lines of shellcodes (separated by the newline character \\n). Table 1 shows two further examples of multi-line snippets with their natural language intent. Intent: jump short to the decode label if the contents of the al register is not equal to the contents of the cl register else jump to the shellcode label. Multi-line snippet: cmp al, cl \\n jne short decode \\n jmp shellcode. Intent: jump to the label recv http request if the contents of the eax register is not zero else subtract the value 0x6 from the contents of the ecx register. Multi-line snippet: test eax, eax \\n jnz recv http request \\n sub ecx, 0x6. Statistics: Table 2 presents the descriptive statistics of the Shellcode IA32 dataset. The dataset contains 52 distinct assembly instructions (excluding function, section, and label declarations).
The two most frequent assembly instructions are mov (\u223c 30% frequency), used to move data into/from registers/memory or to invoke a system call, and push (\u223c 22% frequency), which is used to push a value onto the stack. The next most frequent instructions are cmp (\u223c 7% frequency), followed by the xor and jmp instructions (\u223c 4% frequency). The low-frequency words (i.e., the words that appear only once or twice in the dataset) account for 3.6% and 7.3% of the natural language and the assembly language, respectively. Figure 2 shows the distribution of the number of tokens across the intents and snippets in the dataset. We publicly share our entire Shellcode IA32 dataset on a GitHub repository (https://github.com/dessertlab/Shellcode_IA32). Size of our dataset: Our dataset contains 3,200 instances, which may seem relatively small compared to the training data available for most common NLP tasks. We note, however, that our dataset is comparable in size to the CoNaLa annotated dataset (2,379 training and 500 test examples), which is one of the standard datasets for English-Python code generation (Yin et al., 2018) . Further, Shellcode IA32 contains a higher percentage of multi-line snippets (\u223c 16% vs. \u223c 4%). Figure 2: Histogram of the Shellcode IA32 dataset showcasing the distribution of token counts across intents and snippets.", "cite_spans": [ { "start": 417, "end": 434, "text": "(Duntemann, 2011;", "ref_id": "BIBREF6" }, { "start": 435, "end": 450, "text": "Kusswurm, 2014;", "ref_id": "BIBREF12" }, { "start": 451, "end": 488, "text": "Tutorialspoint, Accessed: 2021-04-22)", "ref_id": null }, { "start": 3316, "end": 3334, "text": "(Yin et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 1962, "end": 1969, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 2068, "end": 2075, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 2755, "end": 2763, "text": "Figure 2", "ref_id": null }, { "start": 3467, "end": 3475, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "We also note here that existing code generation datasets do contain a larger, potentially noisy, subset of training examples (numbering in the thousands) obtained by mining the web. For example, the CoNaLa mined (as opposed to the CoNaLa annotated) dataset contains 598,237 training examples mined directly from Stack Overflow (Yin et al., 2018) . In our case, although shellcodes are written in assembly language, it is not feasible to simply mine examples of natural language-assembly pairs from the web: not all assembly programs are shellcodes.
Thus, our Shellcode IA32 dataset, which contains \u223c 20 years of shellcodes from a variety of sources, is the largest collection of shellcodes in assembly available to date.", "cite_spans": [ { "start": 328, "end": 346, "text": "(Yin et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "We performed a set of preliminary experiments with our dataset, in order to assess the applicability of NMT in the context of shellcode generation and to establish baseline performance levels for evaluating shellcode generation techniques in future research. Similar to the encoder-decoder architecture with attention (Bahdanau et al., 2015), we use a bi-directional LSTM as the encoder to transform an embedded intent sequence E = [e_1, ..., e_{T_S}] into a sequence c of hidden states of equal length. We implement this architecture with Bahdanau-style attention (Bahdanau et al., 2015) using xnmt. We use the Adam optimizer (Kingma and Ba, 2015) with \u03b2_1 = 0.9 and \u03b2_2 = 0.999. The last step is inference, during which the autoregressive inference component uses beam search with a beam size of 5. The train/dev/test split is train (N = 2560), dev (N = 320), and test (N = 320), using a random 80/10/10 ratio. The test set includes 44 multi-line snippets (13.75% of the test set).", "cite_spans": [ { "start": 545, "end": 574, "text": "(Bahdanau et al., 2015) using", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preliminary Evaluation", "sec_num": "3" }, { "text": "Following prior work in this area (Ling et al., 2016; Yin and Neubig, 2017; Oda et al., 2015) , we evaluate the translation performance in terms of averaged token-level BLEU scores (Papineni et al., 2002) . BLEU uses modified n-gram precision and a length difference penalty to evaluate the quality of the output generated by the model compared to the reference.
BLEU measures translation quality by the accuracy of translating n-grams to n-grams, for values of n usually ranging between 1 and 4 (Han, 2016; Munkova et al., 2020) . We also measure the performance of the evaluation task in terms of exact match accuracy (ACC), which is the fraction of exactly matching samples between the predicted output and the reference (Yin and Neubig, 2017) . Both metrics range between 0 and 1.", "cite_spans": [ { "start": 511, "end": 522, "text": "(Han, 2016;", "ref_id": "BIBREF10" }, { "start": 523, "end": 544, "text": "Munkova et al., 2020)", "ref_id": "BIBREF19" }, { "start": 739, "end": 761, "text": "(Yin and Neubig, 2017)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminary Evaluation", "sec_num": "3" }, { "text": "During our experiments, we set a basic configuration of the model: \u03b1 = 0.001, layers = 1, vocabulary size = 4,000, epochs (with early stopping enforced) = 200, beam size = 5, minimum word frequency = 1. Next, we performed experiments by varying the dimensionality of the layers from 64 to 1024, and the number of layers from 1 to 4, while keeping all other hyper-parameters constant. Table 3 summarizes the results. We notice that increasing the number of layers leads to worse performance, while a layer dimension between 256 and 512 is found to be the best option.", "cite_spans": [], "ref_spans": [ { "start": 384, "end": 392, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Preliminary Evaluation", "sec_num": "3" }, { "text": "All experiments were performed on a Linux OS running on a virtual machine with 8 CPU cores and 8 GB RAM.
The computational times are highly dependent on the model hyper-parameters, and range from a few minutes to \u223c 105 minutes, with an average training time of \u223c 28 minutes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Evaluation", "sec_num": "3" }, { "text": "Automated metrics (BLEU and accuracy) provide a somewhat limited window into the efficacy of the models to accomplish our task: the task of automatically generating assembly code from natural language intents. We conducted a qualitative analysis of the outputs to address this issue and present our findings through cherry- and lemon-picked examples from our test set (Table 4 ). In particular, we manually inspected the outputs predicted by the best model configuration found in Table 3 (layers number = 1, layer dimension = 512).", "cite_spans": [], "ref_spans": [ { "start": 367, "end": 375, "text": "(Table 4", "ref_id": "TABREF6" }, { "start": 479, "end": 486, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "4" }, { "text": "The first two rows of Table 4 are illustrative examples of categories of intent-snippet pairs that the model can successfully translate. The first row demonstrates the ability of the model to generate multi-line snippets from a relatively abstract intent. The example in the second row shows the model's ability to properly use the instruction lea with the correct addressing mode (specified by the brackets [] in NASM syntax) to translate the intent. We note here that although the output would be considered incorrect based on automated metrics (e.g.
BLEU-4), it is considered correct using manual inspection.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 29, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "4" }, { "text": "We also highlight problems with the models through illustrative examples of failure outputs (Rows 3 and 4, Table 4 ). In the third row of the table, the model generates the wrong instruction due to its failure to use implicit knowledge (i.e., the bit-wise inversion needed to negate the contents of the register), which was not explicitly mentioned in the intent. Row 4 illustrates the model's failure to predict the right instruction among the fifteen different conditional jumps in the dataset (jle instead of jge) in an if-then statement. To summarize, the failures we observed are caused by a lack of implicit intent knowledge, by the model generating incorrect instructions/identifiers (e.g., register names, labels, etc.), or by both.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "4" }, { "text": "Recognizing that attackers use exploit code as a weapon, it is important to specify that the goal of proof-of-concept (POC) exploits is not to cause harm but to surface security weaknesses within the software. Identifying such security issues allows companies to patch vulnerabilities and protect themselves against attacks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethical Considerations", "sec_num": "5" }, { "text": "Offensive security is a sub-field of security research that employs ethical hackers to probe a system for vulnerabilities; it can also be used as a technique to disrupt an attacker.
Automatic exploit generation (AEG), an offensive security technique, is a developing area of research that aims to automate the exploit generation process and to explore and test critical vulnerabilities before they are discovered by attackers (Avgerinos et al., 2014) . Indeed, studying exploits on compromised systems can provide valuable information about the technical skills, degree of experience, and intent of the attackers who developed or used them. Using this information, it is possible to implement measures to detect and prevent attacks (Arce, 2004) .", "cite_spans": [ { "start": 418, "end": 442, "text": "(Avgerinos et al., 2014)", "ref_id": "BIBREF1" }, { "start": 724, "end": 736, "text": "(Arce, 2004)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Ethical Considerations", "sec_num": "5" }, { "text": "We address the problem of automated exploit generation through NLP. We use neural machine translation to translate natural language intents into assembly code. The contribution of this work is a new dataset, Shellcode IA32, containing 3,200 pairs of assembly language code snippets and their corresponding intents in English. These assembly language snippets can be combined to generate attacks or exploits on Linux OS running on Intel Architecture 32-bit machines. Shellcode IA32 represents a first step towards the ambitious goal of automatically generating shellcodes from natural language.
Our experimental evaluation has shown promising early results, demonstrating the feasibility of generating assembly code instructions with high accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "This work has been partially supported by the University of Naples Federico II in the frame of the Programme F.R.A., project id OSTAGE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The shellcode generation. IEEE security & privacy", "authors": [ { "first": "Iv\u00e1n", "middle": [], "last": "Arce", "suffix": "" } ], "year": 2004, "venue": "", "volume": "2", "issue": "", "pages": "72--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iv\u00e1n Arce. 2004. The shellcode generation. IEEE security & privacy, 2(5):72-76.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automatic exploit generation. Commun", "authors": [ { "first": "Thanassis", "middle": [], "last": "Avgerinos", "suffix": "" }, { "first": "Sang", "middle": [ "Kil" ], "last": "Cha", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Rebert", "suffix": "" }, { "first": "Edward", "middle": [ "J" ], "last": "Schwartz", "suffix": "" }, { "first": "Maverick", "middle": [], "last": "Woo", "suffix": "" }, { "first": "David", "middle": [], "last": "Brumley", "suffix": "" } ], "year": 2014, "venue": "", "volume": "57", "issue": "", "pages": "74--84", "other_ids": { "DOI": [ "10.1145/2560217.2560219" ] }, "num": null, "urls": [], "raw_text": "Thanassis Avgerinos, Sang Kil Cha, Alexandre Rebert, Edward J. Schwartz, Maverick Woo, and David Brumley. 2014. Automatic exploit generation. Commun.
ACM, 57(2):74-84.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Your exploit is mine: Automatic shellcode transplant for remote exploits", "authors": [ { "first": "Tiffany", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Ruoyu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Shoshitaishvili", "suffix": "" }, { "first": "David", "middle": [], "last": "Brumley", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE Symposium on Security and Privacy (SP)", "volume": "", "issue": "", "pages": "824--839", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tiffany Bao, Ruoyu Wang, Yan Shoshitaishvili, and David Brumley. 2017. Your exploit is mine: Automatic shellcode transplant for remote exploits. In 2017 IEEE Symposium on Security and Privacy (SP), pages 824-839.
IEEE.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Findings of the 2016 conference on machine translation", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Antonio", "middle": [ "Jimeno" ], "last": "Yepes", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the First Conference on Machine Translation", "volume": "2", "issue": "", "pages": "131--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131-198.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Assembly language step-by-step: programming with DOS and Linux", "authors": [ { "first": "Jeff", "middle": [], "last": "Duntemann", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Duntemann. 2000. Assembly language step-by-step: programming with DOS and Linux.
John Wiley & Sons.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Assembly language step-by-step: Programming with Linux", "authors": [ { "first": "Jeff", "middle": [], "last": "Duntemann", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Duntemann. 2011. Assembly language step-by-step: Programming with Linux. John Wiley & Sons.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Exploit Database Shellcodes", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Exploit Database Shellcodes. Accessed: 2021-04-22. exploit-db.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Deep learning", "authors": [ { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning. MIT press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "From language to programs: Bridging reinforcement learning and maximum marginal likelihood", "authors": [ { "first": "Kelvin", "middle": [], "last": "Guu", "suffix": "" }, { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1051--1062", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelvin Guu, Panupong Pasupat, Evan Liu, and Percy Liang. 2017.
From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1051-1062.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Machine translation evaluation resources and methods: A survey", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.04515" ] }, "num": null, "urls": [], "raw_text": "Lifeng Han. 2016. Machine translation evaluation resources and methods: A survey. arXiv preprint arXiv:1605.04515.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Modern X86 Assembly Language Programming", "authors": [ { "first": "Daniel", "middle": [], "last": "Kusswurm", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Kusswurm. 2014. Modern X86 Assembly Language Programming.
Springer.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Program synthesis from natural language using recurrent neural networks", "authors": [ { "first": "Xi Victoria", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Chenglong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Deric", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Michael D", "middle": [], "last": "Ernst", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xi Victoria Lin, Chenglong Wang, Deric Pang, Kevin Vu, and Michael D Ernst. 2017. Program synthesis from natural language using recurrent neural networks. University of Washington Department of Computer Science and Engineering, Seattle, WA, USA, Tech. Rep. UW-CSE-17-03-01.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "NL2Bash: A corpus and semantic parser for natural language interface to the linux operating system", "authors": [ { "first": "Xi Victoria", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Chenglong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Michael", "middle": [ "D" ], "last": "Ernst", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, and Michael D. Ernst. 2018. NL2Bash: A corpus and semantic parser for natural language interface to the linux operating system. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.
European Language Resources Association (ELRA).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Latent predictor networks for code generation", "authors": [ { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Tom\u00e1s", "middle": [], "last": "Kocisk\u00fd", "suffix": "" }, { "first": "Andrew", "middle": [ "W" ], "last": "Senior", "suffix": "" }, { "first": "Fumin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1s Kocisk\u00fd, Andrew W. Senior, Fumin Wang, and Phil Blunsom. 2016. Latent predictor networks for code generation. CoRR, abs/1603.06744.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural-machine-translation-based commit message generation: how far are we?", "authors": [ { "first": "Zhongxin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Ahmed", "middle": [ "E" ], "last": "Hassan", "suffix": "" }, { "first": "David", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Zhenchang", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Xinyu", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering", "volume": "", "issue": "", "pages": "373--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhongxin Liu, Xin Xia, Ahmed E Hassan, David Lo, Zhenchang Xing, and Xinyu Wang. 2018. Neural-machine-translation-based commit message generation: how far are we?
In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 373-384.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Simpler context-dependent logical forms via model projections", "authors": [ { "first": "Reginald", "middle": [], "last": "Long", "suffix": "" }, { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1456--1465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reginald Long, Panupong Pasupat, and Percy Liang. 2016. Simpler context-dependent logical forms via model projections. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1456-1465.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "English shellcode", "authors": [ { "first": "Joshua", "middle": [], "last": "Mason", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Small", "suffix": "" }, { "first": "Fabian", "middle": [], "last": "Monrose", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Macmanus", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 16th ACM conference on Computer and communications security", "volume": "", "issue": "", "pages": "524--533", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshua Mason, Sam Small, Fabian Monrose, and Greg MacManus. 2009. English shellcode.
In Proceedings of the 16th ACM conference on Computer and communications security, pages 524-533.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Evaluation of machine translation quality through the metrics of error rate and accuracy", "authors": [ { "first": "Dasa", "middle": [], "last": "Munkova", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Hajek", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Munk", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Skalka", "suffix": "" } ], "year": 2020, "venue": "Procedia Computer Science", "volume": "171", "issue": "", "pages": "1327--1336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dasa Munkova, Petr Hajek, Michal Munk, and Jan Skalka. 2020. Evaluation of machine translation quality through the metrics of error rate and accuracy. Procedia Computer Science, 171:1327-1336.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "XNMT: The eXtensible neural machine translation toolkit", "authors": [ { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Sperber", "suffix": "" }, { "first": "Xinyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Felix", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Sarguna", "middle": [], "last": "Padmanabhan", "suffix": "" }, { "first": "Ye", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Devendra", "middle": [], "last": "Sachan", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Arthur", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Godard", "suffix": "" }, { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Rachid", "middle": [], "last": "Riad", "suffix": "" }, { "first": "Liming", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas", "volume": "1", "issue": "",
"pages": "185--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Padmanabhan, Ye Qi, Devendra Sachan, Philip Arthur, Pierre Godard, John Hewitt, Rachid Riad, and Liming Wang. 2018. XNMT: The eXtensible neural machine translation toolkit. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 185-192, Boston, MA. Association for Machine Translation in the Americas.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning to generate pseudo-code from source code using statistical machine translation (t)", "authors": [ { "first": "Yusuke", "middle": [], "last": "Oda", "suffix": "" }, { "first": "Hiroyuki", "middle": [], "last": "Fudaba", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Hideaki", "middle": [], "last": "Hata", "suffix": "" }, { "first": "Sakriani", "middle": [], "last": "Sakti", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Toda", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Nakamura", "suffix": "" } ], "year": 2015, "venue": "30th IEEE/ACM International Conference on Automated Software Engineering (ASE)", "volume": "", "issue": "", "pages": "574--584", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation (t). In 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 574-584.
IEEE.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Shellcodes database for study cases", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shellcodes database for study cases. Accessed: 2021-04-22. shell-storm.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Assembly Programming Tutorial", "authors": [ { "first": "", "middle": [], "last": "Tutorialspoint", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tutorialspoint. Accessed: 2021-04-22.
Assembly Programming Tutorial.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Klingner", "suffix": "" }, { "first": "Apurva", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Xiaobing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Yoshikiyo", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Hideto", "middle": [], "last": "Kazawa", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Stevens", "suffix": "" }, { "first": "George", "middle": [], "last": "Kurian", "suffix": "" }, { "first": "Nishant", "middle": [], "last": "Patil", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "Oriol Vinyals", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V.
Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Incorporating external knowledge through pre-training for natural language to code generation", "authors": [ { "first": "Frank", "middle": [ "F" ], "last": "Xu", "suffix": "" }, { "first": "Zhengbao", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Bogdan", "middle": [], "last": "Vasilescu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6045--6052", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.538" ] }, "num": null, "urls": [], "raw_text": "Frank F. Xu, Zhengbao Jiang, Pengcheng Yin, Bogdan Vasilescu, and Graham Neubig. 2020. Incorporating external knowledge through pre-training for natural language to code generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6045-6052, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Learning to mine aligned code and natural language pairs from stack overflow", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Edgar", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Bogdan", "middle": [], "last": "Vasilescu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "International Conference on Mining Software Repositories, MSR", "volume": "", "issue": "", "pages": "476--486", "other_ids": { "DOI": [ "10.1145/3196398.3196408" ] }, "num": null, "urls": [], "raw_text": "Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from stack overflow. In International Conference on Mining Software Repositories, MSR, pages 476-486. ACM.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A syntactic neural model for general-purpose code generation", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation.
CoRR, abs/1704.01696.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Reranking for neural semantic parsing", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4553--4559", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin and Graham Neubig. 2019. Reranking for neural semantic parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4553-4559.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "wordvar: resw 1 ; reserve a word for wordvar label instruction operand comment" }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Layout of NASM source line" }, "TABREF0": { "num": null, "html": null, "content": "