{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:43:59.232473Z" }, "title": "Pimlico: A toolkit for corpus-processing pipelines and reproducible experiments", "authors": [ { "first": "Mark", "middle": [], "last": "Granroth-Wilding", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": {} }, "email": "mark.granroth-wilding@helsinki.fi" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present Pimlico, an open source toolkit for building pipelines for processing large corpora. It is especially focused on processing linguistic corpora and provides wrappers around existing, widely used NLP tools. A particular goal is to ease distribution of reproducible and extensible experiments by making it easy to document and rerun all steps involved, including data loading, pre-processing, model training and evaluation. Once a pipeline is released, it is easy to adapt, for example, to run on a new dataset, or to rerun an experiment with different parameters. The toolkit takes care of many common challenges in writing and distributing corpus-processing code, such as managing data between the steps of a pipeline, installing required software and combining existing toolkits with new, task-specific code.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present Pimlico, an open source toolkit for building pipelines for processing large corpora. It is especially focused on processing linguistic corpora and provides wrappers around existing, widely used NLP tools. A particular goal is to ease distribution of reproducible and extensible experiments by making it easy to document and rerun all steps involved, including data loading, pre-processing, model training and evaluation. Once a pipeline is released, it is easy to adapt, for example, to run on a new dataset, or to rerun an experiment with different parameters. The toolkit takes care of many common challenges in writing and distributing corpus-processing code, such as managing data between the steps of a pipeline, installing required software and combining existing toolkits with new, task-specific code.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "It is becoming more and more common for conferences and journals in NLP and other computational areas to encourage, or even require, authors to make publicly available the code and data required to reproduce their reported results. It is now widely acknowledged that such practices lie at the center of open science and are essential to ensuring that research contributions are verifiable, extensible and useable in applications. However, this requires extensive additional work. And, even when researchers do this, it is all too common for others to have to spend large amounts of time and effort preparing data, downloading and installing tools, configuring execution environments and picking through instructions and scripts before they can reproduce the original results, never mind apply the code to new datasets or build upon it in novel research. 
Whilst sometimes it may be sufficient to release a script that performs all of the data processing, model training and experimental evalua-tion steps, often this is not a practical approach to multi-stage processing of large corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present a toolkit, Pimlico (Pipelined Modular Linguistic Corpus processing), that addresses these problems. It allows users to write and run potentially complex processing pipelines, with the key goals of making it easy to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 clearly document what was done;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 incorporate standard NLP and data-processing tasks with minimal effort;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 integrate non-standard code, specific to the task at hand, into the same pipeline; and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 distribute code for later reproduction or application to other datasets or experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The toolkit is written in Python and released under the open source LGPLv3 license 1 . It comes with pre-defined modules to wrap a number of existing NLP toolkits (including non-Python code) and carry out many other common pre-processing or data manipulation tasks. Comprehensive documentation is maintained online 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we describe the core concepts that Pimlico is built around and some of its key features. We also describe a number of the core modules that come built into the toolkit and we present an example pipeline. Finally, we explain how the toolkit addresses the stated goals and outline plans for future development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pimlico addresses the task of building of pipelines to process large datasets. It allows you to run one or several steps of processing at a time, with highlevel control over how each step is run, manages [split] type=pimlico.modules.corpora.split input=tokenized_corpus set1_size=0.8 Figure 1 : Example configuration section specifying a single module in a pipeline. The module has a single input, taken from an earlier module's output, and a single parameter. the data produced by each step, and lets you observe these intermediate outputs. Pimlico provides simple, powerful tools to give this kind of control, without needing to write any code.", "cite_spans": [ { "start": 204, "end": 211, "text": "[split]", "ref_id": null } ], "ref_spans": [ { "start": 284, "end": 292, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Building pipelines", "sec_num": "2" }, { "text": "Developing a pipeline with Pimlico involves defining the structure of the pipeline itself in terms of modules to be executed and connections between their inputs and outputs describing the flow of data. Modules correspond to some data-processing code, with some parameters. They may be of a standard type, so-called core modules, for which code is provided as part of Pimlico. 
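For instance, a fragment of a conf file chaining two core modules might read as follows (the tokenizer module path here is illustrative; Fig. 1 and the online documentation give the exact forms):

[tokenized_corpus]
# hypothetical core tokenizer module: see the documentation for the real module path
type=pimlico.modules.text.tokenize
input=raw_text

[split]
type=pimlico.modules.corpora.split
input=tokenized_corpus
set1_size=0.8

Each section names a module, gives its type and connects its input to the output of an earlier module by name.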
A pipeline may also incorporate custom module types, for which metadata and data-processing code must be provided by the author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building pipelines", "sec_num": "2" }, { "text": "At the heart of Pimlico is the concept of a pipeline configuration, defined by a configuration (or conf ) file, which can be loaded and executed. This specifies some general parameters and metadata regarding the pipeline and then a sequence of modules to be executed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline configuration", "sec_num": "2.1" }, { "text": "Each pipeline module is defined by a named section in the file, which specifies the module type, inputs to be read from the outputs of other, previous modules, and parameters. For example, the configuration section in Fig. 1 defines a module called split. Its type is the core Pimlico module type corpus split 3 , which splits a corpus by documents into two randomly sampled subsets (as is typically done to produce training and test sets). The option input specifies where the module's only input comes from and refers by name to a module defined earlier in the pipeline whose output provides the data. The option set1 size tells the module to put 80% of documents into the first set and 20% in the second. Two outputs are produced, which can be referred to later in the pipeline as split.set1 and split.set2.", "cite_spans": [], "ref_spans": [ { "start": 218, "end": 224, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Pipeline configuration", "sec_num": "2.1" }, { "text": "The first module(s) of a pipeline have no inputs, but load datasets, with parameters to specify where the input data can be found on the filesystem. A number of standard input readers are among Pimlico's core module types to support reading of simple datasets, such as text files in a directory, and some standard input formats for data such as word embeddings. The toolkit also provides a factory to make it easy to define custom routines for reading other types of input data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline configuration", "sec_num": "2.1" }, { "text": "The type of a module is given as a fully qualified Python path to a Python package. The package provides separately the module type's metadata, referred to as its 'module info' -input datatypes, options, etc. -and the code that is executed when it is run, the 'module executor'. The example in Fig. 1 uses one of Pimlico's core module types. A pipeline will usually also include non-standard module types, distributed together with the conf file. These are defined and used in exactly the same way as the core module types. Where custom module types are used, the pipeline conf file specifies a directory where the source code can be found.", "cite_spans": [], "ref_spans": [ { "start": 294, "end": 300, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Pipeline configuration", "sec_num": "2.1" }, { "text": "An example of a complete pipeline conf, using both core and custom module types, is shown in Fig. 2 and is described in more detail in Section 6.", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 99, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Pipeline configuration", "sec_num": "2.1" }, { "text": "When a module is run, its output is stored ready for use by subsequent modules. 
Pimlico takes care of storing each module's output in separate locations and providing the correct data as input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datatypes", "sec_num": "2.2" }, { "text": "The module info for a module type defines a datatype for each input and each output. Pimlico includes a system of datatypes for the datasets that are passed between modules. When a pipeline is loaded, type-checking is performed on the connections between modules' outputs and subsequent modules' inputs to ensure that appropriate datatypes are provided.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datatypes", "sec_num": "2.2" }, { "text": "For example, a module may require a vocabulary as an input, for which Pimlico provides a standard datatype. The pipeline will only pass checks if this input is connected to an output that supplies a compatible type. The supplying module does not need to define how to store a vocabulary, since the datatype defines the necessary routines for writing a vocabulary to disk. The subsequent module does not need to define how to read the data, since the datatype takes care of that too, providing the module executor with suitable Python data structures. Figure 2 : Full example pipeline which loads a dataset from raw text files, tokenizes it and applies some custom processing. The file, together with the source code for the custom module type, are available at https: //github.com/markgw/pimlico/tree/master/examples. Alongside is a graphical representation of the pipeline structure.", "cite_spans": [], "ref_spans": [ { "start": 551, "end": 559, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Datatypes", "sec_num": "2.2" }, { "text": "Often modules read and write corpora, consisting of a large number of documents. Pimlico provides a datatype for representing such corpora and a further type system for the types of the documents stored within a corpus (rather like Java's generic types). For example, a module may specify that it requires as input a corpus whose documents contain tokenized text. All tokenizer modules (of which there are several) provide output corpora with this document type. The corpus datatype takes care of reading and writing large corpora, preserving the order of documents, storing corpus metadata, and much more.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datatypes", "sec_num": "2.2" }, { "text": "The datatype system is also extensible in custom code. As well as defining custom module types, a pipeline author may wish to define new datatypes to represent the data required as input to the modules or provided as output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datatypes", "sec_num": "2.2" }, { "text": "Pimlico provides a command-line interface for parsing and executing pipelines. The interface provides sub-commands to perform different operations relating to a given pipeline. The conf file defining the pipeline is always given as an argument and the first operation is therefore to parse the pipeline and check it for validity. We describe here a few of the most important sub-commands.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the pipeline", "sec_num": "2.3" }, { "text": "status. Outputs a list of all of the modules in the pipeline, reporting the execution status of each. This indicates whether the module has been run; if so, whether it completed successfully or failed; if not, whether it is ready to be run (i.e. 
all of its input data is available).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the pipeline", "sec_num": "2.3" }, { "text": "Each of the modules is numbered in the list, and this number can be used instead of the module's full name in arguments to all sub-commands.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the pipeline", "sec_num": "2.3" }, { "text": "Given the name of a module, the command out-puts a detailed report on the status of that module and its input and output datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the pipeline", "sec_num": "2.3" }, { "text": "run. Executes a module. An option --dry runs all pre-execution checks for the module, without running it. These include checking that required software is installed (see Section 3.2) and performing automatic installation if not. If all requirements are satisfied, the module will be executed, outputting its progress to the terminal and to module-specific log files. Output datasets are written to module-specific directories, ready to be used by subsequent modules later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the pipeline", "sec_num": "2.3" }, { "text": "Multiple modules can be run in sequence, or even the entire pipeline. A switch --all-deps causes any unexecuted modules upon whose output the specified module(s) depend to be run.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the pipeline", "sec_num": "2.3" }, { "text": "browse. Inspects the data output by a module, stored in its pipeline-internal storage. Inspecting output data by loading the files output by the module would require knowledge of both the Pimlico data storage system and the specific storage formats used by the output datatypes. Instead, this command lets the user inspect the data from a given module (and a given output, if there are multiple).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the pipeline", "sec_num": "2.3" }, { "text": "Datatypes, as part of their definition, along with specification of storage format reading and writing, define how the data can be formatted for display. Multiple formatters may be defined, giving alternative ways to inspect the same data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the pipeline", "sec_num": "2.3" }, { "text": "For some datatypes, browsing is as simple as outputting some statistics about the data, or a string representing its contents. For corpora, a documentby-document browser is provided, using the Urwid 4 library. Furthermore, the definition of corpus document types determines how an individual document should be displayed in the corpus browser. For example, the tokenized text type shows each sentence on a separate line, with spaces between tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the pipeline", "sec_num": "2.3" }, { "text": "A common type of module is one that takes input from one or more corpora, applies some independent processing to each document in turn and outputs a new corpus containing the processed data for the same set of documents. For example, we might lower-case the text of each document; map words to IDs from a vocabulary; or perform document-level topic inference using a pre-trained topic model. Pimlico makes it easy to define such modules, referred to as document map modules. 
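The processing for such a module boils down to a single function applied to one document at a time; as a minimal sketch (the function name and signature are illustrative, not Pimlico's actual factory interface):

def process_document(doc):
    # per-document processing: lower-case the text of one document
    return doc.lower()
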
The module executor can be defined using a factory, simply specifying a function to be applied independently to each document. It may also define pre-and postprocessing functions to be run before and after the document mapping process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document map modules", "sec_num": "2.4" }, { "text": "Such modules lend themselves naturally to parallelization, since separate documents can be processed independently by worker processes in a pool. When a document map module is defined using the factory, this simple type of parallelization is provided by default, using Python's multiprocessing module. The user simply needs to specify when running a module how many processes Pimlico should use and this number of workers will be launched to process documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document map modules", "sec_num": "2.4" }, { "text": "Furthermore, any document map module can be set to run in filter mode, using the filter=T option. This causes its processing to be performed on the fly as required by subsequent modules, instead of being stored to disk. The module then no longer appears in the list of executable modules, since it will be executed as necessary to provide inputs to subsequent modules when they are run. If an output corpora is used a number of times, this approach is inefficient, but if not, and especially if the per-document processing is fast, this can lead to a more streamlined workflow.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document map modules", "sec_num": "2.4" }, { "text": "3 Some key features", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document map modules", "sec_num": "2.4" }, { "text": "Data output by a module is stored ready for other modules to use. Pimlico manages storage locations specific to the pipeline, module and output, and provides the correct version of the data to modules that use the data as input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data management", "sec_num": "3.1" }, { "text": "Pimlico can be configured to use any location on the filesystem for pipeline-internal storage. Beyond this, the user does not need to concern themselves with the storage structure, nor data storage formats, which are managed by the datatype system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data management", "sec_num": "3.1" }, { "text": "The command-line interface provides a reset command to remove the output data of a given module and any subsequent modules that depend on it. This is useful, for example, if changing a module parameter and rerunning it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data management", "sec_num": "3.1" }, { "text": "Executing a module will often depend on having some software installed. This may be Python pack- ages, for pure Python modules, or other types of software. For example, Pimlico's core modules include wrappers around the OpenNLP Java toolkit, so running modules of one of these types requires the Java Runtime Environment (JRE) as well as the OpenNLP jar packages. Pimlico includes a software dependency management system. Software dependencies of many different types can be defined, such as Python packages, Java libraries, compiled C++ binaries and so on. 
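The simplest case is a Python package installable with Pip; the following self-contained sketch (not Pimlico's own classes) illustrates what such a dependency amounts to:

import importlib.util
import subprocess
import sys

class PipDependency:
    # conceptual sketch of a dependency definition: a check routine plus an install routine
    def __init__(self, package):
        self.package = package

    def available(self):
        # test whether the package can already be imported in this environment
        return importlib.util.find_spec(self.package) is not None

    def install(self):
        # install locally, e.g. into the pipeline's virtual environment
        subprocess.check_call([sys.executable, '-m', 'pip', 'install', self.package])
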
A software dependency definition includes a routine to test whether the software is available and, wherever possible, a routine to automatically install the software in a location that is local to the pipeline's execution environment. For example, Python dependencies can be simply defined by reference to a Pip 5 package, which can be automatically downloaded and installed within a Python virtual environment using the Pip library.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Software dependencies", "sec_num": "3.2" }, { "text": "Each module type lists software that it depends on to run as part of its module info. When the user attempts to run a module or checks whether it is ready to run (using the run subcommand, Section 2.3), Pimlico checks all the dependencies and installs the necessary software by running the installation routine. A module's executor is strictly separated from its module info and is not loaded until all dependency checks are passed. This allows a module type programmer to freely write code within the executor that loads dependent libraries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Software dependencies", "sec_num": "3.2" }, { "text": "For example, the core module for training topic models using the Gensim toolkit (\u0158eh\u016f\u0159ek and Sojka, 2010) can only be run when the Gensim Python library is installed. Its module info declares this dependency. When a user attempts to run a module of this type in a pipeline, Pimlico uses Pip to automatically install the library before executing. In this way, another user subsequently receiving the pipeline does not need to make sure that they 5 https://pip.pypa.io/en/stable/ have installed this package on their system before running the pipeline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Software dependencies", "sec_num": "3.2" }, { "text": "Requirements of specific versions of dependencies are currently supported for some types of dependencies. In future, this will be extended, including more sophisticated handling of conflicting versions within a pipeline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Software dependencies", "sec_num": "3.2" }, { "text": "Examples so far have been of linear pipelines, where each module's output feeds into the input for the next. Pipeline structures are not restricted to this: they may branch arbitrarily by defining multiple modules that take input from the same source, or combine branches with a single module that takes multiple inputs. Several tools are provided to assist concise definition of complex pipeline structures. One we describe here is module alternatives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Module alternatives", "sec_num": "3.3" }, { "text": "Consider a hypothetical module type, used in Fig. 3 , that takes one input corpus and trains a machine learning model on the data. It has a parameter layers which takes a numeric value. We wish to train models with several different values for this parameter and apply the same evaluation to each.", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 51, "text": "Fig. 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Module alternatives", "sec_num": "3.3" }, { "text": "We could do this by defining multiple modules of this type, each training a different model. We would then need to duplicate the subsequent evaluation module to create a version for each model. Pimlico provides a more concise way to do this. 
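In the conf file, the training and evaluation modules of Fig. 3 might be written along the following lines (the module type paths are hypothetical, as in the figure):

[model_train]
# hypothetical module type, as in Fig. 3
type=myproject.modules.train_model
input=training_corpus
layers=5|10

[model_eval]
type=myproject.modules.eval_model
input=model_train
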
We define one module, model train, and specify a list of alternative values for the layers parameter: layers=5|10. The module is automatically expanded into multiple modules, one for each parameter value. Each is given a distinct name, which may be specified explicitly or automatically generatedmodel train[5] and model train [10] .", "cite_spans": [ { "start": 569, "end": 573, "text": "[10]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Module alternatives", "sec_num": "3.3" }, { "text": "Subsequent modules can also be expanded automatically, propagating the set of alternatives through the pipeline to create separate branches. In our example, we define a single evaluation module model eval, which declares its input to come from model train (the name of the training module prior to expansion). This is expanded into model eval[5] and model eval [10] , each alternative taking input from the respective model training module.", "cite_spans": [ { "start": 361, "end": 365, "text": "[10]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Module alternatives", "sec_num": "3.3" }, { "text": "Further details of expansion, combination and naming of module alternatives are given in the documentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Module alternatives", "sec_num": "3.3" }, { "text": "In a pipeline that processes a large corpus, it can take hours or even days to run a single module. While developing and testing the pipeline, it is not convenient to blindly write the entire configuration and module code without testing, or to have to execute long-running modules simply to get some input data to test custom code later in the pipeline. Pimlico provides a solution: pipeline variants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline variants", "sec_num": "3.4" }, { "text": "Variants are independent pipelines, sharing no internal datasets or state, defined by a single config file. A special syntax can be used in the file to prefix lines that are to appear only in a specific variant. Other lines are included in all variants. This can be used to set different values of module parameters in different variants, or even include whole modules in only one variant. When Pimlico is run, it will by default load the standard variant, always called 'main'. A command-line option can specify another variant to load.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline variants", "sec_num": "3.4" }, { "text": "The most common use of this is to define a small variant, which only processes a small subset of the input data. It may do this, for example, by setting parameters of the input reader, or including a subset module to truncate the corpus. The entire pipeline can then be run to test configuration and custom code and sanity-check the resulting datasets, before setting the pipeline running on the full dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline variants", "sec_num": "3.4" }, { "text": "Other uses of this feature include running an identical pipeline on different input corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline variants", "sec_num": "3.4" }, { "text": "One of the key problems that Pimlico sets out to solve is the difficulty of distributing code in a way that makes it easy for others to reproduce and extend the processing. It achieves this by making the full processing pipeline explicit in the pipeline conf file. 
It is therefore crucial that (a) it is easy to distribute all the necessary files to re-run a pipeline; and (b) it is easy for someone else, given these files to get the pipeline running.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Code distribution", "sec_num": "4" }, { "text": "Three elements of a pipeline need to be distributed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Releasing pipelines", "sec_num": "4.1" }, { "text": "(1) a full description of the processing pipeline;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Releasing pipelines", "sec_num": "4.1" }, { "text": "(2) any code needed to run the pipeline that is not part of a standard library; and (3) input data. (1) is trivial with Pimlico, since a pipeline's conf file is all that is needed. (2) requires simply that all code in the path from which custom code is loaded is distributed. This can simply be packaged into an archive together with the conf file. Pimlico's source code does not need to be distributed, since it can be downloaded as necessary. Other libraries will generally be downloaded and installed automatically by Pimlico when the pipeline is to be run, as described in Section 3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Releasing pipelines", "sec_num": "4.1" }, { "text": "Pimlico does not attempt to address the distribution of datasets used as input data. It is usually appropriate to distribute these separately in a way that respects licenses and handles distribution of large files. Much of the time, input data is not specific to a pipeline, but comes from existing corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Releasing pipelines", "sec_num": "4.1" }, { "text": "Upon receiving the files providing (1) and 2above, you can use Pimlico's bootstrap tool to set up a working environment for running the pipeline. A Python script, bootstrap.py, is available from the online documentation. This reads the config file to check what version of Pimlico was used when it was originally run and downloads the same release. It then prompts Pimlico to set up a Python virtual environment and install core software dependencies. After this, the pipeline is ready to be loaded and run.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using and extending pipelines", "sec_num": "4.2" }, { "text": "Having loaded a pipeline and set up the environment, it is easy to extend or adjust the pipeline to run further experiments or build on the previous work. New modules can be added and parameters to the existing modules changed. Pimlico's system of standardized internal datatypes for passing data between modules also makes it straightforward to apply the same pipeline to a different dataset. All that is required is a suitable input reader for the new data (see Section 2.1). This supplies the dataset in a standard, pipeline-internal format, so the rest of the pipeline can be run without modification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using and extending pipelines", "sec_num": "4.2" }, { "text": "Pimlico comes with a large number of core module types, for which a pipeline author needs to write no code, but simply define the module configuration in their config file. This set is being constantly expanded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core module types", "sec_num": "5" }, { "text": "The following list gives some examples of core module types provided with Pimlico. 
The full list is available in the documentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core module types", "sec_num": "5" }, { "text": "\u2022 Generic corpus manipulation, including shuffling, concatenation, truncation, subsampling, random splitting", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core module types", "sec_num": "5" }, { "text": "\u2022 Vocabulary building, word-to-ID mapping \u2022 Gensim topic model training (\u0158eh\u016f\u0159ek and Sojka, 2010) \u2022 Malt dependency parsing (Nivre et al., 2006) \u2022 OpenNLP 6 tokenization, POS tagging, constituency parsing", "cite_spans": [ { "start": 72, "end": 97, "text": "(\u0158eh\u016f\u0159ek and Sojka, 2010)", "ref_id": null }, { "start": 124, "end": 144, "text": "(Nivre et al., 2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Core module types", "sec_num": "5" }, { "text": "\u2022 Word embedding (Mikolov et al., 2013) loading, manipulation, storing", "cite_spans": [ { "start": 17, "end": 39, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Core module types", "sec_num": "5" }, { "text": "\u2022 Word embedding training using word2vec (Mikolov et al., 2013) and fastText (Mikolov et al., 2018) \u2022 Text normalization (lower-casing, etc.)", "cite_spans": [ { "start": 41, "end": 63, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF5" }, { "start": 77, "end": 99, "text": "(Mikolov et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Core module types", "sec_num": "5" }, { "text": "\u2022 Scikit-learn classifier training (Pedregosa et al., 2011) The core module types also serve as a reference for defining custom module types. For example, the current release contains several module types wrapping tools from OpenNLP, but not coreference resolution. If a user wishes to use the OpenNLP coreference resolver, it is a relatively simple matter to define a custom module in their own source directory, using one of the existing wrappers as a model.", "cite_spans": [ { "start": 35, "end": 59, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Core module types", "sec_num": "5" }, { "text": "An example of a full pipeline config file is shown in Fig. 2 . This simple pipeline loads a corpus from a directory containing text files, each representing a single document. It applies tokenization to each document using the core document-map module that wraps spaCy's tokenizer.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 60, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "A worked example", "sec_num": "6" }, { "text": "Then it applies some custom processing to the tokens of each document, using a module type defined specifically for this pipeline and found in 6 https://opennlp.apache.org/ the accompanying source directory 7 . The resulting corpus is finally passed through the core vocabulary builder, which builds a vocabulary from all the words used in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A worked example", "sec_num": "6" }, { "text": "Pimlico itself is released under the GNU LGPLv3 license. However, it provides access to a large number of software packages, with a wide range of different licenses. 
Software dependencies are installed only when required, so use of Pimlico does not fall under the terms of all of these -only those required by the modules of the user's pipeline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Software licenses", "sec_num": "7" }, { "text": "It can be important to know what licenses apply to all the code used by a pipeline. The Pimlico codebase keeps track of the licenses that apply to software that may be installed to support the use of the core module types. The command licenses produces a list of the licenses of all of the software used by a given pipeline, or alternatively just particular modules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Software licenses", "sec_num": "7" }, { "text": "Some proprietary tools exist for similar purposes to Pimlico 8 . However, the use of a proprietary tool to build a pipeline in itself precludes easy replication and extension by other authors, so we focus here on open source tools.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related toolkits", "sec_num": "8" }, { "text": "Two recently released examples of toolkits for building NLP pipelines are Forte and PSI. Forte (Liu et al., 2020) is constructed around similar concepts to Pimlico and it too provides wrappers around other NLP toolkits. PSI (Gralinski et al., 2012 , Platform for Situated Intelligence) is similar in its goals and design to Forte. Pimlico's focus is on control of the execution of static pipelines to process large datasets and the management of the data as it passes through the pipeline. For these purposes, it provides a powerful set of tools not built into other toolkits. It does not provide facilities to run pipelines in a way that can be dynamically integrated into other systems. We see this as a distinct use case with different design requirements, one that is well catered for by toolkits like Forte and PSI.", "cite_spans": [ { "start": 95, "end": 113, "text": "(Liu et al., 2020)", "ref_id": "BIBREF2" }, { "start": 224, "end": 247, "text": "(Gralinski et al., 2012", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related toolkits", "sec_num": "8" }, { "text": "Many other toolkits focus specifically on NLP tools, allowing models to be trained and applied for standard NLP tasks. Some provide their own structures for defining pipelines that chain multiple tasks (e.g., Qi et al., 2020; Manning et al., 2014; Honnibal and Montani, 2017) . Pimlico provides a general framework for processing of large datasets, incorporating NLP tasks by providing wrappers around toolkits such as these. Unlike with these toolkits, data loading, pre-and post-processing can be handled in a single pipeline definition, requiring minimal (or no) code to be written.", "cite_spans": [ { "start": 209, "end": 225, "text": "Qi et al., 2020;", "ref_id": "BIBREF8" }, { "start": 226, "end": 247, "text": "Manning et al., 2014;", "ref_id": "BIBREF3" }, { "start": 248, "end": 275, "text": "Honnibal and Montani, 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related toolkits", "sec_num": "8" }, { "text": "Other general toolkits exist for building and running data-processing pipelines, such as Bonobo 9 . An alternative approach to developing Pimlico would have been to define a library of modules for NLP-specific tasks that could be used from such a toolkit. 
We chose instead to develop an infrastructure tuned to the type of corpus processing and data management that is typical in NLP experiments and tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related toolkits", "sec_num": "8" }, { "text": "We have introduced the Pimlico toolkit for building pipelines for processing large corpora. We set out to address four key goals in improving the process of writing, running and distributing pipelines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "1. Pimlico provides clear documentation of pipelines in the form of a simple definition in a text file, containing pipeline structure and parameters for every step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "2. It is easy to incorporate standard NLP tasks using the core modules provided with the toolkit, for which only a definition of inputs and parameters is required. Among these are wrappers for commonly used NLP toolkits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "3. Integrating custom code into the pipeline is straightforward, by defining custom module types. An extensive array of factories, tools and templates means that typically only a small amount of code is required beyond the code to be executed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "4. The resulting pipeline definition and code can easily be packaged and distributed. Tools are provided to make the process of setting up the execution environment and installing software quick and simple. It is then possible to extend or adjust the pipeline by editing the conf file, or apply to other datasets by replacing input modules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "9 https://www.bonobo-project.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "The toolkit effectively addresses common problems encountered in using NLP tools to process large datasets, releasing code for experiments or other corpus processing for others to use, and running someone else's released code in a new environment or on new data. As such, we present it as a key contribution to free distribution of code to accompany NLP research and replicability of experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "Pimlico is under active development and new features are constantly being added. Several planned enhancements are worth noting in particular.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "9.1" }, { "text": "We plan to continue to expand the set of core modules to include wrappers around other NLP and machine learning toolkits. Many excellent new NLP toolkits have been released in recent years and have yet to be wrapped by core Pimlico modules, or have only partial wrappers. In many case, the addition of a wrapper is quick and requires only a small amount of code. 
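Purely as an indication of the scale of such a wrapper, its metadata typically amounts to a short declaration of inputs, outputs and options, roughly along these lines (a sketch only: the import path, class and attribute names here are assumptions and should be checked against the Pimlico documentation):

# assumed import path; see the Pimlico documentation for the real base class
from pimlico.core.modules.base import BaseModuleInfo

class ModuleInfo(BaseModuleInfo):
    # hypothetical wrapper around an external tokenizer
    module_type_name = 'external_tokenizer'
    module_inputs = [('text', ...)]    # '...' stands for an input corpus datatype
    module_outputs = [('tokens', ...)]  # likewise, an output corpus datatype
    module_options = {'lowercase': {'type': bool, 'default': False}}

A matching executor class then supplies the code that calls the wrapped tool.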
Further commonly used preprocessing methods not currently covered by core modules, like Byte-Pair Encoding, would make pipeline development for modern NLP methods faster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "9.1" }, { "text": "Pimlico includes a number of input readers for standard formats in which corpora are stored. However, many different formats are used for NLP corpora, often specific to one corpus. We plan to expand the set of core input reader modules, to allow more corpora to be read into a pipeline without requiring custom module code.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "9.1" }, { "text": "Modules currently assume that a corpus is a fixed unit, with a known size. Whilst this is often the case, there are exceptions. For example, if data is generated on the fly, a corpus could in effect have an infinite length. In future, it may be desirable to extend Pimlico's conception of a corpus to cover such cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "9.1" }, { "text": "Pipeline development and use could be helped by a visual tool to inspect pipeline structure and execution status. This could take the form of a tool to output images like those in the figures of this paper, or an interactive graphical interface as an alternative to the command-line interface.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "9.1" }, { "text": "We plan to add a system similar to the management of software dependencies for fetching pretrained models. For example, OpenNLP provides models for a number of languages for some of its components. Currently, the user must download these models themselves in order to be able to run a module that uses them. The specification of which model to use, however, is part of the pipeline config. The new model management system would be able to download the models prior to running the module in question, just as software dependencies are downloaded and installed automatically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "9.1" }, { "text": "We have chosen not to build into the toolkit any system for storing, fetching or managing input data. However, corpora are increasingly available online in standard repositories and formats, thanks to projects like Hugging Face 10 . Pipelines using such corpora could include a specification of where their input data can be retrieved from, such that it could be automatically downloaded as part of the execution process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "9.1" }, { "text": "Pimlico has been developed to support work in a number of different projects. It has been supported by: European Commission FP7 framework grant 611560 (WHIM); the Academy of Finland grant 12933481 (Digital Language Typology). European Union Horizon 2020 research and innovation programme grants 770299 (NewsEye) and 825153 (EMBEDDIA).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "10" }, { "text": "https://github.com/markgw/pimlico/ 2 https://pimlico.readthedocs.io/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://pimlico.readthedocs.io/en/ latest/modules/pimlico.modules.corpora. 
split.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://urwid.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The source code is not shown here, but the full example, including code, can be found in the documentation.8 For example, I2E, https://www.linguamatics. com/products/i2e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Psi-toolkit: A natural language processing pipeline", "authors": [ { "first": "Filip", "middle": [], "last": "Gralinski", "suffix": "" }, { "first": "Krzysztof", "middle": [], "last": "Jassem", "suffix": "" }, { "first": "Marcin Junczys-Dowmunt", "middle": [], "last": "", "suffix": "" } ], "year": 2012, "venue": "Computational Linguistics", "volume": "458", "issue": "", "pages": "27--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filip Gralinski, Krzysztof Jassem, and Marcin Junczys- Dowmunt. 2012. Psi-toolkit: A natural language processing pipeline. Computational Linguistics, 458:27-39.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", "authors": [ { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Ines", "middle": [], "last": "Montani", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Forte: Composing Diverse NLP tools For Text Retrieval", "authors": [ { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Avinash", "middle": [], "last": "Bukkittu", "suffix": "" }, { "first": "Mansi", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Pengzhi", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Swapnil", "middle": [], "last": "Singhavi", "suffix": "" }, { "first": "Atif", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Zecong", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" }, { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2020, "venue": "Analysis and Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengzhong Liu, Avinash Bukkittu, Mansi Gupta, Pengzhi Gao, Swapnil Singhavi, Atif Ahmed, Wei Wei, Zecong Hu, Haoran Shi, Eric P. Xing, and Zhit- ing Hu. 2020. 
Forte: Composing Diverse NLP tools For Text Retrieval, Analysis and Generation.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "Association for Computational Linguistics (ACL) System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Advances in pre-training distributed word representa-10 https://huggingface.co/datasets tions", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Puhrsch", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Ad- vances in pre-training distributed word representa- 10 https://huggingface.co/datasets tions. In Proceedings of the International Confer- ence on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Linguistic regularities in continuous space word representations", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "746--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751, Atlanta, Georgia. 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Maltparser: A data-driven parser-generator for dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: A data-driven parser-generator for de- pendency parsing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Stanza: A Python natural language processing toolkit for many human languages", "authors": [ { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuhui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Bolton", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Software Framework for Topic Modelling with Large Corpora", "authors": [ { "first": "Radim", "middle": [], "last": "\u0158eh\u016f\u0159ek", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Example pipeline fragment defining a module with alternative values for an option. The diagram shows how the two modules are expanded into branches for the alternatives.", "num": null, "uris": null } } } }