{ "paper_id": "1998", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:27:17.924197Z" }, "title": "", "authors": [], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "1998", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "The Xerox Research Centre Europe (http://www.xrce.xerox.com for more information) pursues a vision of document technology where language, physical location and medium -electronic, paper or other -impose no barrier to effective use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Our primary activity is research. Our second activity is a Program of Advanced Technology Development, to create new document services based on our own research and that of the wider Xerox community. We also participate actively in exchange programs with European partners.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Language issues cover important aspects in the production and use of documents. As such, language is a central theme of our research activities. More particularly, our Centre focuses on multilingual aspects of Natural Language Processing (NLP). Our current developments cover more than ten European languages and some non-European languages such as Arabic. Some of these developments are conducted through direct collaboration with academic institutions all over Europe.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The present article is an introduction to our basic linguistic components and to some of their multilingual applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The MLTT (Multilingual Theory and Technology) team creates basic tools for linguistic analysis, e.g. morphological analysers, taggers, parsing and generation platforms. These tools are used to develop descriptions of various languages and the relation between them. They are later integrated into higher level applications, such as terminology extraction, information retrieval or translation aid. The Xerox Linguistic Development Architecture (XeLDA) developed by the Advanced Technology Systems group incorporates the MLTT language technology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LINGUISTIC COMPONENTS", "sec_num": "1." }, { "text": "Finite-state technology is the fundamental technology on which Xerox language R&D is based. It encompasses both work on the basic calculus and on linguistic tools, in particular in the domain of morphology and syntax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LINGUISTIC COMPONENTS", "sec_num": "1." }, { "text": "The basic calculus is built on a central library that implements the fundamental operations on finite-state networks. It is based on long-term Xerox research, originated at PARC in the early 1980s. The most recent development in the finite-state calculus is the introduction of the replace operator. The replacement operation is defined in a very general way, allowing replacement to be constrained by input and output contexts, as in two-level rules but without the restriction of only single-symbol replacements. 
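As a purely illustrative sketch (not the Xerox calculus itself, which compiles such rules into transducers that can then be composed or unioned with others), the contextual behaviour of a replace rule of the form a -> b || L _ R can be imitated in Python with ordinary regular expressions; the rule and the example string below are invented:

```python
import re

def contextual_replace(text, target, replacement, left="", right=""):
    """Imitate a contextual replace rule  target -> replacement || left _ right :
    rewrite `target` only when it is preceded by `left` and followed by `right`.
    (Illustration only: the actual calculus compiles such rules into finite-state
    transducers, which can then be composed with other transducers.)"""
    pattern = "(?<=" + left + ")" + re.escape(target) + "(?=" + right + ")"
    return re.sub(pattern, replacement, text)

# Invented orthographic rule: 'y' becomes 'ie' between a consonant and the
# plural morpheme '+s', so that 'try+s' surfaces (after later rules) as 'tries'.
print(contextual_replace("try+s", "y", "ie",
                         left="[bcdfghjklmnpqrstvwxz]", right=r"\+s"))
# -> trie+s
```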
Replacements can be combined with other kinds of operations, such as composition and union, to form complex expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finite-state calculus", "sec_num": null }, { "text": "The finite-state calculus is widely used in our linguistic development, to create tokenisers, morphological analysers, noun phrase extractors, shallow parsers and other language-specific linguistic components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finite-state calculus", "sec_num": null }, { "text": "The MLTT work on morphology is based on the fundamental insight that word formation and morphological or orthographic alternation can be solved with the help of finite automata:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": null }, { "text": "1. the allowed combinations of morphemes can be encoded as a finite-state network; 2. the rules that determine the form of each morpheme can be implemented as finitestate transducers; 3. the lexicon network and the rule transducers can be composed into a single automaton, a lexical transducer, that contains all the morphological information about the language including derivation, inflection, and compounding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": null }, { "text": "Lexical transducers have many advantages. They are bi-directional (the same network for both analysis and generation), fast (thousands of words per second), and compact.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": null }, { "text": "We have created comprehensive morphological analysers for many languages including English, German, Dutch, French, Italian, Spanish, and Portuguese. More recent developments include Czech, Hungarian, Polish, Russian, Scandinavian languages and Arabic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": null }, { "text": "The general purpose of a part-of-speech tagger is to associate each word in a text with its morphosyntactic category (represented by a tag), as in the following example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part-of-speech tagging", "sec_num": null }, { "text": "This+PRON is+VAUX_3SG a+DET sentence+NOUN_SG .+SENT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part-of-speech tagging", "sec_num": null }, { "text": "The process of tagging consists in three steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part-of-speech tagging", "sec_num": null }, { "text": "1. tokenisation: break a text into tokens 2. lexical lookup: provide all potential tags for each token 3. disambiguation: assign to each token a single tag Each step is performed by an application program which uses language specific data:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part-of-speech tagging", "sec_num": null }, { "text": "\u2022 The tokenisation step uses a finite-state transducer to insert token boundaries around simple words (or multi-word expressions), punctuation, numbers, etc. \u2022 Lexical lookup requires a morphological analyser to associate each token with one or more readings. 
Unknown words are handled by a guesser which provides potential part-of-speech categories based on affix patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part-of-speech tagging", "sec_num": null }, { "text": "\u2022 Disambiguation is done with statistical methods (Hidden Markov Model), although we also experiment with fully rule-based methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part-of-speech tagging", "sec_num": null }, { "text": "For the purpose of terminology extraction from technical documents, we designed a tool which applies finite-state techniques to mark potential terms, especially noun phrases corresponding to given regular patterns. The noun-phrase extraction tool consists of several modules: language-independent programs (tokeniser, part-of-speech disambiguator, and noun phrase mark-up) and language-dependent data (finite-state transducers and transition probabilities). This modular architecture allows rapid extension to different languages. Currently, implementations for 8 languages (Dutch, English, French, German, Hungarian, Italian, Portuguese, Spanish) exist; more languages (e.g. Czech, Polish, Russian) are in preparation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Phrase Extraction", "sec_num": null }, { "text": "Noun phrase (NP) mark-up applies finite-state automata describing noun phrase patterns. These patterns rely on the simple (non-ambiguous) tagger output format, i.e. they consist of regular expressions on sequences of tokens and tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Phrase Extraction", "sec_num": null }, { "text": "A very simple noun phrase description for a given language (e.g. French) may consist of a (possibly empty) sequence of adjectives followed by a noun and another sequence of adjectives. The automata which describe noun phrases are compiled into the final NP mark-up. The compilation script uses the directed replace operation for the longest match and inserts brackets around maximal NPs (according to the NP patterns). The final NP-mark-up transducers are non-ambiguous, i.e.
for every input they provide a single output containing non-recursive bracketing for NPs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Phrase Extraction", "sec_num": null }, { "text": "The following examples from the current realisations for French, Dutch and Spanish illustrate the application of the complete chain of tokenising, part-of-speech disambiguation and noun phrase mark-up:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Phrase Extraction", "sec_num": null }, { "text": "Lorsqu'on tourne le commutateur de d\u00e9marrage sur la position auxiliaire, l'aiguille retourne alors \u00e0 z\u00e9ro.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Phrase Extraction", "sec_num": null }, { "text": "Lorsqu'/CONN on/PRON tourne/VERBP3SG le/DETSG NP{commutateur/NOUNSG de/PREPDE d\u00e9marrage/NOUNSG} sur/PREP la/DETSG NP{position/NOUNSG auxiliaire/ADJSG} ,/CM l'/DETSG NP{aiguille/NOUNSG} retourne/VERBP3SG alors/ADV \u00e0/PREPA NP{z\u00e9ro/NOUNSG} De reparatie-en afstelprocedures zijn bedoeld ter ondersteuning voor zowel de volledig gediplomeerde monteur als de monteur met, minder ervaring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Phrase Extraction", "sec_num": null }, { "text": "De/ART NP{reparatie-/CMPDPART en/CON afstelprocedures/NOUN} zijn/VAFIN bedoeld/VVPP ter/PREP NP{ondersteuning/NOUN} voor/PREP zowel/CON de/ART NP{volledig/ADJA gediplomeerde/ADJA monteur/NOUN} als/PREP de/ART NP{monteur/NOUN} met/PREP minder/INDDET NP{ervaring/NOUN} Para asegurar el funcionamiento \u00f3ptimo de los veh\u00edculos, as\u00ed como la seguridad personal del t\u00e9cnico, es imprescindible seguir los m\u00e9todos apropiados de trabajo y los procedimientos correctos de reparaci\u00f3n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Phrase Extraction", "sec_num": null }, { "text": "Para/PREP asegurar/VINF el/DETSG NP{funcionamiento/NOUNSG \u00f3ptimo/ADJSG de/PREP los/DETPL veh\u00edculos/NOUNPL},/COMA as\u00ed~como/CONJ la/DETSG NP{seguridad/NOUNSG personal/ADJSG del/PREPDET t\u00e9cnico/NOUNSG} ,/COMA es/AUX imprescindible/ADJSG seguir/VINF los/DETPL NP{m\u00e9todos/NOUNPL apropiados/VPASTPARTPL de/PREP trabajo/NOUNSG} y/CONJ los/DETPL NP{procedimientos/NOUNPL correctos/ADJPL de/PREP reparaci\u00f3n/NOUNSG} Naturally, in a terminology management application, noun phrase extraction leads only to the selection of candidate terms. This automatic selection remains to be validated by human terminologists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Phrase Extraction", "sec_num": null }, { "text": "Additionally, by combining monolingual NP extraction as described above with alignment techniques based on statistical methods, one may extend the application to bilingual terminology extraction. Candidate terms are first extracted independently for language A and B. Aligned terms are then spotted by evaluating how often a given bilingual pair of terms (T a , T b ) appears within aligned sentences. 
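The counting step can be pictured with the following sketch, which assumes that candidate terms have already been extracted sentence by sentence for both languages; ranking by raw co-occurrence frequency is a simplification of the statistical scoring actually used:

```python
from collections import Counter
from itertools import product

def paired_term_counts(aligned_sentences):
    """aligned_sentences: iterable of (terms_a, terms_b), where terms_a is the
    set of candidate terms found in a sentence of language A and terms_b the
    set found in the aligned sentence of language B."""
    counts = Counter()
    for terms_a, terms_b in aligned_sentences:
        counts.update(product(terms_a, terms_b))
    return counts

# Toy aligned corpus with French / English candidate noun phrases.
aligned = [
    ({"commutateur de demarrage"}, {"starter switch"}),
    ({"commutateur de demarrage", "position auxiliaire"},
     {"starter switch", "auxiliary position"}),
]
for (t_a, t_b), n in paired_term_counts(aligned).most_common():
    print(n, t_a, "<->", t_b)
```

In practice the raw pair counts would be weighted against the individual term frequencies before candidate pairs are proposed.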
Again, in terminology management, bilingual extraction as well as alignment needs to be further validated by human specialists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Phrase Extraction", "sec_num": null }, { "text": "Finite State Parsing is an extension of finite state technology to the level of phrases and sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental finite-state parsing", "sec_num": null }, { "text": "Our work concentrates on shallow parsing of unrestricted texts. We compute syntactic structures, without fully analysing linguistic phenomena that require deep semantic or pragmatic knowledge. For instance, PP-attachment, co-ordinated or elliptic structures are not always fully analysed. The annotation scheme remains underspecified with respect to yet unresolved issues. On the other hand, such phenomena do not cause parse failures, even on complex sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental finite-state parsing", "sec_num": null }, { "text": "Syntactic information is added at the sentence level in an incremental way, depending on the contextual information available at a given stage. The implementation relies on a sequence of networks built with the replace operator. The current system has been implemented for French and is being expanded to new languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental finite-state parsing", "sec_num": null }, { "text": "The parsing process is incremental in the sense that the linguistic description attached to a given transducer in the sequence relies on the preceding sequence of transducers, covers only some occurrences of a given linguistic phenomenon and can be revised at a later stage. The parser output can be used for further processing such as extraction of dependency relations over unrestricted corpora. In tests on French corpora (technical manuals, newspaper), precision is around 90-97% for subjects (84-88% for objects) and recall around 86-92% for subjects (80-90% for objects).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental finite-state parsing", "sec_num": null }, { "text": "LOCOLEX is an on-line bilingual comprehension dictionary, which aids the understanding of electronic documents written in a foreign language. It displays only the appropriate part of a dictionary entry when a user clicks on a word in a given context. The system disambiguates parts of speech and recognises multiword expressions such as compounds (e.g. heart attack), phrasal verbs (e.g. to nit pick), idiomatic expressions (e.g. to take the bull by the horns) and proverbs (e.g. birds of a feather flock together). In such cases LOCOLEX displays the translation of the whole phrase and not the translation of the word the user has clicked on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "For instance, someone may use a French/English dictionary to understand the following text written in French:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." 
}, { "text": "Lorsqu'on \u00e9voque devant les cadres la s\u00e9paration n\u00e9goci\u00e9e, les rumeurs fantaisistes vont apparemment toujours bon train.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "When the user clicks on the word cadres, LOCOLEX identifies its POS and base form. It then displays the corresponding entry, here the noun cadre, with its different sense indicators and associated translations. In this particular context, the verb reading of cadres is ignored by LOCOLEX. Actually, in order to make the entry easier to use, only essential elements are displayed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "cadre I: nm 1: *[constr,art] (of a picture, a window) frame 2: *(scenery) setting 3: *(milieu) surroundings 4: *(structure, context) framework 5: *(employee) executive 6: *(of a bike, motorcycle) frame", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "The word train in the same example above is part of a verbal multiword expression aller bon train. In our example, the expression is inflected and two adverbs have been stuck in between the head verb and its complement. Still LOCOLEX retrieves only the equivalent expression in English to be flying around and not the entire entry for train.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "train I: nm 5 : * [rumeurs] aller bon train : to be flying round LOCOLEX uses an SGML-tagged bilingual dictionary (the Oxford-Hachette French English Dictionary). To adapt this dictionary to LOCOLEX required the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "\u2022 Revision of an SGML-tagged Dictionary to build a disambiguated active dictionary (DAD); \u2022 Rewriting multi-word expressions as regular expressions using a special grammar;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "\u2022 Building a finite state machine which compactly associates index numbers with dictionary entries. The lookup process itself may be represented as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "\u2022 split the sentence string into words (tokenisation);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "\u2022 normalise each word to a standard form by changing cases and considering spelling variants;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "\u2022 identify all possible morpho-syntatic usages (base form and morpho-syntactic tags) for each word in the sentence;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." 
}, { "text": "\u2022 disambiguate the POS;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "\u2022 find relevant entries (including possible homographs or compounds) in the dictionary for the lexical form(s) chosen by the POS disambiguator;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "\u2022 use the result of the morphological analysis and disambiguation to eliminate irrelevant sections;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "\u2022 process the regular expressions to see if they match the word's actual context in order to identify special or idiomatic usages;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "\u2022 display to the user only the most appropriate translation based on the part of speech and surrounding context. Besides being an effective tool for comprehension, LOCOLEX could also be useful in the framework of language learning. LOCOLEX also demonstrates that existing on-line dictionaries, even when organised like a database rather than as a set of type-setting instructions, are not necessarily suitable for NLP applications. By adding grammar rules to the dictionary in order to describe the possible variations of multiword expressions, we add a dynamic feature to the dictionary: the SGML mark-up no longer points to text but to programs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LOCOLEX: a Machine Aided Comprehension Dictionary", "sec_num": "2.1." }, { "text": "Many of the linguistic tools developed at our Centre are used in applied research on multilingual information retrieval. Multilingual information retrieval allows the interrogation of texts written in a target language B by users asking questions in a source language A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Information Retrieval", "sec_num": "2.2." }, { "text": "In order to perform this retrieval, the following linguistic processing steps are performed on the documents and the query:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Information Retrieval", "sec_num": "2.2." }, { "text": "\u2022 Automatically recognise the language of the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Information Retrieval", "sec_num": "2.2." }, { "text": "\u2022 Perform the morphological analysis of the text using Xerox finite-state analysers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Information Retrieval", "sec_num": "2.2." }, { "text": "\u2022 Part-of-speech tag the words in the text using the preceding morphological analysis and the probability of finding part-of-speech tag paths in the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Information Retrieval", "sec_num": "2.2." }, { "text": "\u2022 Lemmatise, i.e. normalise or reduce to dictionary entry form, the words in the text using the part-of-speech tags. This morphological analysis, tagging, and subsequent lemmatisation of analysed words has proved to be as useful for information retrieval as any information-retrieval-specific stemming.
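A minimal sketch of this document-side normalisation is given below; the analyser, tagger and lemma lookup are hypothetical stand-ins for the finite-state components described in section 1:

```python
def index_terms(text, analyse, tag, lemma_of):
    """Sketch of the indexing pipeline: tokenise, morphologically analyse,
    disambiguate the part of speech, then index the lemma rather than the
    surface form.  The three callables stand in for the finite-state
    analyser, the HMM tagger and the lexical transducer."""
    tokens = text.split()                      # real tokenisation is FST-based
    readings = {t: analyse(t) for t in tokens}
    tagged = tag(tokens, readings)             # one (token, tag) pair per token
    return [lemma_of(token, pos) for token, pos in tagged]

# Toy stand-ins, only to make the sketch executable.
toy_lexicon = {("analysers", "NOUN"): "analyser", ("indexed", "VERB"): "index"}
analyse = lambda t: ["NOUN", "VERB"]
tag = lambda tokens, readings: [(t, readings[t][0]) for t in tokens]
lemma_of = lambda t, pos: toy_lexicon.get((t, pos), t.lower())

print(index_terms("Finite-state analysers normalise indexed words",
                  analyse, tag, lemma_of))
```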
To process a given query, an intermediate form of the query must be generated which maps the normalised language of the query onto the indexed text of the documents. This intermediate form can be constructed by replacing each word with target-language words through an on-line bilingual dictionary. The intermediate query, which is in the same language as the target documents, is passed along to a traditional information retrieval system, such as SMART 4 . This simple word-based method is the first approach we have been testing. Initial runs indicate that incorporating multi-word expression matching can significantly improve results. The multi-word expressions most interesting for information retrieval are terminological expressions, which most often appear as noun phrases in English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Information Retrieval", "sec_num": "2.2." }, { "text": "Digital libraries represent a new way of accessing information distributed all over the world, via a computer connected to the Internet. Whereas a physical library deals primarily with physical data, a digital library deals with electronic documents such as texts, pictures, sounds and video.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Callimaque: a collaborative project for virtual libraries", "sec_num": "2.3." }, { "text": "We expect more from a digital library than only the possibility of browsing its documents. A digital library front-end should provide users with a set of tools for querying and retrieving information, as well as for annotating pages of a document, defining hyper-links between pages or helping to understand multilingual documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Callimaque: a collaborative project for virtual libraries", "sec_num": "2.3." }, { "text": "Callimaque is one of our projects dealing with such new functionalities for digital libraries. More precisely, Callimaque is a collaborative project between the Xerox Research Centre and research/academic institutions of the Grenoble area (IMAG, INRIA, CICG). The goal is to build a virtual library that reconstructs the early history of information technology in France. The project is based on a similar project, the Class project, which was started by Cornell University several years ago under the leadership of Stuart Lynn to preserve brittle old books. The Class project runs over conventional networks and all scanned material is in English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Callimaque: a collaborative project for virtual libraries", "sec_num": "2.3." }, { "text": "The Callimaque project includes the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Callimaque: a collaborative project for virtual libraries", "sec_num": "2.3." }, { "text": "\u2022 Scanning and indexing around 1000 technical reports and 2000 theses written at \u2022 With a view to making these documents widely accessible, Xerox has developed software which allows access to this database by any client using the http protocol used by the World Wide Web. The base is thus accessible via any PC, Macintosh or UNIX workstation, or even from a simple ASCII terminal (the web address is http://callimaque.grenet.fr).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Callimaque: a collaborative project for virtual libraries", "sec_num": "2.3."
}, { "text": "\u2022 Print-on-demand facilities connected to the network allow users to make copies of the scanned material. This connection will subsequently evolve towards a high-speed ATM network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Callimaque: a collaborative project for virtual libraries", "sec_num": "2.3." }, { "text": "2.4.1. XTRAS Terminology Suite 2.4.1.1. TermFinder: Multilingual Terminology Extraction TermFinder enables the user to semi-automatically create multilingual terminology, yielding a substantial productivity increase over manual terminology creation. TermFinder is based on the linguistic components described above, especially the NP extraction tools and alignment. TermFinder supports Dutch, English, French, German, Italian, Spanish, and Portuguese. Any of these languages can be source or target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Xerox Translation and Authoring Systems (XTRAS)", "sec_num": "2.4." }, { "text": "In addition, Danish, Swedish, Finnish, Norwegian, Czech, Hungarian, Russian, Romanian, Polish, Arabic, Japanese and Korean are under development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Xerox Translation and Authoring Systems (XTRAS)", "sec_num": "2.4." }, { "text": "Built on top of Open Database Connectivity (ODBC), the database-independent layer from Microsoft, TermFinder is independent of any specific database. TermFinder supports SGML, HTML, XML, iso-8859-1 and Rich Text Format documents. TermManager is the complement to TermFinder. It enables one to quickly manage the terminology that was created with TermFinder. One can modify it, add terms, remove others, and add specific information. The Term In Context view enables users to see all occurrences of a term in the context of the original sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Xerox Translation and Authoring Systems (XTRAS)", "sec_num": "2.4." }, { "text": "TermManager uses several views to display the terminology: Form View, to view all the information related to a term; Table View, to see information related to several terms; and Dictionary View, to see terms that are related. One can define filters to see only a subset of the database, customise fonts and colours, and create one's own fields to store user-defined information.", "cite_spans": [], "ref_spans": [ { "start": 117, "end": 127, "text": "Table View", "ref_id": null } ], "eq_spans": [], "section": "Xerox Translation and Authoring Systems (XTRAS)", "sec_num": "2.4." }, { "text": "The terminology that has been built using TermFinder can then be used by TermChecker to provide authors with interactive feedback, helping them increase terminology consistency. This tool can be used both by the author for the source terminology and by the translator for the target terminology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TermChecker: Controlled Terminology Tool", "sec_num": "2.4.1.3." }, { "text": "TermChecker is fully integrated with word processors. It provides the same look and feel as the standard spell checker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TermChecker: Controlled Terminology Tool", "sec_num": "2.4.1.3." }, { "text": "The Multilingual Assistant provides translation of words in context, using a general or specialised dictionary. It can differentiate between similar expressions that should be translated differently (\"apply to\" vs. \"apply something to\").
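The kind of context test involved can be sketched as follows; the two patterns and the matching policy are invented for the example and are far simpler than the regular-expression grammar actually used for multiword entries:

```python
import re

# Invented context patterns: the second requires intervening material (an object)
# between the verb and "to", and therefore calls for a different translation.
PATTERNS = [
    ("apply to",           re.compile(r"\bappl(?:y|ies|ied|ying)\s+to\b", re.I)),
    ("apply something to", re.compile(r"\bappl(?:y|ies|ied|ying)\s+(?:\w+\s+){1,3}to\b", re.I)),
]

def matched_expression(sentence):
    """Return the most specific expression whose pattern matches the context."""
    hits = [name for name, pattern in PATTERNS if pattern.search(sentence)]
    return hits[-1] if hits else None      # later patterns are more specific

print(matched_expression("These rules apply to compound nouns."))     # apply to
print(matched_expression("She applied the same rule to compounds."))  # apply something to
```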
The Multilingual Assistant is based on the results of the LOCOLEX project described above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Assistant: Comprehension Aid Tool", "sec_num": "2.4.1.4." }, { "text": "Translation memory helps the translation process by recognising previously translated texts: the system \"keeps\" sentences that have been previously translated, together with their corresponding translation. When a new document has to be translated, or an updated version of an existing document, the translation memory can rapidly find identical or similar sentences and retrieve them for the translator to view. This saves unnecessary duplication of work for the translator whilst increasing the consistency and quality of translations. By cutting down on repetitious and routine work, Translation Memory frees the translator to focus on new texts and thereby reduces the overall time and cost of translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XTRAS Translation memory", "sec_num": "2.4.2." }, { "text": "[Figure: XTRAS components: TermChecker (Controlled Terminology), Database of terms, Online reader, Online Dictionary]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XTRAS Translation memory", "sec_num": "2.4.2." }, { "text": "The Filter: the filter receives the source document to be translated and parses it, extracting information about its structure, such as titles, styles, paragraph marks, etc. The process simultaneously extracts the text itself, plus some additional formatting, such as character style (bold, italic, underlined\u2026), in order to store as much data as possible and reduce the effort of the human translator. This format information is stored independently of the format of the input document and so can relate to parts of the text as well as to the whole text. Additional data can be added, such as page numbers, document identification, etc. The filter can read the most common document formats (RTF, SGML, HTML, MIF, Interleaf) and is thus word-processor independent. The filter reads character codes in English, French, German, Italian, Spanish, Portuguese and Dutch for the source documents. Any number of target languages can be supported when they are written in Unicode characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How does it work? XTRAS Translation Memory Overview", "sec_num": null }, { "text": "The input text is split into units of translation, which are stored in the translation memory database and normally consist of whole sentences and their formatting. This formatting is copied to the output sentences without any modifications. However, other pieces of text, such as titles, lists, figures, captions, etc., may also be treated as translation units and stored accordingly. A list of abbreviations is maintained to enable proper recognition of sentence boundaries, for example to avoid interpreting every occurrence of a period as the end of a sentence.
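A minimal sketch of such abbreviation-aware segmentation, with an invented abbreviation list and a deliberately naive splitting heuristic (the actual segmenter is finite-state based):

```python
# Invented abbreviation list; the real list is language-specific and user-extensible.
ABBREVIATIONS = {"etc.", "e.g.", "i.e.", "fig.", "Dr."}

def split_sentences(text):
    """Split on sentence-final punctuation unless the token is a known abbreviation."""
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        if token.endswith((".", "!", "?")) and token not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:
        sentences.append(" ".join(current))
    return sentences

print(split_sentences("Titles, lists, figures etc. are stored as units. See fig. 2 for details."))
# -> ['Titles, lists, figures etc. are stored as units.', 'See fig. 2 for details.']
```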
This list can be extended and modified. Translation Memory System: it performs several functions: - Manages the translation memory database (storage, administration, import/export); - Processes the source sentences by retrieving them from the translation memory and/or by retrieving similar sentences; - Retrieves the translation which has been stored for matching sentences (perfect matching) and, in the case of non-identical sentences (fuzzy matching or no match), generates a close translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation:", "sec_num": null }, { "text": "Storage and Administration: Documents to be translated are grouped together to form projects and assigned a manager who will define the characteristics of that project, for example by domain, customer, source language and target language. The manager can add/remove texts to/from the project, delete them, file them and merge two translation memories if required. The database for storage is computationally efficient and can maintain a large amount of information using a minimum of resources. The database holds pairs of sentences (source and target) together with the following history: the source of the sentence, the source and target languages of the sentence, the number of times the sentence occurs, when the sentence was written and by whom, and when it was last accessed and by whom. The sentences also carry their original format. As the storage facility can show these details, the project manager will have no trouble editing and cleaning up texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation:", "sec_num": null }, { "text": "-Import: Various data sources (text files) can be fed into the translation memory, including other translation memory systems (for example Trados or IBM TM/2) and bilingual dictionaries (by extracting translations).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation:", "sec_num": null }, { "text": "-Export: The data from the translation memory can be exported to a text file containing aligned sentences, and to documents using other translation memory systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation:", "sec_num": null }, { "text": "Input for the translation memory consists of sentences with some formatting information. Searches for these sentences can take place in more than one translation memory and can be defined and prioritised by the user, to obtain the best matches first. Any differences between the input sentence and the matching sentence are taken into account by the system and include: - formatting differences (some characters do not have the same style); - case differences; - punctuation differences; - word substitutions (changes in proper nouns, acronyms, numbers); - linguistic differences (one word has the same base form but not the same surface form: number, tense, gender); - insertion or deletion of one or more words (secondary words such as adverbs and adjectives differ, but the main words, e.g. verbs, are the same); - changes in the order of phrases; - changes in the order of words. Generation of Translation: If there is a difference between the match and the searched sentence, the aim is to find the closest possible target sentence and so minimise the work of the translator.
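Retrieval with fuzzy matching can be sketched as follows; the similarity ratio from Python's difflib is only a stand-in for the system's own matching, which also weighs the formatting, lexical and word-order differences listed above:

```python
import difflib
import string

def normalise(sentence):
    """Fold away the differences that matter least for matching: case and punctuation."""
    return sentence.lower().translate(str.maketrans("", "", string.punctuation))

def best_match(query, memory):
    """memory maps stored source sentences to their translations.  Returns the
    closest stored pair and its score: 1.0 is a perfect match, lower values
    correspond to fuzzy matches (or, below some threshold, to no match)."""
    scored = [
        (difflib.SequenceMatcher(None, normalise(query), normalise(source)).ratio(),
         source, translation)
        for source, translation in memory.items()
    ]
    score, source, translation = max(scored)
    return source, translation, score

memory = {
    "Turn the starter switch to the auxiliary position.":
        "Tournez le commutateur de demarrage sur la position auxiliaire.",
}
print(best_match("Turn the starter switch to the auxiliary position", memory))
```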
Translation memory can generate such a modified match if the difference is small, for example relating to punctuation, case or number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search and Retrieval:", "sec_num": null }, { "text": "The Translator's Workbench: The workbench is the store for sentences and their matches. It allows the translator to translate sentences that have not been found and to verify matches (perfect and fuzzy) that have been found in the translation memory. The workbench can take information from several translators and merge information from several documents. It provides a graphical interface which displays as much information as possible to help the translator work quickly and efficiently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search and Retrieval:", "sec_num": null }, { "text": "This software is available for research purposes at ftp://ftp.cs.cornell.edu/pub/smart.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Incremental finite-state parsing", "authors": [ { "first": "", "middle": [], "last": "A\u00eft-Mokhtar", "suffix": "" }, { "first": "", "middle": [], "last": "Salah", "suffix": "" }, { "first": "Jean-Pierre", "middle": [], "last": "Chanod", "suffix": "" } ], "year": 1997, "venue": "Proceedings of Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A\u00eft-Mokhtar, Salah and Chanod, Jean-Pierre (1997a): \"Incremental finite-state parsing\", in Proceedings of Applied Natural Language Processing 1997, Washington, DC.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Subject and Object Dependency Extraction Using Finite-State Transducers", "authors": [ { "first": "", "middle": [], "last": "A\u00eft-Mokhtar", "suffix": "" }, { "first": "", "middle": [], "last": "Salah", "suffix": "" }, { "first": "Jean-Pierre", "middle": [], "last": "Chanod", "suffix": "" } ], "year": 1997, "venue": "ACL workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A\u00eft-Mokhtar, Salah and Chanod, Jean-Pierre (1997b): \"Subject and Object Dependency Extraction Using Finite-State Transducers\", ACL workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications. Madrid.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "LOCOLEX: the translation rolls off your tongue", "authors": [ { "first": "D", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "F", "middle": [], "last": "Segond", "suffix": "" }, { "first": "A", "middle": [], "last": "Zaenen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the ACH-ALLC conference", "volume": "", "issue": "", "pages": "6--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bauer, D., Segond, F. and Zaenen, A. (1995): \"LOCOLEX: the translation rolls off your tongue.\" in Proceedings of the ACH-ALLC conference, Santa Barbara, pp. 
6-8.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Tagging French --comparing a statistical and a constraint-based method", "authors": [ { "first": "Jean-Pierre", "middle": [], "last": "Chanod", "suffix": "" }, { "first": "Pasi", "middle": [], "last": "Tapanainen", "suffix": "" } ], "year": 1995, "venue": "Seventh Conference of the European Chapter of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chanod, Jean-Pierre, Tapanainen, Pasi (1995): \"Tagging French --comparing a statistical and a constraint-based method\" in Seventh Conference of the European Chapter of the ACL. Dublin.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Explorations in Automatic Thesaurus Discovery", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grefenstette, Gregory (1994): Explorations in Automatic Thesaurus Discovery. Kluwer Academic Press, Boston.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The DECIDE project: Multilingual Collocation Extraction", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Heid", "suffix": "" }, { "first": "Thierry", "middle": [], "last": "Fontenelle", "suffix": "" } ], "year": 1996, "venue": "Seventh Euralex International Congress", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grefenstette, Gregory, Heid, Ulrich and Fontenelle, Thierry (1996): \"The DECIDE project: Multilingual Collocation Extraction.\" Seventh Euralex International Congress, University of Gothenburg, Sweden, Aug 13-18, 1996.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Probabilistic and Rule-based Tagger of an Inflective Language", "authors": [ { "first": "Barbara", "middle": [], "last": "Hladka", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hajic", "suffix": "" } ], "year": 1997, "venue": "Proceedings of Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hladka, Barbara and Hajic, Jan (1997): \"Probabilistic and Rule-based Tagger of an Inflective Language\" In Proceedings of Applied Natural Language Processing 1997", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Regular Models of Phonological Rule Systems", "authors": [ { "first": "Ronald", "middle": [ "M" ], "last": "Kaplan", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "", "pages": "3--331", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaplan, Ronald M. and Kay, Martin (1994): \"Regular Models of Phonological Rule Systems\". Computational Linguistics, 20:3 331-378.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Constructing Lexical Transducers", "authors": [ { "first": "Lauri", "middle": [], "last": "Karttunen", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 15th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karttunen, Lauri (1994): \"Constructing Lexical Transducers\". 
In Proceedings of the 15th International Conference on Computational Linguistics, Coling, Kyoto, Japan.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Replace Operator", "authors": [ { "first": "Lauri", "middle": [], "last": "Karttunen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL-95", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karttunen, Lauri (1995): \"The Replace Operator\". In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL-95) 16-23.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A General Computational Model for Word-Form Recognition and Production. Department of General Linguistics", "authors": [ { "first": "Kimmo", "middle": [], "last": "Koskenniemi", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koskenniemi, Kimmo (1983): \"A General Computational Model for Word-Form Recognition and Production. Department of General Linguistics\". University of Helsinki.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The dds tagger guide version 1.1", "authors": [ { "first": "Julian", "middle": [], "last": "Kupiec", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Wilkens", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kupiec, Julian and Wilkens, Mike (1994): The dds tagger guide version 1.1. Technical report, Xerox Palo Alto Research Center.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A method for disjunctive constraint satisfaction", "authors": [ { "first": "", "middle": [], "last": "Maxwell", "suffix": "" }, { "first": "John", "middle": [ "T" ], "last": "Iii", "suffix": "" }, { "first": "Ronald", "middle": [ "M" ], "last": "Kaplan", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "173--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maxwell, III, John T. and Kaplan, Ronald M. (1991): \"A method for disjunctive constraint satisfaction.\" In Tomita, Masaru (ed.), Current Issues in Parsing Technology. Kluwer Academic Publishers, Dordrecht, pp.173-190.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Reading more into Foreign Languages", "authors": [ { "first": "John", "middle": [], "last": "Nerbonne", "suffix": "" }, { "first": "", "middle": [], "last": "Karttunen", "suffix": "" }, { "first": "", "middle": [], "last": "Lauri", "suffix": "" }, { "first": "", "middle": [], "last": "Paskaleva", "suffix": "" }, { "first": "", "middle": [], "last": "Elena", "suffix": "" }, { "first": "", "middle": [], "last": "Proszeky", "suffix": "" }, { "first": "", "middle": [], "last": "Gabor", "suffix": "" }, { "first": "Tiit", "middle": [], "last": "Roosmaa", "suffix": "" } ], "year": 1997, "venue": "Proceedings of Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nerbonne, John, Karttunen, Lauri, Paskaleva, Elena, Proszeky, Gabor and Roosmaa, Tiit (1997): \"Reading more into Foreign Languages\". 
In Proceedings of Applied Natural Language Processing 1997 Washington, DC.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Multilingual Finite-State Noun Phrase Extraction", "authors": [ { "first": "Ann", "middle": [], "last": "Schiller", "suffix": "" } ], "year": 1996, "venue": "ECAI '96 workshop on", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schiller, Ann (1996): \"Multilingual Finite-State Noun Phrase Extraction.\" In: ECAI '96 workshop on \"Extended finite state models of language\", Budapest.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Using a finite-state based formalism to identify and generate multiword expressions", "authors": [ { "first": "F", "middle": [], "last": "Segond", "suffix": "" }, { "first": "P", "middle": [], "last": "Tapanainen", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Segond, F. and Tapanainen, P. (1995): Using a finite-state based formalism to identify and generate multiword expressions. Technical Report MLTT-019, Xerox Research Centre, Grenoble, 1995.", "links": null } }, "ref_entries": {} } }