{ "paper_id": "C16-1044", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:02:17.264875Z" }, "title": "Inducing Multilingual Text Analysis Tools Using Bidirectional Recurrent Neural Networks", "authors": [ { "first": "Othman", "middle": [], "last": "Zennaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "LIST", "location": { "settlement": "Gif-sur-Yvette", "region": "LVIC", "country": "France" } }, "email": "othman.zennaki@cea.fr" }, { "first": "Nasredine", "middle": [], "last": "Semmar", "suffix": "", "affiliation": { "laboratory": "", "institution": "CEA", "location": { "settlement": "Gif-sur-Yvette", "country": "France" } }, "email": "nasredine.semmar@cea.fr" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "", "affiliation": { "laboratory": "LIG, Univ. Grenoble-Alpes Grenoble", "institution": "", "location": { "country": "France" } }, "email": "laurent.besacier@imag.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This work focuses on the rapid development of linguistic annotation tools for resource-poor languages. We experiment several cross-lingual annotation projection methods using Recurrent Neural Networks (RNN) models. The distinctive feature of our approach is that our multilingual word representation requires only a parallel corpus between source and target languages. More precisely, our method has the following characteristics: (a) it does not use word alignment information, (b) it does not assume any knowledge about foreign languages, which makes it applicable to a wide range of resource-poor languages, (c) it provides truly multilingual taggers. We investigate both uni-and bi-directional RNN models and propose a method to include external information (for instance low level information from Part-Of-Speech tags) in the RNN to train higher level taggers (for instance, super sense taggers). We demonstrate the validity and genericity of our model by using parallel corpora (obtained by manual or automatic translation). Our experiments are conducted to induce cross-lingual POS and super sense taggers.", "pdf_parse": { "paper_id": "C16-1044", "_pdf_hash": "", "abstract": [ { "text": "This work focuses on the rapid development of linguistic annotation tools for resource-poor languages. We experiment several cross-lingual annotation projection methods using Recurrent Neural Networks (RNN) models. The distinctive feature of our approach is that our multilingual word representation requires only a parallel corpus between source and target languages. More precisely, our method has the following characteristics: (a) it does not use word alignment information, (b) it does not assume any knowledge about foreign languages, which makes it applicable to a wide range of resource-poor languages, (c) it provides truly multilingual taggers. We investigate both uni-and bi-directional RNN models and propose a method to include external information (for instance low level information from Part-Of-Speech tags) in the RNN to train higher level taggers (for instance, super sense taggers). We demonstrate the validity and genericity of our model by using parallel corpora (obtained by manual or automatic translation). 
Our experiments are conducted to induce cross-lingual POS and super sense taggers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In order to minimize the need for annotated resources (produced through manual annotation, or by manual check of automatic annotation), several research efforts have focused on building Natural Language Processing (NLP) tools based on unsupervised or semi-supervised approaches (Collins and Singer, 1999; Klein, 2005; Goldberg, 2010) . For example, NLP tools based on cross-language projection of linguistic annotations achieved good performance in the early 2000s (Yarowsky et al., 2001 ). The key idea of annotation projection can be summarized as follows: through word alignment in parallel text corpora, the annotations are transferred from the source (resource-rich) language to the target (under-resourced) language, and the resulting annotations are used for supervised training in the target language. However, automatic word alignment errors (Fraser and Marcu, 2007) limit the performance of these approaches.", "cite_spans": [ { "start": 279, "end": 305, "text": "(Collins and Singer, 1999;", "ref_id": "BIBREF9" }, { "start": 306, "end": 318, "text": "Klein, 2005;", "ref_id": "BIBREF22" }, { "start": 319, "end": 334, "text": "Goldberg, 2010)", "ref_id": "BIBREF17" }, { "start": 467, "end": 489, "text": "(Yarowsky et al., 2001", "ref_id": "BIBREF46" }, { "start": 853, "end": 877, "text": "(Fraser and Marcu, 2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work is built upon these previous contributions and observations. We explore the possibility of using Recurrent Neural Networks (RNN) to build multilingual NLP tools for the analysis of resource-poor languages. The major difference with previous works is that we do not explicitly use word alignment information. Our only assumption is that parallel sentences (source-target) are available and that the source part is annotated. In other words, we try to infer annotations in the target language from sentence-based alignments only. While most NLP research on RNN has focused on monolingual tasks 1 and sequence labeling (Collobert et al., 2011; Graves, 2012) , this paper considers the problem of learning multilingual NLP tools using RNN.", "cite_spans": [ { "start": 620, "end": 644, "text": "(Collobert et al., 2011;", "ref_id": "BIBREF10" }, { "start": 645, "end": 658, "text": "Graves, 2012)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contributions In this paper, we investigate the effectiveness of RNN architectures -Simple RNN (SRNN) and Bidirectional RNN (BRNN) -for multilingual sequence labeling tasks without using any word alignment information. Two NLP tasks are considered: Part-Of-Speech (POS) tagging and Super Sense (SST) tagging (Ciaramita and Altun, 2006) . Our RNN architectures demonstrate very competitive results in unsupervised training for new target languages. In addition, we show that the integration of POS information in RNN models is useful to build multilingual coarse-grained semantic (Super Sense) taggers. 
To this end, we propose a simple and efficient way to take low-level linguistic information into account in more complex sequence labeling RNNs.", "cite_spans": [ { "start": 308, "end": 335, "text": "(Ciaramita and Altun, 2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Methodology For training our multilingual RNN models, we just need as input a parallel (or multi-parallel) corpus between a resource-rich language and one or many under-resourced languages. Such a parallel corpus can be manually obtained (clean corpus) or automatically obtained (noisy corpus).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To show the potential of our approach, we investigate two sequence labeling tasks: cross-language POS tagging and multilingual Super Sense Tagging (SST). For the SST task, we measure the impact of the parallel corpus quality using the SemCor (Miller et al., 1993) , translated from English into Italian (manually and automatically) and into French (automatically).", "cite_spans": [ { "start": 270, "end": 298, "text": "SemCor (Miller et al., 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Outline The remainder of the paper is organized as follows. Section 2 reviews related work. Section 3 describes our cross-language annotation projection approaches based on RNN. Section 4 presents the empirical study and associated results. We finally conclude the paper in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Cross-lingual projection of linguistic annotations was pioneered by Yarowsky et al. (2001) , who created new monolingual resources by transferring annotations from resource-rich languages onto resource-poor languages through the use of word alignments. The resulting (noisy) annotations are used in conjunction with robust learning algorithms to build cheap unsupervised NLP tools (Pad\u00f3 and Lapata, 2009) . This approach has been successfully used to transfer several linguistic annotations between languages (efficient learning of POS taggers (Das and Petrov, 2011; Duong et al., 2013) and accurate projection of word senses (Bentivogli et al., 2004) ). Cross-lingual projection requires a parallel corpus and word alignment between source and target languages. Many automatic word alignment tools are available, such as GIZA++, which implements the IBM models (Och and Ney, 2000) . However, the noisy (imperfect) outputs of these methods are a serious limitation for annotation projection based on word alignments (Fraser and Marcu, 2007) .", "cite_spans": [ { "start": 68, "end": 90, "text": "Yarowsky et al. (2001)", "ref_id": "BIBREF46" }, { "start": 379, "end": 402, "text": "(Pad\u00f3 and Lapata, 2009)", "ref_id": "BIBREF33" }, { "start": 542, "end": 564, "text": "(Das and Petrov, 2011;", "ref_id": "BIBREF11" }, { "start": 565, "end": 584, "text": "Duong et al., 2013)", "ref_id": "BIBREF12" }, { "start": 624, "end": 649, "text": "(Bentivogli et al., 2004)", "ref_id": "BIBREF2" }, { "start": 855, "end": 874, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF32" }, { "start": 1014, "end": 1038, "text": "(Fraser and Marcu, 2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To deal with this limitation, recent studies based on cross-lingual representation learning methods have been proposed to avoid using such pre-processed and noisy alignments for label projection. First, these approaches learn language-independent features across many different languages (Durrett et al., 2012; Al-Rfou et al., 2013; T\u00e4ckstr\u00f6m et al., 2013; Luong et al., 2015; Gouws and S\u00f8gaard, 2015) . Then, the induced representation space is used to train NLP tools by exploiting labeled data from the source language and to apply them in the target language. Cross-lingual representation learning approaches have achieved good results in different NLP applications such as cross-language SST and POS tagging (Gouws and S\u00f8gaard, 2015) , cross-language named entity recognition (T\u00e4ckstr\u00f6m et al., 2012) , cross-lingual document classification and lexical translation tasks, cross-language dependency parsing (Durrett et al., 2012; T\u00e4ckstr\u00f6m et al., 2013) and cross-language semantic role labeling (Titov and Klementiev, 2012) .", "cite_spans": [ { "start": 289, "end": 311, "text": "(Durrett et al., 2012;", "ref_id": "BIBREF13" }, { "start": 312, "end": 333, "text": "Al-Rfou et al., 2013;", "ref_id": "BIBREF0" }, { "start": 334, "end": 357, "text": "T\u00e4ckstr\u00f6m et al., 2013;", "ref_id": "BIBREF43" }, { "start": 358, "end": 377, "text": "Luong et al., 2015;", "ref_id": "BIBREF25" }, { "start": 378, "end": 402, "text": "Gouws and S\u00f8gaard, 2015;", "ref_id": "BIBREF18" }, { "start": 711, "end": 736, "text": "(Gouws and S\u00f8gaard, 2015)", "ref_id": "BIBREF18" }, { "start": 779, "end": 803, "text": "(T\u00e4ckstr\u00f6m et al., 2012)", "ref_id": "BIBREF42" }, { "start": 909, "end": 931, "text": "(Durrett et al., 2012;", "ref_id": "BIBREF13" }, { "start": 932, "end": 955, "text": "T\u00e4ckstr\u00f6m et al., 2013)", "ref_id": "BIBREF43" }, { "start": 998, "end": 1026, "text": "(Titov and Klementiev, 2012)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our approach, described in the next section, is inspired by these works, since we also try to induce a common language-independent feature space (cross-lingual word embeddings). Unlike Durrett et al. (2012) and Gouws and S\u00f8gaard (2015) , who use bilingual lexicons, and unlike Luong et al. (2015) , who use word alignments between the source and target languages 2 , our common multilingual representation is very agnostic. We use a simple (multilingual) vector representation based on the occurrence of source and target words in a parallel corpus and we let the RNN learn the best internal representations (corresponding to the hidden layers) specific to the task (SST or POS tagging).", "cite_spans": [ { "start": 179, "end": 200, "text": "Durrett et al. (2012)", "ref_id": "BIBREF13" }, { "start": 205, "end": 229, "text": "Gouws and S\u00f8gaard (2015)", "ref_id": "BIBREF18" }, { "start": 271, "end": 290, "text": "Luong et al. (2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this work, we learn a cross-lingual POS tagger (a multilingual POS tagger if a multilingual parallel corpus is used) based on a recurrent neural network (RNN) on the labeled source text and apply it to tag target language text. We explore simple and bidirectional RNN architectures (SRNN and BRNN, respectively). Starting from the intuition that low-level linguistic information is useful to learn more complex taggers, we also introduce three new RNN variants to take into account external (POS) information in multilingual SST. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To avoid projecting label information from deterministic and error-prone word alignments, we propose to represent the word alignment information intrinsically in a recurrent neural network architecture. The idea is to implement a recurrent neural network as a multilingual sequence labeling tool (we investigate POS tagging and SST tagging). Before describing our cross-lingual (multilingual if a multi-parallel corpus is used) neural network tagger, we present the simple cross-lingual projection method, considered as our baseline in this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Approach Overview", "sec_num": "3" }, { "text": "We use direct transfer as a baseline system, which is similar to the method described in (Yarowsky et al., 2001 ). First, we tag the source side of the parallel corpus using the available supervised tagger. Next, we align words in the parallel corpus to find the corresponding source and target words. Tags are then projected to the (resource-poor) target language. The target language tagger is trained using any machine learning approach (we use the TnT tagger (Brants, 2000) in our experiments).", "cite_spans": [ { "start": 88, "end": 110, "text": "(Yarowsky et al., 2001", "ref_id": "BIBREF46" }, { "start": 457, "end": 471, "text": "(Brants, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Cross-lingual Annotation Projection", "sec_num": "3.1" }, { "text": "We propose a method for learning multilingual sequence labeling tools based on RNN, as can be seen in Figure 1 . In our approach, a parallel or multi-parallel corpus between a resource-rich language and one or many under-resourced languages is used to extract common (multilingual) and agnostic word representations. These representations, which rely on sentence-level alignment only, are used with the source side of the parallel/multi-parallel corpus to learn a neural network tagger in the source language. Since a common representation of source and target words is chosen, this neural network tagger is truly multilingual and can also be used to tag texts in target language(s).", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 113, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Proposed Approach", "sec_num": "3.2" }, { "text": "In our agnostic representation, we associate with each word (in source and target vocabularies) a common vector representation, namely V_wi, i = 1, ..., N , where N is the number of parallel sentences (bi-sentences in the parallel corpus). 
If w appears in the i-th bi-sentence of the parallel corpus, then V_wi = 1 (and V_wi = 0 otherwise).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Words Representation", "sec_num": "3.2.1" }, { "text": "The idea is that, in general, a source word and its target translation appear together in the same bi-sentences, so their vector representations are close. We can then use the RNN tagger, initially trained on the source side, to tag the target side (because of our common vector representation). This simple representation does not require multilingual word alignments, and it lets the RNN learn the optimal internal representation needed for the annotation task (for instance, the hidden layers of the RNN can be considered as multilingual embeddings of the words). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Words Representation", "sec_num": "3.2.1" }, { "text": "There are two major architectures of neural networks: Feedforward (Bengio et al., 2003) and Recurrent Neural Networks (RNN) (Schmidhuber, 1992; Mikolov et al., 2010) . Sundermeyer et al. (2013) showed that language models based on a recurrent architecture achieve better performance than language models based on a feedforward architecture. This is due to the fact that recurrent neural networks do not use a context of limited size. This property led us to use, in our experiments, the Elman recurrent architecture (Elman, 1990) , in which recurrent connections occur at the hidden layer level.", "cite_spans": [ { "start": 66, "end": 87, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF1" }, { "start": 124, "end": 143, "text": "(Schmidhuber, 1992;", "ref_id": "BIBREF37" }, { "start": 144, "end": 165, "text": "Mikolov et al., 2010)", "ref_id": "BIBREF26" }, { "start": 168, "end": 193, "text": "Sundermeyer et al. (2013)", "ref_id": "BIBREF40" }, { "start": 512, "end": 525, "text": "(Elman, 1990)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Recurrent Neural Networks", "sec_num": "3.2.2" }, { "text": "We consider in this work two Elman RNN architectures (see Figure 2 ): Simple RNN (SRNN) and Bidirectional RNN (BRNN). In addition, to be able to include low-level linguistic information in our architecture designed for more complex sequence labeling tasks, we propose three new RNN variants to take into account external (POS) information for multilingual Super Sense Tagging (SST).", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 66, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Recurrent Neural Networks", "sec_num": "3.2.2" }, { "text": "In the simple Elman RNN (SRNN), the recurrent connection is a loop at the hidden layer level. This connection allows the SRNN to use, at the current time step, the hidden-layer states of previous time steps. In other words, the hidden layer of the SRNN represents all of the previous history and not just the n \u2212 1 previous inputs; thus the model can theoretically represent long context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A. Simple RNN", "sec_num": null }, { "text": "The architecture of the SRNN considered in this work is shown in Figure 2 . In this architecture, we have 4 layers: an input layer, a forward layer (also called recurrent or context layer), a compression hidden layer and an output layer. 
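Before detailing the layer connectivity, the following minimal Python sketch (an illustration under our own naming assumptions, not the authors' released code) shows how the common word representation of Section 3.2.1 that feeds this input layer can be built: every source and target word receives a binary vector of length N whose i-th component is 1 iff the word occurs in the i-th bi-sentence.

```python
# Illustrative sketch (hypothetical helper, not from the paper) of the
# Section 3.2.1 representation: one binary vector of length N per word,
# shared by the source and target vocabularies.
import numpy as np

def build_common_representation(bisentences):
    """bisentences: list of (source_tokens, target_tokens) pairs."""
    n = len(bisentences)
    vectors = {}
    for i, (src, tgt) in enumerate(bisentences):
        for word in set(src) | set(tgt):
            vectors.setdefault(word, np.zeros(n))[i] = 1.0
    return vectors

# Toy English-French corpus: "dog" and "chien" co-occur in the same
# bi-sentences, so their vectors are identical (hence close).
corpus = [("the dog sleeps".split(), "le chien dort".split()),
          ("the dog eats".split(), "le chien mange".split())]
vectors = build_common_representation(corpus)
assert (vectors["dog"] == vectors["chien"]).all()
```
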
All neurons of the input layer are connected to every neuron of the forward layer by the weight matrices I_F and R_F , the weight matrix H_F connects all neurons of the forward layer to every neuron of the compression layer, and all neurons of the compression layer are connected to every neuron of the output layer by the weight matrix O.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 73, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "A. Simple RNN", "sec_num": null }, { "text": "The input layer consists of a vector w(t) that represents the current word w_t in our common words representation (all input neurons corresponding to the current word w_t are set to 0 except those that correspond to bi-sentences containing w_t , which are set to 1), and of a vector f(t \u2212 1) that represents the output values of the forward layer from the previous time step. We name f(t) and c(t) the current time step hidden layers (our preliminary experiments have shown better performance using these two hidden layers instead of one hidden layer), with variable sizes (usually 80-1024 neurons) and sigmoid activation functions. These hidden layers represent our common language-independent feature space and inherently capture word alignment information. The output layer y(t), given the input w(t) and f(t \u2212 1), is computed with the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A. Simple RNN", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(t) = \u03a3(w(t).I_F(t) + f(t \u2212 1).R_F(t))", "eq_num": "(1)" } ], "section": "A. Simple RNN", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c(t) = \u03a3(f(t).H_F(t))", "eq_num": "(2)" } ], "section": "A. Simple RNN", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y(t) = \u0393(c(t).O(t))", "eq_num": "(3)" } ], "section": "A. Simple RNN", "sec_num": null }, { "text": "\u03a3 and \u0393 are the sigmoid and the softmax functions, respectively. The softmax activation function is used to normalize the values of the output neurons so that they sum up to 1. After the network is trained, the output y(t) is a vector representing a probability distribution over the set of tags. The current word w_t (in input) is tagged with the most probable output tag. For many sequence labeling tasks, it is beneficial to have access to future context in addition to past context. So, it can be argued that our SRNN is not optimal for sequence labeling, since the network ignores future context and tries to optimize the output prediction given the previous context only. This SRNN is thus penalized compared with our baseline projection based on TnT (Brants, 2000) , which considers both left and right contexts. To overcome the limitations of the SRNN, a simple extension of the SRNN architecture, namely the Bidirectional recurrent neural network (BRNN) (Schuster and Paliwal, 1997) , is used to ensure that context at previous and future time steps will be considered.", "cite_spans": [ { "start": 739, "end": 753, "text": "(Brants, 2000)", "ref_id": "BIBREF5" }, { "start": 935, "end": 963, "text": "(Schuster and Paliwal, 1997)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "A. Simple RNN", "sec_num": null }, { "text": "An unfolded BRNN architecture is given in Figure 2 . The basic idea of the BRNN is to present each training sequence forwards and backwards to two separate recurrent hidden layers (forward and backward hidden layers) and then merge the results. This structure provides the compression and the output layers with complete past and future context for every point in the input sequence. Note that without the backward layer, this structure simplifies to an SRNN.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 50, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "B. Bidirectional RNN", "sec_num": null }, { "text": "As mentioned in the introduction, we propose three new RNN variants to take into account low-level (POS) information in a higher-level (SST) annotation task. The question addressed here is: at which layer of the RNN should this low-level information be included to improve SST performance? As specified in Figure 3 , the POS information can be introduced either at the input layer, at the forward layer (forward and backward layers for the BRNN) or at the compression layer. In all these RNN variants, the POS of the current word is also represented with a vector (POS(t)). Its dimension corresponds to the number of POS tags in the tagset (the universal tagset of Petrov et al. (2012) is used). We use a one-hot vector representation where only one value is set to 1 and corresponds to the index of the current tag (all other values are 0).", "cite_spans": [], "ref_spans": [ { "start": 306, "end": 314, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "C. RNN Variants", "sec_num": null }, { "text": "The first step in our approach is to train the neural network, given a parallel corpus (training corpus), and a validation corpus (different from the training data) in the source language. In typical applications, the source language is a resource-rich language (which already has an efficient tagger or manually tagged resources). Our RNN models are trained by stochastic gradient descent using the usual back-propagation and back-propagation through time algorithms (Rumelhart et al., 1985) . We learn our RNN models with an iterative process on the tagged source side of the parallel corpus. After each epoch (iteration) in training, the validation data is used to compute the per-token accuracy of the model. If the per-token accuracy increases, training continues in a new epoch. Otherwise, the learning rate is halved at the start of the new epoch. Eventually, if the per-token accuracy does not increase anymore, training is stopped to prevent over-fitting. Generally, convergence takes 5-10 epochs, starting with a learning rate \u03b1 = 0.1.", "cite_spans": [ { "start": 457, "end": 481, "text": "(Rumelhart et al., 1985)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Network Training", "sec_num": "3.2.3" }, { "text": "The second step consists in using the trained model as a target language tagger (using our common vector representation). It is important to note that if we train on a multilingual parallel corpus with N languages (N > 2), the same trained model will be able to tag all the N languages. Note that our approach assumes that word order in the source and target languages is similar. In some languages such as English and French, word order for contexts containing nouns could be reversed most of the time. For example, the compound word the European Commission would be translated into la Commission europ\u00e9enne. In order to deal with the word order constraints, we also combine the RNN model with the cross-lingual projection model in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Network Training", "sec_num": "3.2.3" }, { "text": "Words absent from the initial parallel corpus have a vector representation of all zero values. Consequently, during testing, the RNN model will use only the context information to tag the OOV words found in the test corpus. To deal with these types of OOV words 3 , we use the CBOW model of (Mikolov et al., 2013) to replace each OOV word by its closest known word in the current OOV word context. Once the closest word is found, its common vector representation is used (instead of the vector of zero values) at the input of the RNN.", "cite_spans": [ { "start": 310, "end": 332, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Dealing with out-of-vocabulary words", "sec_num": "3.3" }, { "text": "Since the simple cross-lingual projection model M1 and the RNN model M2 use different strategies for tagging (TnT is based on Markov models while the RNN is a neural network), we assume that these two models can be complementary. To keep the benefits of each approach, we explore how to combine them with linear interpolation. Formally, the probability to tag a given word w is computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Simple Cross-lingual Projection and RNN Models", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_M12(t|w) = \u00b5 P_M1(t|w, C_M1) + (1 \u2212 \u00b5) P_M2(t|w, C_M2)", "eq_num": "(4)" } ], "section": "Combining Simple Cross-lingual Projection and RNN Models", "sec_num": "3.4" }, { "text": "where C_M1 and C_M2 are the contexts of w considered by M1 and M2, respectively. The relative importance of each model is adjusted through the interpolation parameter \u00b5. The word w is tagged with the most probable tag, using the function f described as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Simple Cross-lingual Projection and RNN Models", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(w) = arg max_t P_M12(t|w)", "eq_num": "(5)" } ], "section": "Combining Simple Cross-lingual Projection and RNN Models", "sec_num": "3.4" }, { "text": "Our models are evaluated on two labeling tasks: cross-language Part-Of-Speech (POS) tagging and multilingual Super Sense Tagging (SST).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We applied our method to build RNN POS taggers for four target languages -French, German, Greek and Spanish -with English as the source language. 
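Before turning to the data, here is a concrete illustration of the combination scheme of Equations (4) and (5) from Section 3.4; the tag distributions below are toy values of our own, not numbers from the paper.

```python
# Sketch of Equations (4)-(5): linearly interpolate the tag distributions of
# the projection model M1 and the RNN model M2, then take the most probable tag.
def combine(p_m1, p_m2, mu):
    tags = set(p_m1) | set(p_m2)
    return {t: mu * p_m1.get(t, 0.0) + (1.0 - mu) * p_m2.get(t, 0.0) for t in tags}

def tag(p_m1, p_m2, mu=0.5):
    p_m12 = combine(p_m1, p_m2, mu)   # Equation (4)
    return max(p_m12, key=p_m12.get)  # Equation (5): arg max over tags

# Toy example: the models disagree; with mu = 0.4 the RNN's vote prevails.
print(tag({"NOUN": 0.6, "VERB": 0.4}, {"VERB": 0.7, "NOUN": 0.3}, mu=0.4))  # VERB
```
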
In order to determine the effectiveness of our common words representation described in section 3.2.1, we also investigated the use of state-of-the-art bilingual word embeddings (using the MultiVec toolkit (B\u00e9rard et al., 2016) ) as input to our RNN.", "cite_spans": [ { "start": 348, "end": 369, "text": "(B\u00e9rard et al., 2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual POS Tagging", "sec_num": "4.1" }, { "text": "For French as a target language, we used a training set of 10,000 parallel sentences, a validation set of 1000 English sentences, and a test set of 1000 French sentences, all extracted from the ARCADE II English-French corpus (Veronis et al., 2008) . The test set is tagged with the French TreeTagger (Schmid, 1995) and then manually checked.", "cite_spans": [ { "start": 227, "end": 249, "text": "(Veronis et al., 2008)", "ref_id": "BIBREF45" }, { "start": 302, "end": 316, "text": "(Schmid, 1995)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1.1" }, { "text": "For German, Greek and Spanish as target languages, we used training and validation data extracted from the Europarl corpus (Koehn, 2005) which are a subset of the training data used in (Das and Petrov, 2011; Duong et al., 2013) . This choice allows us to compare our results with those of (Das and Petrov, 2011; Duong et al., 2013; Gouws and S\u00f8gaard, 2015) . The training data set contains 65,000 bi-sentences; a validation set of 10,000 bi-sentences is also available. For testing, we use the same test corpora as Das and Petrov (2011) , Duong et al. (2013) and Gouws & S\u00f8gaard (2015) .", "cite_spans": [ { "start": 124, "end": 137, "text": "(Koehn, 2005)", "ref_id": "BIBREF23" }, { "start": 186, "end": 208, "text": "(Das and Petrov, 2011;", "ref_id": "BIBREF11" }, { "start": 209, "end": 228, "text": "Duong et al., 2013)", "ref_id": "BIBREF12" }, { "start": 290, "end": 312, "text": "(Das and Petrov, 2011;", "ref_id": "BIBREF11" }, { "start": 313, "end": 332, "text": "Duong et al., 2013;", "ref_id": "BIBREF12" }, { "start": 333, "end": 357, "text": "Gouws and S\u00f8gaard, 2015)", "ref_id": "BIBREF18" }, { "start": 516, "end": 538, "text": "(Das and Petrov, 2011;", "ref_id": "BIBREF11" }, { "start": 539, "end": 558, "text": "Duong et al., 2013;", "ref_id": "BIBREF12" }, { "start": 559, "end": 583, "text": "Gouws and S\u00f8gaard, 2015)", "ref_id": "BIBREF18" }, { "start": 592, "end": 614, "text": "Gouws & S\u00f8gaard (2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1.1" }, { "text": "These test corpora come from the CoNLL shared tasks on dependency parsing (Buchholz and Marsi, 2006) . The evaluation metric (per-token accuracy) and the Petrov et al. (2012) universal tagset are used for evaluation. For training, the English (source) sides of the training corpora (ARCADE II and Europarl) and of the validation corpora are tagged with the English TreeTagger toolkit. Using the matching provided by Petrov et al. (2012) , we map the TreeTagger and the CoNLL tagsets to the common Universal Tagset.", "cite_spans": [ { "start": 28, "end": 54, "text": "(Buchholz and Marsi, 2006)", "ref_id": "BIBREF6" }, { "start": 109, "end": 129, "text": "Petrov et al. (2012)", "ref_id": "BIBREF34" }, { "start": 371, "end": 391, "text": "Petrov et al. (2012)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1.1" }, { "text": "In order to build our baseline unsupervised tagger (based on a Simple Cross-lingual Projection -see section 3.1), we also tag the target side of the training corpus, with tags projected from the English side through word alignments established by GIZA++. After tag projection, a target language POS tagger based on the TnT approach (Brants, 2000) is trained.", "cite_spans": [ { "start": 325, "end": 339, "text": "(Brants, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1.1" }, { "text": "The combined model is built for each considered language using cross-validation on the test corpus. First, the test corpus is split into 2 equal parts and, on each part, we estimate the interpolation parameter \u00b5 (Equation 4) which maximizes the per-token accuracy score. Then each part of the test corpus is tagged using the combined model tuned on the other part, and vice versa (standard cross-validation procedure).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1.1" }, { "text": "We trained MultiVec bilingual word embeddings on the parallel Europarl corpus between English and each of the target languages considered. Table 1 reports the results obtained for the unsupervised POS tagging. We note that the POS tagger based on the bidirectional RNN (BRNN) has better performance than the simple RNN (SRNN), which means that both past and future contexts help select the correct tag. Table 1 also shows the performance before and after performing our procedure for handling OOVs in BRNNs. It is shown that after replacing OOVs by the closest words using CBOW, the tagging accuracy significantly increases.", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 146, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Dataset", "sec_num": "4.1.1" }, { "text": "As shown in the same table, our RNN models' accuracy is close to that of the simple projection tagger. It achieves comparable results to Das and Petrov (2011) , Duong et al. (2013) (who used the full Europarl corpus while we use only a 65,000-sentence subset of it) and to Gouws and S\u00f8gaard (2015) (who used extra resources such as Wiktionary and Wikipedia). Interestingly, RNN models learned using our common words representation (section 3.2.1) seem to perform significantly better than RNN models using MultiVec bilingual word embeddings.", "cite_spans": [ { "start": 136, "end": 157, "text": "Das and Petrov (2011)", "ref_id": "BIBREF11" }, { "start": 160, "end": 179, "text": "Duong et al. (2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Results and discussion", "sec_num": "4.1.2" }, { "text": "It is also important to note that only one single SRNN and BRNN tagger applies to German, Greek and Spanish; so this is a truly multilingual POS tagger! 
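To make the two-fold tuning of \u00b5 described in Section 4.1.1 concrete, the following sketch shows the procedure; `tag_with_mu` is a hypothetical helper wrapping the combined Projection+RNN tagger, and the 0.1-step grid is our own illustrative choice.

```python
# Illustrative two-fold tuning of the interpolation weight mu (Section 4.1.1).
# `tag_with_mu(sentences, mu)` is a hypothetical helper that runs the combined
# tagger and returns a flat list of predicted tags.
def accuracy(predicted, gold):
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

def best_mu(sentences, gold, grid=tuple(i / 10.0 for i in range(11))):
    return max(grid, key=lambda mu: accuracy(tag_with_mu(sentences, mu), gold))

def two_fold_score(part_a, gold_a, part_b, gold_b):
    mu_a = best_mu(part_a, gold_a)  # tuned on part A...
    mu_b = best_mu(part_b, gold_b)  # ...and on part B
    # Each half is scored with the mu tuned on the *other* half.
    return (accuracy(tag_with_mu(part_b, mu_a), gold_b) +
            accuracy(tag_with_mu(part_a, mu_b), gold_a)) / 2.0
```
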
Finally, as for several other NLP tasks such as language modelling or machine translation (where standard and NN-based models are generally combined in order to obtain optimal results), the combination of standard and RNN-based approaches (Projection+RNN) seems necessary to further optimize POS tagging accuracies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and discussion", "sec_num": "4.1.2" }, { "text": "In order to measure the impact of the parallel corpus quality on our method, we also learn our SST models using the multilingual parallel corpus MultiSemCor (MSC), which is the result of manual or automatic translation of SemCor from English into Italian and French.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual SST", "sec_num": "4.2" }, { "text": "SemCor The SemCor (Miller et al., 1993) is a subset of the Brown Corpus (Kucera and Francis, 1979) labeled with the WordNet (Fellbaum, 1998) senses. MultiSemCor The English-Italian MultiSemCor (MSC-IT-1) corpus is a manual translation of the English SemCor to Italian (Bentivogli et al., 2004) . As we already mentioned, we are also interested in measuring the impact of the parallel corpus quality on our method. For this, we use two translation systems: (a) Google Translate to translate the English SemCor to Italian (MSC-IT-2) and French (MSC-FR-2); (b) the LIG machine translation system (Besacier et al., 2012) to translate the English SemCor to French (MSC-FR-1). Training corpus The SemCor was labeled with WordNet synsets. However, because we train models for SST, we convert the SemCor synset annotations to super senses. We learn our models using the four different versions of MSC (MSC-IT-1,2 and MSC-FR-1,2), with the modified SemCor on the source side. Test Corpus To evaluate our models, we used the SemEval 2013 Task 12 (Multilingual Word Sense Disambiguation) (Navigli et al., 2013) test corpora, which are available in 5 languages (English, French, German, Spanish and Italian) and labeled with BabelNet (Navigli and Ponzetto, 2012) senses. We map BabelNet senses to WordNet synsets, then WordNet synsets are mapped to super senses.", "cite_spans": [ { "start": 18, "end": 39, "text": "(Miller et al., 1993)", "ref_id": "BIBREF28" }, { "start": 72, "end": 98, "text": "(Kucera and Francis, 1979)", "ref_id": "BIBREF24" }, { "start": 124, "end": 140, "text": "(Fellbaum, 1998)", "ref_id": null }, { "start": 268, "end": 293, "text": "(Bentivogli et al., 2004)", "ref_id": "BIBREF2" }, { "start": 588, "end": 611, "text": "(Besacier et al., 2012)", "ref_id": "BIBREF4" }, { "start": 1062, "end": 1084, "text": "(Navigli et al., 2013)", "ref_id": "BIBREF31" }, { "start": 1207, "end": 1235, "text": "(Navigli and Ponzetto, 2012)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.2.1" }, { "text": "The goals of our SST experiments are twofold: first, to investigate the effectiveness of using POS information to build a multilingual super sense tagger; second, to measure the impact of the parallel corpus quality (manual or automatic translation) on our RNN models (SRNN, BRNN and our proposed variants). To summarize, we build four super sense taggers based on the baseline cross-lingual projection (see section 3.1) using the four versions of MultiSemCor (MSC-IT-1, MSC-IT-2, MSC-FR-1, MSC-FR-2) described above. Then we use the same four versions to train our multilingual SST models based on SRNN and BRNN. 
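Since WordNet super senses are exactly the lexicographer file names of synsets, the synset-to-super-sense conversion mentioned above can be illustrated with NLTK; this is a sketch that assumes NLTK's WordNet interface as a stand-in for the actual tooling used by the authors.

```python
# Sketch of the synset-to-super-sense mapping used to prepare the SST data:
# a synset's WordNet lexicographer file name (e.g. "noun.animal",
# "verb.motion") is its super sense label.
# Requires: pip install nltk, then nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def supersense(synset_name):
    return wn.synset(synset_name).lexname()

print(supersense("dog.n.01"))  # noun.animal
print(supersense("run.v.01"))  # verb.motion
```
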
For learning our multilingual SST models based on the RNN variants proposed in part (C) of section 3.2.2, we also tag SemCor using TreeTagger (the POS tagger proposed by Schmid (1995)).", "cite_spans": [ { "start": 767, "end": 780, "text": "Schmid (1995)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "SST Systems Evaluated", "sec_num": "4.2.2" }, { "text": "Our models are evaluated on the SemEval 2013 Task 12 test corpora. Results are directly comparable with those of systems which participated in this evaluation campaign. We report two SemEval 2013 (unsupervised) system results for comparison:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and discussion", "sec_num": "4.2.3" }, { "text": "\u2022 MFS SemEval 2013: the most frequent sense baseline provided by SemEval 2013 for Task 12; this is a strong baseline, obtained by using an external resource (the WordNet most frequent sense).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and discussion", "sec_num": "4.2.3" }, { "text": "\u2022 GETALP: a fully unsupervised WSD system proposed by (Schwab et al., 2012) , based on an Ant Colony algorithm.", "cite_spans": [ { "start": 55, "end": 76, "text": "(Schwab et al., 2012)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Results and discussion", "sec_num": "4.2.3" }, { "text": "The DAEBAK! (Navigli and Lapata, 2010) and the UMCC-DLSI (Guti\u00e9rrez V\u00e1zquez et al., 2011) systems also participated in SemEval 2013 Task 12. However, they use a supervised approach 6 . Table 2 shows the results obtained by our RNN models and by two SemEval 2013 WSD systems. SRNN-POS-X and BRNN-POS-X refer to our RNN variants: In means input layer, H1 means first hidden layer and H2 means second hidden layer. We achieve the best performance on Italian using the clean MSC-IT-1 corpus, while noisy training corpora degrade SST performance. The best results are obtained with the combination of simple projection and RNN, which confirms (as for POS tagging) that both approaches are complementary. GETALP (Schwab et al., 2012) obtains 40.2 on Italian and 34.6 on French. Table 2 : Super Sense Tagging (SST) accuracy for Simple Projection, RNN and their combination.", "cite_spans": [ { "start": 12, "end": 38, "text": "(Navigli and Lapata, 2010)", "ref_id": "BIBREF29" }, { "start": 65, "end": 97, "text": "(Guti\u00e9rrez V\u00e1zquez et al., 2011)", "ref_id": "BIBREF21" }, { "start": 694, "end": 715, "text": "(Schwab et al., 2012)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 190, "end": 197, "text": "Table 2", "ref_id": null }, { "start": 726, "end": 733, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results and discussion", "sec_num": "4.2.3" }, { "text": "We also observe that the RNN approach seems more robust than simple projection on noisy corpora. This is probably due to the fact that no word alignments are required in our cross-language RNN. Finally, BRNN-POS-H2-OOV achieves the best performance, which shows that the integration of POS information in RNN models and dealing with OOV words are useful to build efficient multilingual super sense taggers. 
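To make the best-performing variant concrete, here is a minimal numpy sketch (illustrative shapes and names, not the authors' implementation) of the late POS injection of BRNN-POS-H2: the one-hot POS vector is concatenated to the forward and backward states feeding the second (compression) hidden layer.

```python
# Illustrative forward step of the BRNN-POS-H2 variant: the one-hot POS
# vector enters at the second (compression) hidden layer.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_pos, h, n_tags = 12, 80, 41   # POS tags, hidden size, super senses (all illustrative)
rng = np.random.default_rng(0)
H = rng.normal(scale=0.1, size=(2 * h + n_pos, h))  # compression weights
O = rng.normal(scale=0.1, size=(h, n_tags))         # output weights

def supersense_distribution(f_t, b_t, pos_index):
    """f_t, b_t: forward/backward hidden states (length h) for the current word."""
    pos = np.zeros(n_pos)
    pos[pos_index] = 1.0                             # one-hot POS(t)
    c_t = sigmoid(np.concatenate([f_t, b_t, pos]) @ H)
    scores = c_t @ O
    return np.exp(scores) / np.exp(scores).sum()     # softmax over super sense tags
```
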
Finally, it is worth mentioning that integrating low-level (POS) information late (at the last hidden layer) seems to be the best option in our case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and discussion", "sec_num": "4.2.3" }, { "text": "In this paper, we have presented an approach based on recurrent neural networks (RNN) to induce multilingual text analysis tools. We have studied Simple and Bidirectional RNN architectures on multilingual POS and SST tagging. We have also proposed new RNN variants in order to take into account low-level (POS) information in a super sense tagging task. Our approach has the following advantages: (a) it uses a language-independent word representation (based only on word co-occurrences in a parallel corpus), (b) it provides truly multilingual taggers (1 tagger for N languages), (c) it can be easily adapted to a new target language (when a small amount of supervised data is available, a previous study (Zennaki et al., 2015a; Zennaki et al., 2015b) has shown the effectiveness of our method in a weakly supervised context).", "cite_spans": [ { "start": 705, "end": 728, "text": "(Zennaki et al., 2015a;", "ref_id": "BIBREF47" }, { "start": 729, "end": 751, "text": "Zennaki et al., 2015b)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Short term perspectives are to apply multi-task learning to build systems that simultaneously perform syntactic and semantic analysis. Adding out-of-language data to improve our RNN taggers is also possible (and interesting to experiment with) thanks to our common (multilingual) vector representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Exceptions are the recent propositions on Neural Machine Translation (Cho et al., 2014; Sutskever et al., 2014)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "to train a bilingual representation regardless of the task", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "words which do not have a known vector representation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For RNN models, only one (same) system is used to tag German, Greek and Spanish", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "DAEBAK! and UMCC-DLSI for SST have obtained: 68.1% and 72.5% on Italian; 59.8% and 67.6% on French", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Polyglot: Distributed word representations for multilingual nlp. CoNLL-2013", "authors": [ { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "183--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. 
CoNLL-2013, pages 183-192.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137-1155.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Evaluating cross-language annotation transfer in the multisemcor corpus", "authors": [ { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Pamela", "middle": [], "last": "Forner", "suffix": "" }, { "first": "Emanuele", "middle": [], "last": "Pianta", "suffix": "" } ], "year": 2004, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luisa Bentivogli, Pamela Forner, and Emanuele Pianta. 2004. Evaluating cross-language annotation transfer in the multisemcor corpus. In COLING, page 364.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multivec: a multilingual and multilevel representation learning toolkit for nlp", "authors": [ { "first": "Alexandre", "middle": [], "last": "B\u00e9rard", "suffix": "" }, { "first": "Christophe", "middle": [], "last": "Servan", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Pietquin", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" } ], "year": 2016, "venue": "The 10th edition of the Language Resources and Evaluation Conference (LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandre B\u00e9rard, Christophe Servan, Olivier Pietquin, and Laurent Besacier. 2016. Multivec: a multilingual and multilevel representation learning toolkit for nlp. In The 10th edition of the Language Resources and Evaluation Conference (LREC 2016).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The lig english to french machine translation system for iwslt 2012", "authors": [ { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Lecouteux", "suffix": "" }, { "first": "Marwen", "middle": [], "last": "Azouzi", "suffix": "" }, { "first": "Ngoc-Quang", "middle": [], "last": "Luong", "suffix": "" } ], "year": 2012, "venue": "IWSLT", "volume": "", "issue": "", "pages": "102--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurent Besacier, Benjamin Lecouteux, Marwen Azouzi, and Ngoc-Quang Luong. 2012. The lig english to french machine translation system for iwslt 2012. In IWSLT, pages 102-108.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Tnt: a statistical part-of-speech tagger", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the sixth conference on Applied natural language processing", "volume": "", "issue": "", "pages": "224--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants. 2000. Tnt: a statistical part-of-speech tagger. 
In Proceedings of the sixth conference on Applied natural language processing, pages 224-231.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Conll-x shared task on multilingual dependency parsing", "authors": [ { "first": "Sabine", "middle": [], "last": "Buchholz", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Tenth CoNLL", "volume": "", "issue": "", "pages": "149--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Buchholz and Erwin Marsi. 2006. Conll-x shared task on multilingual dependency parsing. In Proceedings of the Tenth CoNLL, pages 149-164.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "On the properties of neural machine translation: Encoder-decoder approaches", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Syntax, Semantics and Structure in Statistical Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. Syntax, Semantics and Structure in Statistical Translation.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger", "authors": [ { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "Yasemin", "middle": [], "last": "Altun", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the EMNLP-2006", "volume": "", "issue": "", "pages": "594--602", "other_ids": {}, "num": null, "urls": [], "raw_text": "Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In Proceedings of the EMNLP-2006, pages 594-602.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unsupervised models for named entity classification", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the joint SIGDAT conference on EMNLP and very large corpora", "volume": "", "issue": "", "pages": "100--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Yoram Singer. 1999. Unsupervised models for named entity classification. 
In Proceedings of the joint SIGDAT conference on EMNLP and very large corpora, pages 100-110.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Natural language processing from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing from scratch. The Journal of Machine Learning Research, 12:2493-2537.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised part-of-speech tagging with bilingual graph-based projections", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th ACL", "volume": "1", "issue": "", "pages": "600--609", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. Proceedings of the 49th ACL, 1:600-609.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Simpler unsupervised pos tagging with bilingual projections", "authors": [ { "first": "Long", "middle": [], "last": "Duong", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Pecina", "suffix": "" } ], "year": 2013, "venue": "ACL (2)", "volume": "", "issue": "", "pages": "634--639", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Duong, Paul Cook, Steven Bird, and Pavel Pecina. 2013. Simpler unsupervised pos tagging with bilingual projections. In ACL (2), pages 634-639.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Syntactic transfer using a bilingual lexicon", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Pauls", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2012, "venue": "The Joint Conference on EMNLP and CoNLL", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett, Adam Pauls, and Dan Klein. 2012. Syntactic transfer using a bilingual lexicon. In The Joint Conference on EMNLP and CoNLL, pages 1-11.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Finding structure in time", "authors": [ { "first": "", "middle": [], "last": "Jeffrey L Elman", "suffix": "" } ], "year": 1990, "venue": "Cognitive science", "volume": "14", "issue": "2", "pages": "179--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey L Elman. 1990. Finding structure in time. 
Cognitive science, 14(2):179-211.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Measuring word alignment quality for statistical machine translation", "authors": [ { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "3", "pages": "293--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Fraser and Daniel Marcu. 2007. Measuring word alignment quality for statistical machine translation. Computational Linguistics, 33(3):293-303.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "New directions in semi-supervised learning", "authors": [ { "first": "Goldberg", "middle": [], "last": "Andrew Brian", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Brian Goldberg. 2010. New directions in semi-supervised learning. Ph.D. thesis, University of Wisconsin-Madison.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Simple task-specific bilingual word embeddings", "authors": [ { "first": "Stephan", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2015, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "1386--1390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Gouws and Anders S\u00f8gaard. 2015. Simple task-specific bilingual word embeddings. In NAACL-HLT, pages 1386-1390.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bilbowa: Fast bilingual distributed representations without word alignments. ICML", "authors": [ { "first": "Stephan", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed representations without word alignments. ICML 2015.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Supervised sequence labelling", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves. 2012. Supervised sequence labelling. Springer.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Enriching the integration of semantic resources based on wordnet", "authors": [ { "first": "Yoan Guti\u00e9rrez", "middle": [], "last": "V\u00e1zquez", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Fern\u00e1ndez Orqu\u00edn", "suffix": "" }, { "first": "Andr\u00e9s", "middle": [], "last": "Montoyo Guijarro", "suffix": "" }, { "first": "Sonia", "middle": [], "last": "V\u00e1zquez P\u00e9rez", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoan Guti\u00e9rrez V\u00e1zquez, Antonio Fern\u00e1ndez Orqu\u00edn, Andr\u00e9s Montoyo Guijarro, Sonia V\u00e1zquez P\u00e9rez, et al. 2011. 
Enriching the integration of semantic resources based on wordnet.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The unsupervised learning of natural language structure", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein. 2005. The unsupervised learning of natural language structure. Ph.D. thesis, Stanford University.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "MT summit", "volume": "5", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79-86.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A standard corpus of present-day edited American English, for use with digital computers", "authors": [ { "first": "H", "middle": [], "last": "Kucera", "suffix": "" }, { "first": "W", "middle": [], "last": "Francis", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H Kucera and W Francis. 1979. A standard corpus of present-day edited American English, for use with digital computers (revised and amplified from 1967 version).", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Bilingual word representations with monolingual quality in mind", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", "volume": "", "issue": "", "pages": "151--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Recurrent neural network based language model", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Jan", "middle": [], "last": "\u010cernock\u00fd", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2010, "venue": "INTERSPEECH 2010", "volume": "", "issue": "", "pages": "1045--1048", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan \u010cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. 
In INTERSPEECH 2010, pages 1045-1048.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A semantic concordance", "authors": [ { "first": "George A", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "Randee", "middle": [], "last": "Tengi", "suffix": "" }, { "first": "Ross T", "middle": [], "last": "Bunker", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the workshop on HLT", "volume": "", "issue": "", "pages": "303--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller, Claudia Leacock, Randee Tengi, and Ross T Bunker. 1993. A semantic concordance. In Proceedings of the workshop on HLT, pages 303-308.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "An experimental study of graph connectivity for unsupervised word sense disambiguation", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2010, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "32", "issue": "4", "pages": "678--692", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli and Mirella Lapata. 2010. An experimental study of graph connectivity for unsupervised word sense disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(4):678-692.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2012, "venue": "Artificial Intelligence", "volume": "193", "issue": "", "pages": "217--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. 
Artificial Intelligence, 193:217-250.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "SemEval-2013: Multilingual word sense disambiguation", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "David", "middle": [], "last": "Jurgens", "suffix": "" }, { "first": "Daniele", "middle": [], "last": "Vannella", "suffix": "" } ], "year": 2013, "venue": "Second Joint Conference on Lexical and Computational Semantics", "volume": "2", "issue": "", "pages": "222--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. SemEval-2013: Multilingual word sense disambiguation. In Second Joint Conference on Lexical and Computational Semantics, volume 2, pages 222-231.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Improved statistical alignment models", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting on ACL", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting on ACL, pages 440-447.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Cross-lingual annotation projection for semantic roles", "authors": [ { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2009, "venue": "Journal of Artificial Intelligence Research", "volume": "36", "issue": "1", "pages": "307--340", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2009. Cross-lingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36(1):307-340.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A universal part-of-speech tagset", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" } ], "year": 2012, "venue": "LREC'12", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In LREC'12.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Learning internal representations by error propagation", "authors": [ { "first": "David E", "middle": [], "last": "Rumelhart", "suffix": "" }, { "first": "Geoffrey E", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Ronald J", "middle": [], "last": "Williams", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1985. Learning internal representations by error propagation. 
Technical report, DTIC Document.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "TreeTagger \u2013 a language independent part-of-speech tagger", "authors": [ { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1995, "venue": "", "volume": "43", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helmut Schmid. 1995. TreeTagger \u2013 a language independent part-of-speech tagger. Institut f\u00fcr Maschinelle Sprachverarbeitung, Universit\u00e4t Stuttgart, 43:28.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A fixed size storage O(n3) time complexity learning algorithm for fully recurrent continually running networks", "authors": [ { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1992, "venue": "Neural Computation", "volume": "4", "issue": "2", "pages": "243--248", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00fcrgen Schmidhuber. 1992. A fixed size storage O(n3) time complexity learning algorithm for fully recurrent continually running networks. Neural Computation, 4(2):243-248.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Bidirectional recurrent neural networks", "authors": [ { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Kuldip K", "middle": [], "last": "Paliwal", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Signal Processing", "volume": "45", "issue": "11", "pages": "2673--2681", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Ant colony algorithm for the unsupervised word sense disambiguation of texts: Comparison and evaluation", "authors": [ { "first": "Didier", "middle": [], "last": "Schwab", "suffix": "" }, { "first": "J\u00e9r\u00f4me", "middle": [], "last": "Goulian", "suffix": "" }, { "first": "Andon", "middle": [], "last": "Tchechmedjiev", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "Blanchon", "suffix": "" } ], "year": 2012, "venue": "COLING", "volume": "", "issue": "", "pages": "2389--2404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Didier Schwab, J\u00e9r\u00f4me Goulian, Andon Tchechmedjiev, and Herv\u00e9 Blanchon. 2012. Ant colony algorithm for the unsupervised word sense disambiguation of texts: Comparison and evaluation. In COLING, pages 2389-2404.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Comparison of feedforward and recurrent neural network language models", "authors": [ { "first": "Martin", "middle": [], "last": "Sundermeyer", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Oparin", "suffix": "" }, { "first": "J-L", "middle": [], "last": "Gauvain", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Freiberg", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Schl\u00fcter", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2013, "venue": "ICASSP", "volume": "", "issue": "", "pages": "8430--8434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Sundermeyer, Ilya Oparin, J-L Gauvain, Ben Freiberg, Ralf Schl\u00fcter, and Hermann Ney. 2013. Comparison of feedforward and recurrent neural network language models. In ICASSP, pages 8430-8434. 
IEEE.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Cross-lingual word clusters for direct transfer of linguistic structure", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 conference of the NAACL-HLT", "volume": "", "issue": "", "pages": "477--487", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 conference of the NAACL-HLT, pages 477-487.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Target language adaptation of discriminative transfer parsers", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NAACL-HLT 2013", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proceedings of NAACL-HLT 2013.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Crosslingual induction of semantic roles", "authors": [ { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Klementiev", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the ACL", "volume": "1", "issue": "", "pages": "647--656", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Titov and Alexandre Klementiev. 2012. Crosslingual induction of semantic roles. 
In Proceedings of the 50th Annual Meeting of the ACL, volume 1, pages 647-656.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Arcade II : action de recherche concert\u00e9e sur l'alignement de documents et son \u00e9valuation [ARCADE II: a concerted research action on document alignment and its evaluation]", "authors": [ { "first": "J", "middle": [], "last": "Veronis", "suffix": "" }, { "first": "O", "middle": [], "last": "Hamon", "suffix": "" }, { "first": "C", "middle": [], "last": "Ayache", "suffix": "" }, { "first": "R", "middle": [], "last": "Belmouhoub", "suffix": "" }, { "first": "O", "middle": [], "last": "Kraif", "suffix": "" }, { "first": "D", "middle": [], "last": "Laurent", "suffix": "" }, { "first": "TMH", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "N", "middle": [], "last": "Semmar", "suffix": "" }, { "first": "F", "middle": [], "last": "Stuck", "suffix": "" }, { "first": "W", "middle": [], "last": "Zaghouani", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J Veronis, O Hamon, C Ayache, R Belmouhoub, O Kraif, D Laurent, TMH Nguyen, N Semmar, F Stuck, and W Zaghouani. 2008. Arcade II : action de recherche concert\u00e9e sur l'alignement de documents et son \u00e9valuation [ARCADE II: a concerted research action on document alignment and its evaluation].", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Inducing multilingual text analysis tools via robust projection across aligned corpora", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Wicentowski", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the first international conference on Human language technology research", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the first international conference on Human language technology research, pages 1-8.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Unsupervised and lightly supervised part-of-speech tagging using recurrent neural networks", "authors": [ { "first": "Othman", "middle": [], "last": "Zennaki", "suffix": "" }, { "first": "Nasredine", "middle": [], "last": "Semmar", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Othman Zennaki, Nasredine Semmar, and Laurent Besacier. 2015a. Unsupervised and lightly supervised part-of-speech tagging using recurrent neural networks. In PACLIC 29.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Utilisation des r\u00e9seaux de neurones r\u00e9currents pour la projection interlingue d'\u00e9tiquettes morpho-syntaxiques \u00e0 partir d'un corpus parall\u00e8le [Using recurrent neural networks for the cross-lingual projection of morpho-syntactic tags from a parallel corpus]", "authors": [ { "first": "Othman", "middle": [], "last": "Zennaki", "suffix": "" }, { "first": "Nasredine", "middle": [], "last": "Semmar", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Othman Zennaki, Nasredine Semmar, and Laurent Besacier. 2015b. Utilisation des r\u00e9seaux de neurones r\u00e9currents pour la projection interlingue d'\u00e9tiquettes morpho-syntaxiques \u00e0 partir d'un corpus parall\u00e8le [Using recurrent neural networks for the cross-lingual projection of morpho-syntactic tags from a parallel corpus]. 
In TALN 2015.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Overview of the proposed model architecture for inducing multilingual RNN taggers.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "High level schema of RNN used in our work.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "SRNN variants with POS information at three levels: (a) input layer, (b) forward layer, (c) compression layer.", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "num": null, "content": "
Lang.                        French           German           Greek            Spanish
Model                        All words  OOV   All words  OOV   All words  OOV   All words  OOV
Simple Projection            80.3       77.1  78.9       73.0  77.5       72.8  80.0       79.7
SRNN MultiVec                75.0       65.4  70.3       68.8  71.1       65.4  73.4       62.4
SRNN                         78.5       70.0  76.1       76.4  75.7       70.7  78.8       72.6
BRNN                         80.6       70.9  77.5       76.6  77.2       71.0  80.5       73.1
BRNN -OOV                    81.4       77.8  77.6       77.8  77.9       75.3  80.6       74.7
Projection + SRNN            84.5       78.8  81.5       77.0  78.3       74.6  83.6       81.2
Projection + BRNN            85.2       79.0  81.9       77.1  79.2       75.0  84.4       81.7
Projection + BRNN -OOV       85.6       80.4  82.1       78.7  79.9       78.5  84.4       81.9
(Das and Petrov, 2011)       -          -     82.8       -     82.5       -     84.2       -
(Duong et al., 2013)         -          -     85.4       -     80.4       -     83.3       -
(Gouws and S\u00f8gaard, 2015a)   -          -     84.8       -     -          -     82.6       -
", "html": null, "type_str": "table", "text": "(bi-sentences from CoNLL shared" }, "TABREF1": { "num": null, "content": "", "html": null, "type_str": "table", "text": "Token-level POS tagging accuracy for Simple Projection, SRNN using MultiVec bilingual word embeddings as input, RNN 5 , Projection+RNN and methods of Das & Petrov (2011), Duong et al" } } } }