{ "paper_id": "P14-1006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:04:22.206679Z" }, "title": "Multilingual Models for Compositional Distributed Semantics", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford Oxford", "location": { "postCode": "OX1 3QD", "country": "UK" } }, "email": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford Oxford", "location": { "postCode": "OX1 3QD", "country": "UK" } }, "email": "phil.blunsom@cs.ox.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data.", "pdf_parse": { "paper_id": "P14-1006", "_pdf_hash": "", "abstract": [ { "text": "We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Distributed representations of words provide the basis for many state-of-the-art approaches to various problems in natural language processing today. Such word embeddings are naturally richer representations than those of symbolic or discrete models, and have been shown to be able to capture both syntactic and semantic information. 
Successful applications of such models include language modelling (Bengio et al., 2003) , paraphrase detection (Erk and Pad\u00f3, 2008) , and dialogue analysis (Kalchbrenner and Blunsom, 2013) .", "cite_spans": [ { "start": 400, "end": 421, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF2" }, { "start": 445, "end": 465, "text": "(Erk and Pad\u00f3, 2008)", "ref_id": "BIBREF14" }, { "start": 490, "end": 522, "text": "(Kalchbrenner and Blunsom, 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Within a monolingual context, the distributional hypothesis (Firth, 1957) forms the basis of most approaches for learning word representations. (Figure 1: Model with parallel input sentences a and b. The model minimises the distance between the sentence-level encodings of the bitext. Any composition function (CVM) can be used to generate the compositional sentence-level representations.)", "cite_spans": [ { "start": 60, "end": 73, "text": "(Firth, 1957)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 147, "end": 155, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we extend this hypothesis to multilingual data and joint-space embeddings. We present a novel unsupervised technique for learning semantic representations that leverages parallel corpora and employs semantic transfer through compositional representations. Unlike most methods for learning word representations, which are restricted to a single language, our approach learns to represent meaning across languages in a shared multilingual semantic space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present experiments on two corpora. First, we show that for cross-lingual document classification on the Reuters RCV1/RCV2 corpora (Lewis et al., 2004) , we outperform the prior state of the art (Klementiev et al., 2012) . Second, we also present classification results on a massively multilingual corpus which we derive from the TED corpus (Cettolo et al., 2012) . The results on this task, in comparison with a number of strong baselines, further demonstrate the relevance of our approach and the success of our method in learning multilingual semantic representations over a wide range of languages.", "cite_spans": [ { "start": 134, "end": 154, "text": "(Lewis et al., 2004)", "ref_id": "BIBREF24" }, { "start": 198, "end": 223, "text": "(Klementiev et al., 2012)", "ref_id": "BIBREF21" }, { "start": 344, "end": 366, "text": "(Cettolo et al., 2012)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Distributed representation learning describes the task of learning continuous representations for discrete objects. Here, we focus on learning semantic representations and investigate how the use of multilingual data can improve learning such representations at the word and higher level. We present a model that learns to represent each word in a lexicon by a continuous vector in R d . 
Such distributed representations allow a model to share meaning between similar words, and have been used to capture semantic, syntactic and morphological content (Collobert and Weston, 2008; Turian et al., 2010, inter alia) .", "cite_spans": [ { "start": 551, "end": 579, "text": "(Collobert and Weston, 2008;", "ref_id": "BIBREF9" }, { "start": 580, "end": 612, "text": "Turian et al., 2010, inter alia)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2" }, { "text": "We describe a multilingual objective function that uses a noise-contrastive update between semantic representations of different languages to learn these word embeddings. As part of this, we use a compositional vector model (CVM, henceforth) to compute semantic representations of sentences and documents. A CVM learns semantic representations of larger syntactic units given the semantic representations of their constituents (Clark and Pulman, 2007; Mitchell and Lapata, 2008; Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Socher et al., 2012; Hermann and Blunsom, 2013, inter alia) .", "cite_spans": [ { "start": 427, "end": 451, "text": "(Clark and Pulman, 2007;", "ref_id": "BIBREF6" }, { "start": 452, "end": 478, "text": "Mitchell and Lapata, 2008;", "ref_id": "BIBREF29" }, { "start": 479, "end": 507, "text": "Baroni and Zamparelli, 2010;", "ref_id": "BIBREF1" }, { "start": 508, "end": 541, "text": "Grefenstette and Sadrzadeh, 2011;", "ref_id": "BIBREF16" }, { "start": 542, "end": 562, "text": "Socher et al., 2012;", "ref_id": "BIBREF35" }, { "start": 563, "end": 601, "text": "Hermann and Blunsom, 2013, inter alia)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2" }, { "text": "A key difference between our approach and those listed above is that we only require sentencealigned parallel data in our otherwise unsupervised learning function. This removes a number of constraints that normally come with CVM models, such as the need for syntactic parse trees, word alignment or annotated data as a training signal. At the same time, by using multiple CVMs to transfer information between languages, we enable our models to capture a broader semantic context than would otherwise be possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2" }, { "text": "The idea of extracting semantics from multilingual data stems from prior work in the field of semantic grounding. Language acquisition in humans is widely seen as grounded in sensory-motor experience (Bloom, 2001; Roy, 2003) . Based on this idea, there have been some attempts at using multi-modal data for learning better vector representations of words (e.g. Srivastava and Salakhutdinov (2012) ). Such methods, however, are not easily scalable across languages or to large amounts of data for which no secondary or tertiary representation might exist.", "cite_spans": [ { "start": 200, "end": 213, "text": "(Bloom, 2001;", "ref_id": "BIBREF4" }, { "start": 214, "end": 224, "text": "Roy, 2003)", "ref_id": "BIBREF32" }, { "start": 361, "end": 396, "text": "Srivastava and Salakhutdinov (2012)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2" }, { "text": "Parallel data in multiple languages provides an alternative to such secondary representations, as parallel texts share their semantics, and thus one language can be used to ground the other. 
Some work has exploited this idea for transferring linguistic knowledge into low-resource languages or to learn distributed representations at the word level (Klementiev et al., 2012; Zou et al., 2013; Lauly et al., 2013, inter alia) . So far almost all of this work has been focused on learning multilingual representations at the word level. As distributed representations of larger expressions have been shown to be highly useful for a number of tasks, it seems to be a natural next step to attempt to induce these, too, cross-lingually.", "cite_spans": [ { "start": 349, "end": 374, "text": "(Klementiev et al., 2012;", "ref_id": "BIBREF21" }, { "start": 375, "end": 392, "text": "Zou et al., 2013;", "ref_id": "BIBREF40" }, { "start": 393, "end": 424, "text": "Lauly et al., 2013, inter alia)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2" }, { "text": "Most prior work on learning compositional semantic representations employs parse trees on their training data to structure their composition functions (Socher et al., 2012; Hermann and Blunsom, 2013, inter alia) . Further, these approaches typically depend on specific semantic signals such as sentiment- or topic-labels for their objective functions. While these methods have been shown to work in some cases, the need for parse trees and annotated data limits such approaches to resource-fortunate languages. Our novel method for learning compositional vectors removes these requirements, and as such can more easily be applied to low-resource languages. Specifically, we attempt to learn semantics from multilingual data. The idea is that, given enough parallel data, a shared representation of two parallel sentences would be forced to capture the common elements between these two sentences. What parallel sentences share, of course, are their semantics. Naturally, different languages express meaning in different ways. We utilise this diversity to abstract further from mono-lingual surface realisations to deeper semantic representations. We exploit this semantic similarity across languages by defining a bilingual (and trivially multilingual) energy as follows.", "cite_spans": [ { "start": 151, "end": 172, "text": "(Socher et al., 2012;", "ref_id": "BIBREF35" }, { "start": 173, "end": 211, "text": "Hermann and Blunsom, 2013, inter alia)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Assume two functions f : X \u2192 R d and g : Y \u2192 R d , which map sentences from languages x and y onto distributed semantic representations in R d . Given a parallel corpus C, we then define the energy of the model given two sentences (a, b) \u2208 C as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E_{bi}(a, b) = \\|f(a) - g(b)\\|^2", "eq_num": "(1)" } ], "section": "Approach", "sec_num": "3" }, { "text": "We want to minimize E bi for all semantically equivalent sentences in the corpus. In order to prevent the model from degenerating, we further introduce a noise-contrastive large-margin update which ensures that the representations of non-aligned sentences observe a certain margin from each other. For every pair of parallel sentences (a, b) we sample a number of additional sentence pairs (\u2022, n) \u2208 C, where n is, with high probability, not semantically equivalent to a. 
We use these noise samples as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "E_{hl}(a, b, n) = [m + E_{bi}(a, b) - E_{bi}(a, n)]_+", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "where [x]_+ = max(x, 0) denotes the standard hinge loss and m is the margin. This results in the following objective function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "J(\\theta) = \\sum_{(a,b) \\in C} \\sum_{i=1}^{k} E_{hl}(a, b, n_i) + \\frac{\\lambda}{2} \\|\\theta\\|^2 \\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "where \u03b8 is the set of all model variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "The objective function in Equation 2 could be coupled with any two given vector composition functions f, g from the literature. As we aim to apply our approach to a wide range of languages, we focus on composition functions that do not require any syntactic information. We evaluate the following two composition functions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two Composition Models", "sec_num": "3.1" }, { "text": "The first model, ADD, represents a sentence by the sum of its word vectors. This is a distributed bag-of-words approach, as word ordering is not taken into account by the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two Composition Models", "sec_num": "3.1" }, { "text": "Second, the BI model is designed to capture bigram information, using a non-linearity over bigram pairs in its composition function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two Composition Models", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(x) = \\sum_{i=1}^{n} \\tanh(x_{i-1} + x_i)", "eq_num": "(3)" } ], "section": "Two Composition Models", "sec_num": "3.1" }, { "text": "The use of a non-linearity enables the model to learn interesting interactions between words in a document, which the bag-of-words approach of ADD is not capable of learning. We use the hyperbolic tangent as the activation function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two Composition Models", "sec_num": "3.1" }, { "text": "For a number of tasks, such as topic modelling, representations of objects beyond the sentence level are required. While most approaches to compositional distributed semantics end at the sentence level, our model extends to document-level learning quite naturally, by recursively applying the composition and objective function (Equation 2) to compose sentences into documents. This is achieved by first computing semantic representations for each sentence in a document. Next, these representations are used as inputs in a higher-level CVM, computing a semantic representation of a document ( Figure 2 ). This recursive approach integrates document-level representations into the learning process. We can thus use corpora of parallel documents, regardless of whether they are sentence aligned or not, to propagate a semantic signal back to the individual words. 
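To make the preceding formulation concrete, the following NumPy sketch implements the two composition functions (ADD and BI), the bilingual energy of Equation 1 and the hinge loss E_hl on a toy sentence pair. It is a minimal illustration of the equations above rather than the authors' released implementation; the toy vocabularies, the random initialisation, the single noise sentence and all names in the snippet are assumptions made for brevity (the paper samples k noise pairs per positive example and sums the loss as in Equation 2).

```python
import numpy as np

rng = np.random.default_rng(0)

d = 128   # embedding dimensionality (value quoted in Section 5.1)
m = d     # hinge margin, set to m = d as in Section 5.1

# Toy vocabularies and randomly initialised word embeddings for two languages.
vocab_x = {"the": 0, "president": 1, "spoke": 2}
vocab_y = {"der": 0, "praesident": 1, "sprach": 2}
E_x = rng.normal(0.0, 0.1, size=(len(vocab_x), d))
E_y = rng.normal(0.0, 0.1, size=(len(vocab_y), d))

def compose_add(embeddings, sentence, vocab):
    """ADD composition: the sum of a sentence's word vectors."""
    return sum(embeddings[vocab[w]] for w in sentence)

def compose_bi(embeddings, sentence, vocab):
    """BI composition (Equation 3): sum of tanh over adjacent word-vector pairs."""
    vecs = [embeddings[vocab[w]] for w in sentence]
    return sum(np.tanh(vecs[i - 1] + vecs[i]) for i in range(1, len(vecs)))

def energy_bi(a_vec, b_vec):
    """Bilingual energy (Equation 1): squared Euclidean distance."""
    return float(np.sum((a_vec - b_vec) ** 2))

def hinge_loss(a_vec, b_vec, n_vec, margin=m):
    """Noise-contrastive hinge loss E_hl = [m + E_bi(a,b) - E_bi(a,n)]_+."""
    return max(0.0, margin + energy_bi(a_vec, b_vec) - energy_bi(a_vec, n_vec))

# One aligned sentence pair and one sampled noise sentence.
a = ["the", "president", "spoke"]
b = ["der", "praesident", "sprach"]
noise = ["sprach", "der"]

f_a = compose_add(E_x, a, vocab_x)      # swap in compose_bi for the BI model
g_b = compose_add(E_y, b, vocab_y)
g_n = compose_add(E_y, noise, vocab_y)

print("E_bi(a, b)    =", energy_bi(f_a, g_b))
print("E_hl(a, b, n) =", hinge_loss(f_a, g_b, g_n))
```

The DOC variants of \u00a73.2 would reuse the same composition functions one level higher, treating the resulting sentence vectors as the inputs from which a document vector is composed.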
If sentence alignment is available, of course, the document-signal can simply be combined with the sentence-signal, as we did in the experiments described in \u00a75.3.", "cite_spans": [], "ref_spans": [ { "start": 587, "end": 595, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Document-level Semantics", "sec_num": "3.2" }, { "text": "This concept of learning compositional representations for documents contrasts with prior work (Socher et al., 2011; Klementiev et al., 2012 , inter alia) which relies on summing or averaging sentence vectors if representations beyond the sentence level are required for a particular task.", "cite_spans": [ { "start": 95, "end": 116, "text": "(Socher et al., 2011;", "ref_id": "BIBREF34" }, { "start": 117, "end": 140, "text": "Klementiev et al., 2012", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Document-level Semantics", "sec_num": "3.2" }, { "text": "We evaluate the models presented in this paper both with and without the document-level signal. We refer to the individual models used as ADD and BI if used without, and as DOC/ADD and DOC/BI if used with the additional document composition function and error signal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document-level Semantics", "sec_num": "3.2" }, { "text": "We use two corpora for learning semantic representations and performing the experiments described in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "4" }, { "text": "The Europarl corpus v7 1 (Koehn, 2005) was used during initial development and testing of our approach, as well as to learn the representations used for the Cross-Lingual Document Classification task described in \u00a75.2. We considered the English-German and English-French language pairs from this corpus. From each pair the final 100,000 sentences were reserved for development.", "cite_spans": [ { "start": 25, "end": 38, "text": "(Koehn, 2005)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "4" }, { "text": "Second, we developed a massively multilingual corpus based on the TED corpus 2 for IWSLT 2013 (Cettolo et al., 2012) . This corpus contains English transcriptions and multilingual, sentence-aligned translations of talks from the TED conference. While the corpus is aimed at machine translation tasks, we use the keywords associated with each talk to build a subsidiary corpus for multilingual document classification as follows. 3 The development sections provided with the IWSLT 2013 corpus were again reserved for development. We removed approximately 10 percent of the training data in each language to create a test corpus (all talks with id \u2265 1,400). The new training corpus consists of a total of 12,078 parallel documents distributed across 12 language pairs 4 . In total, this amounts to 1,678,219 non-English sentences (the number of unique English sentences is smaller as many documents are translated into multiple languages and thus appear repeatedly in the corpus). Each document (talk) contains one or several keywords. 
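As an illustration of how the keyword annotations can be turned into a multi-label classification dataset, the sketch below selects the most frequent keywords over the corpus and assigns each talk a binary label vector. The helper name, the toy keyword lists and the cut-off passed in the example are assumptions for illustration, not the authors' released preprocessing code; the experiments described next use the 15 most frequent keywords.

```python
from collections import Counter

def build_label_matrix(talk_keywords, num_labels=15):
    """Map each talk's keyword list to a binary vector over the
    num_labels most frequent keywords in the corpus."""
    counts = Counter(kw for kws in talk_keywords.values() for kw in kws)
    label_set = [kw for kw, _ in counts.most_common(num_labels)]
    index = {kw: i for i, kw in enumerate(label_set)}
    labels = {}
    for talk_id, kws in talk_keywords.items():
        vec = [0] * len(label_set)
        for kw in kws:
            if kw in index:
                vec[index[kw]] = 1
        labels[talk_id] = vec
    return label_set, labels

# Toy example with hypothetical keyword annotations for two talks.
talks = {1401: ["technology", "design"], 1402: ["science", "technology"]}
label_set, labels = build_label_matrix(talks, num_labels=3)
print(label_set, labels)
```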
We used the 15 most frequent keywords for the topic classification experiments described in \u00a75.3.", "cite_spans": [ { "start": 94, "end": 116, "text": "(Cettolo et al., 2012)", "ref_id": "BIBREF5" }, { "start": 428, "end": 429, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "4" }, { "text": "Both corpora were pre-processed using the set of tools provided by cdec 5 for tokenizing and lowercasing the data. Further, all empty sentences and their translations were removed from the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "4" }, { "text": "We report results on two experiments. First, we replicate the cross-lingual document classification task of Klementiev et al. (2012) , learning distributed representations on the Europarl corpus and evaluating on documents from the Reuters RCV1/RCV2 corpora. Subsequently, we design a multi-label classification task using the TED corpus, both for training and evaluating. The use of a wider range of languages in the second experiment allows us to better evaluate our models' capabilities in learning a shared multilingual semantic representation. We also investigate the learned embeddings from a qualitative perspective in \u00a75.4.", "cite_spans": [ { "start": 108, "end": 132, "text": "Klementiev et al. (2012)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "All model weights were randomly initialised using a Gaussian distribution (\u00b5=0, \u03c3\u00b2=0.1). We used the available development data to set our model parameters. For each positive sample we used a number of noise samples (k \u2208 {1, 10, 50}), randomly drawn from the corpus at each training epoch. All our embeddings have dimensionality d=128, with the margin set to m=d. 6 Further, we use L2 regularization with \u03bb=1 and step-size in {0.01, 0.05}. We use 100 iterations for the RCV task, 500 for the TED single and 5 for the joint corpora. We use the adaptive gradient method, AdaGrad (Duchi et al., 2011) , for updating the weights of our models, in a mini-batch setting (b \u2208 {10, 50}). All settings, our model implementation and scripts to replicate our experiments are available at http://www.karlmoritz.com/.", "cite_spans": [ { "start": 579, "end": 599, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "5.1" }, { "text": "We evaluate our models on the cross-lingual document classification (CLDC, henceforth) task first described in Klementiev et al. (2012) . This task involves learning language-independent embeddings which are then used for document classification across the English-German language pair. For this, CLDC employs a particular kind of supervision, namely using supervised training data in one language and evaluating without further supervision in another. Thus, CLDC can be used to establish whether our learned representations are semantically useful across multiple languages.", "cite_spans": [ { "start": 111, "end": 135, "text": "Klementiev et al. (2012)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "RCV1/RCV2 Document Classification", "sec_num": "5.2" }, { "text": "We follow the experimental setup described in Klementiev et al. (2012) , with the exception that we learn our embeddings using solely the Europarl data and use the Reuters corpora only for classifier training and testing. 
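As a brief aside on the learning regime of \u00a75.1: training combines the margin objective with per-parameter AdaGrad updates computed over mini-batches. The sketch below shows the AdaGrad rule in isolation; the step size and dimensionality follow the values quoted in \u00a75.1, while the quadratic stand-in gradient and the function name are assumptions rather than the authors' implementation.

```python
import numpy as np

def adagrad_update(params, grads, cache, step_size=0.05, eps=1e-8):
    """One AdaGrad step (Duchi et al., 2011): scale the step per parameter
    by the root of the accumulated squared gradients."""
    cache += grads ** 2
    params -= step_size * grads / (np.sqrt(cache) + eps)
    return params, cache

# Toy usage: drive a 128-dimensional parameter vector towards zero, with
# 2 * theta standing in for the gradient of the real objective J(theta).
rng = np.random.default_rng(1)
theta = rng.normal(0.0, 0.1, size=128)
cache = np.zeros_like(theta)
for _ in range(100):
    grad = 2 * theta
    theta, cache = adagrad_update(theta, grad, cache)
print("parameter norm after 100 updates:", np.linalg.norm(theta))
```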
Each document in the classification task is represented by the average of the d-dimensional representations of all its sentences. We train the multiclass classifier using an averaged perceptron (Collins, 2002) with the same settings as in Klementiev et al. (2012) . We present results from four models. The ADD model is trained on 500k sentence pairs of the English-German parallel section of the Europarl corpus. The ADD+ model uses an additional 500k parallel sentences from the English-French corpus, resulting in one million English sentences, each paired up with either a German or a French sentence, with BI and BI+ trained accordingly. The motivation behind ADD+ and BI+ is to investigate whether we can learn better embeddings by introducing additional data from other languages. A similar idea exists in machine translation where English is frequently used to pivot between other languages (Cohn and Lapata, 2007) .", "cite_spans": [ { "start": 46, "end": 70, "text": "Klementiev et al. (2012)", "ref_id": "BIBREF21" }, { "start": 423, "end": 438, "text": "(Collins, 2002)", "ref_id": "BIBREF8" }, { "start": 468, "end": 492, "text": "Klementiev et al. (2012)", "ref_id": "BIBREF21" }, { "start": 1128, "end": 1151, "text": "(Cohn and Lapata, 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "RCV1/RCV2 Document Classification", "sec_num": "5.2" }, { "text": "The actual CLDC experiments are performed by training on English and testing on German documents and vice versa. Following prior work, we use varying sizes between 100 and 10,000 documents when training the multiclass classifier. The results of this task across training sizes are in Figure 3. Table 1 shows the results for training on 1,000 documents compared with the results published in Klementiev et al. (2012) . Our models outperform the prior state of the art, with the BI models performing slightly better than the ADD models. As the relative results indicate, the addition of a second language improves model performance. It is interesting to note that results improve in both directions of the task, even though no additional German data was used for the '+' models.", "cite_spans": [ { "start": 391, "end": 415, "text": "Klementiev et al. (2012)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 284, "end": 290, "text": "Figure", "ref_id": null }, { "start": 294, "end": 301, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "RCV1/RCV2 Document Classification", "sec_num": "5.2" }, { "text": "Here we describe our experiments on the TED corpus, which enables us to scale up to multilingual learning. Consisting of a large number of relatively short and parallel documents, this corpus allows us to evaluate the performance of the DOC model described in \u00a73.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TED Corpus Experiments", "sec_num": "5.3" }, { "text": "We use the training data of the corpus to learn distributed representations across 12 languages. Training is performed in two settings. In the single mode, vectors are learnt from a single language pair (en-X), while in the joint mode vector learning is performed on all parallel sub-corpora simultaneously. 
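The cross-lingual evaluation protocol of \u00a75.2 can be summarised in a few steps once joint-space embeddings exist: average sentence vectors into document vectors, train a classifier on labelled documents in one language, and test it on documents in the other language without further supervision. The sketch below illustrates this flow with random stand-in embeddings and scikit-learn's Perceptron in place of the averaged perceptron of Collins (2002); the data, labels and dimensions are assumptions, so the printed accuracy is only chance level.

```python
import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(2)
d = 128

def document_vector(sentence_vectors):
    """Represent a document by the average of its sentence embeddings."""
    return np.mean(sentence_vectors, axis=0)

def random_docs(n):
    """Stand-in for joint-space sentence embeddings of n documents."""
    return [rng.normal(size=(rng.integers(3, 10), d)) for _ in range(n)]

en_docs, de_docs = random_docs(200), random_docs(200)
en_labels = rng.integers(0, 4, size=200)   # four topic classes
de_labels = rng.integers(0, 4, size=200)

X_en = np.stack([document_vector(doc) for doc in en_docs])
X_de = np.stack([document_vector(doc) for doc in de_docs])

# Train on English documents only, evaluate on German documents.
clf = Perceptron(max_iter=100).fit(X_en, en_labels)
print("cross-lingual accuracy:", clf.score(X_de, de_labels))
```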
This setting causes words from all languages to be embedded in a single semantic space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TED Corpus Experiments", "sec_num": "5.3" }, { "text": "First, we evaluate the effect of the document-level error signal (DOC, described in \u00a73.2), as well as whether our multilingual learning method can extend to a larger variety of languages. We train DOC models, using both ADD and BI as CVM (DOC/ADD, DOC/BI), both in the single and joint mode. For comparison, we also train ADD and DOC models without the document-level error signal. The resulting document-level representations are used to train classifiers (system and settings as in \u00a75.2) for each language, which are then evaluated in the paired language. In the English case we train twelve individual classifiers, each using the training data of a single language pair only. As described in \u00a74, we use 15 keywords for the classification task. Due to space limitations, we report cumulative results in the form of F1-scores throughout this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TED Corpus Experiments", "sec_num": "5.3" }, { "text": "MT System We develop a machine translation baseline as follows. We train a machine translation tool on the parallel training data, using the development data of each language pair to optimize the translation system. We use the cdec decoder (Dyer et al., 2010) with default settings for this purpose. With this system we translate the test data, and then use a Na\u00efve Bayes classifier 7 for the actual experiments. To exemplify, this means the de\u2192ar result is produced by training a translation system from Arabic to German. The Arabic test set is translated into German. A classifier is then trained on the German training data and evaluated on the translated Arabic. (Table 4: F1-scores on the TED corpus document classification task when training and evaluating on the same language. Baseline embeddings are Senna (Collobert et al., 2011) and Polyglot (Al-Rfou' et al., 2013). Caption fragment, likely from Figure 3: see Table 1 for model descriptions; the left chart shows results for these models when trained on German data and evaluated on English data, the right chart vice versa.)", "cite_spans": [ { "start": 240, "end": 259, "text": "(Dyer et al., 2010)", "ref_id": "BIBREF13" }, { "start": 747, "end": 771, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF10" }, { "start": 776, "end": 808, "text": "Polyglot (Al-Rfou' et al., 2013)", "ref_id": null } ], "ref_spans": [ { "start": 599, "end": 606, "text": "Table 4", "ref_id": null }, { "start": 811, "end": 818, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "TED Corpus Experiments", "sec_num": "5.3" }, { "text": "While we developed this system as a baseline, it must be noted that the classifier of this system has access to significantly more information (all words in the document) as opposed to our models (one embedding per document), and we do not expect to necessarily beat this system. The results of this experiment are in Table 2 . When comparing the results between the ADD model and the models trained using the document-level error signal, the benefit of this additional signal becomes clear. The joint training mode leads to a relative improvement when training on English data and evaluating in a second language. This suggests that the joint mode improves the quality of the English embeddings more than it affects the L2-embeddings. 
More surprising, perhaps, is the relative performance between the ADD and BI composition functions, especially when compared to the results in \u00a75.2, where the BI models relatively consistently performed better. We suspect that the better performance of the additive composition function on this task is related to the smaller amount of training data available which could cause sparsity issues for the bigram model.", "cite_spans": [], "ref_spans": [ { "start": 386, "end": 393, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "TED Corpus Experiments", "sec_num": "5.3" }, { "text": "As expected, the MT system slightly outperforms our models on most language pairs. However, the overall performance of the models is comparable to that of the MT system. Considering the relative amount of information available during the classifier training phase, this indicates that our learned representations are semantically useful, capturing almost the same amount of information as available to the Na\u00efve Bayes classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TED Corpus Experiments", "sec_num": "5.3" }, { "text": "We next investigate linguistic transfer across languages. We re-use the embeddings learned with the DOC/ADD joint model from the previous experiment for this purpose, and train classifiers on all non-English languages using those embeddings. Subsequently, we evaluate their performance in classifying documents in the remaining languages. Results for this task are in Table 3 . While the results across language-pairs might not be very insightful, the overall good performance compared with the results in Table 2 implies that we learnt semantically meaningful vectors and in fact a joint embedding space across thirteen languages.", "cite_spans": [], "ref_spans": [ { "start": 368, "end": 375, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 506, "end": 513, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "TED Corpus Experiments", "sec_num": "5.3" }, { "text": "In a third evaluation (Table 4) , we apply the embeddings learnt with our models to a monolingual classification task, enabling us to compare with prior work on distributed representation learning. In this experiment a classifier is trained in one language and then evaluated in the same. We again use a Na\u00efve Bayes classifier on the raw data to establish a reasonable upper bound.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 31, "text": "(Table 4)", "ref_id": null } ], "eq_spans": [], "section": "TED Corpus Experiments", "sec_num": "5.3" }, { "text": "We compare our embeddings with the SENNA embeddings, which achieve state-of-the-art performance on a number of tasks (Collobert et al., 2011) . Additionally, we use the Polyglot embeddings of Al-Rfou' et al. (2013) , who published word embeddings across 100 languages, including all languages considered in this paper. We represent each document by the mean of its word vectors and then apply the same classifier training and testing regime as with our models. Even though both of these sets of embeddings were trained on much larger datasets than ours, our models outperform these baselines on all languages, even outperforming the Na\u00efve Bayes system on several languages. While this may partly be attributed to the fact that our vectors were learned on in-domain data, this is still a very positive outcome. (Figure 4: t-SNE projections for a number of English, French and German words as represented by the BI+ model. Even though the model did not use any parallel French-German data during training, it learns semantic similarity between these two languages using English as a pivot, and semantically clusters words across all languages.)", "cite_spans": [ { "start": 117, "end": 141, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF10" }, { "start": 192, "end": 214, "text": "Al-Rfou' et al. (2013)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 665, "end": 673, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "TED Corpus Experiments", "sec_num": "5.3" }, { "text": "While the classification experiments focused on establishing the semantic content of the sentence level representations, we also want to briefly investigate the induced word embeddings. We use the BI+ model trained on the Europarl corpus for this purpose. Figure 4 shows the t-SNE projections for a number of English, French and German words. Even though the model did not use any parallel French-German data during training, it still managed to learn semantic word-word similarity across these two languages.", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 264, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Linguistic Analysis", "sec_num": "5.4" }, { "text": "Going one step further, Figure 5 shows t-SNE projections for a number of short phrases in these three languages. We use the English the president and gender-specific expressions Mr President and Madam President as well as gender-specific equivalents in French and German. The projection demonstrates a number of interesting results: First, the model correctly clusters the words into three groups, corresponding to the three English forms and their associated translations. Second, a separation between genders can be observed, with male forms on the bottom half of the chart and female forms on the top, with the neutral the president in the vertical middle. Finally, if we assume a horizontal line going through the president, this line could be interpreted as a \"gender divide\", with male and female versions of one expression mirroring each other on that line. In the case of the president and its translations, this effect becomes even clearer, with the neutral English expression being projected close to the mid-point between each other language's gender-specific versions.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Linguistic Analysis", "sec_num": "5.4" }, { "text": "These results further support our hypothesis that the bilingual contrastive error function can learn semantically plausible embeddings and furthermore, that it can abstract away from mono-lingual surface realisations into a shared semantic space across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Analysis", "sec_num": "5.4" }, { "text": "Distributed Representations Distributed representations can be learned through a number of approaches. In their simplest form, distributional information from large corpora can be used to learn embeddings, where the words appearing within a certain window of the target word are used to compute that word's embedding. 
This is related to topic-modelling techniques such as LSA (Dumais et al., 1988) , LSI, and LDA (Blei et al., 2003) , but these methods use a document-level context, and tend to capture the topics a word is used in rather than its more immediate syntactic context.", "cite_spans": [ { "start": 376, "end": 397, "text": "(Dumais et al., 1988)", "ref_id": "BIBREF12" }, { "start": 413, "end": 432, "text": "(Blei et al., 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Neural language models are another popular approach for inducing distributed word representations (Bengio et al., 2003) . They have received a lot of attention in recent years (Collobert and Weston, 2008; Mnih and Hinton, 2009; Mikolov et al., 2010, inter alia) and have achieved state of the art performance in language modelling. Collobert et al. (2011) further popularised using neural network architectures for learning word embeddings from large amounts of largely unlabelled data by showing the embeddings can then be used to improve standard supervised tasks.", "cite_spans": [ { "start": 98, "end": 119, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF2" }, { "start": 176, "end": 204, "text": "(Collobert and Weston, 2008;", "ref_id": "BIBREF9" }, { "start": 205, "end": 227, "text": "Mnih and Hinton, 2009;", "ref_id": "BIBREF30" }, { "start": 228, "end": 261, "text": "Mikolov et al., 2010, inter alia)", "ref_id": null }, { "start": 332, "end": 355, "text": "Collobert et al. (2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Unsupervised word representations can easily be plugged into a variety of NLP related tasks. Tasks, where the use of distributed representations has resulted in improvements include topic modelling (Blei et al., 2003) or named entity recognition (Turian et al., 2010; Collobert et al., 2011) .", "cite_spans": [ { "start": 198, "end": 217, "text": "(Blei et al., 2003)", "ref_id": "BIBREF3" }, { "start": 246, "end": 267, "text": "(Turian et al., 2010;", "ref_id": "BIBREF37" }, { "start": 268, "end": 291, "text": "Collobert et al., 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Compositional Vector Models For a number of important problems, semantic representations of individual words do not suffice, but instead a semantic representation of a larger structure-e.g. a phrase or a sentence-is required. Self-evidently, sparsity prevents the learning of such representations using the same collocational methods as applied to the word level. Most literature instead focuses on learning composition functions that represent the semantics of a larger structure as a function of the representations of its parts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Very simple composition functions have been shown to suffice for tasks such as judging bigram semantic similarity (Mitchell and Lapata, 2008) . 
More complex composition functions using matrix-vector composition, convolutional neural networks or tensor composition have proved useful in tasks such as sentiment analysis (Socher et al., 2011; Hermann and Blunsom, 2013) , relational similarity (Turney, 2012) or dialogue analysis (Kalchbrenner and Blunsom, 2013) .", "cite_spans": [ { "start": 114, "end": 141, "text": "(Mitchell and Lapata, 2008)", "ref_id": "BIBREF29" }, { "start": 319, "end": 340, "text": "(Socher et al., 2011;", "ref_id": "BIBREF34" }, { "start": 341, "end": 367, "text": "Hermann and Blunsom, 2013)", "ref_id": "BIBREF18" }, { "start": 392, "end": 406, "text": "(Turney, 2012)", "ref_id": "BIBREF38" }, { "start": 428, "end": 460, "text": "(Kalchbrenner and Blunsom, 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Multilingual Representation Learning Most research on distributed representation induction has focused on single languages. English, with its large number of annotated resources, has enjoyed most attention. However, there exists a body of prior work on learning multilingual embeddings or on using parallel data to transfer linguistic information across languages. One has to differentiate between approaches such as Al-Rfou' et al. (2013) , which learn embeddings across a large variety of languages, and models such as ours, which learn joint embeddings, that is, a projection into a shared semantic space across multiple languages.", "cite_spans": [ { "start": 419, "end": 441, "text": "Al-Rfou' et al. (2013)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Related to our work, Yih et al. (2011) proposed S2Nets to learn joint embeddings of tf-idf vectors for comparable documents. Their architecture optimises the cosine similarity of documents, using relative semantic similarity scores during learning. More recently, Lauly et al. (2013) proposed a bag-of-words autoencoder model, where the bag-of-words representation in one language is used to train the embeddings in another. By placing their vocabulary in a binary branching tree, the probabilistic setup of this model is similar to that of Mnih and Hinton (2009) . Similarly, Sarath Chandar et al. (2013) train a cross-lingual encoder, where an autoencoder is used to recreate words in two languages in parallel. This is effectively the linguistic extension of Ngiam et al. (2011) , who used a similar method for audio and video data. Hermann and Blunsom (2014) propose a large-margin learner for multilingual word representations, similar to the basic additive model proposed here, which, like the approaches above, relies on a bag-of-words model for sentence representations. Klementiev et al. (2012) , our baseline in \u00a75.2, use a form of multi-agent learning on word-aligned parallel data to transfer embeddings from one language to another. Earlier work, Haghighi et al. (2008) , proposed a method for inducing bilingual lexica using monolingual feature representations and a small initial lexicon to bootstrap with. This approach has recently been extended by Mikolov et al. (2013a) and Mikolov et al. (2013b) , who developed a method for learning transformation matrices to convert semantic vectors of one language into those of another. It was demonstrated that this approach can be applied to improve tasks related to machine translation. Their CBOW model is also worth noting for its similarities to the ADD composition function used here. 
Using a slightly different approach, Zou et al. (2013) , also learned bilingual embeddings for machine translation.", "cite_spans": [ { "start": 21, "end": 38, "text": "Yih et al. (2011)", "ref_id": "BIBREF39" }, { "start": 264, "end": 283, "text": "Lauly et al. (2013)", "ref_id": "BIBREF23" }, { "start": 540, "end": 562, "text": "Mnih and Hinton (2009)", "ref_id": "BIBREF30" }, { "start": 761, "end": 780, "text": "Ngiam et al. (2011)", "ref_id": "BIBREF31" }, { "start": 835, "end": 861, "text": "Hermann and Blunsom (2014)", "ref_id": "BIBREF19" }, { "start": 1077, "end": 1101, "text": "Klementiev et al. (2012)", "ref_id": "BIBREF21" }, { "start": 1257, "end": 1279, "text": "Haghighi et al. (2008)", "ref_id": "BIBREF17" }, { "start": 1463, "end": 1485, "text": "Mikolov et al. (2013a)", "ref_id": "BIBREF27" }, { "start": 1488, "end": 1510, "text": "Mikolov et al. (2013b)", "ref_id": "BIBREF28" }, { "start": 1882, "end": 1899, "text": "Zou et al. (2013)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "To summarize, we have presented a novel method for learning multilingual word embeddings using parallel data in conjunction with a multilingual objective function for compositional vector models. This approach extends the distributional hypothesis to multilingual joint-space representations. Coupled with very simple composition functions, vectors learned with this method outperform the state of the art on the task of cross-lingual document classification. Further experiments and analysis support our hypothesis that bilingual signals are a useful tool for learning distributed representations by enabling models to abstract away from mono-lingual surface realisations into a deeper semantic space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "http://www.statmt.org/europarl/ 2 https://wit3.fbk.eu/ 3 http://www.clg.ox.ac.uk/tedcldc/ 4 English to Arabic, German, French, Spanish, Italian, Dutch, Polish, Brazilian Portuguese, Romanian, Russian and Turkish. Chinese, Farsi and Slowenian were removed due to the small size of those datasets.5 http://cdec-decoder.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "On the RCV task we also report results for d=40 which matches the dimensionality ofKlementiev et al. (2012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the implementation in Mallet(McCallum, 2002)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by a Xerox Foundation Award and EPSRC grant number EP/K036580/1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Polyglot: Distributed word representations for multilingual nlp", "authors": [ { "first": "R", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "'", "middle": [], "last": "", "suffix": "" }, { "first": "B", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "S", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2013, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Al-Rfou', B. Perozzi, and S. Skiena. 2013. Poly- glot: Distributed word representations for multilin- gual nlp. 
In Proceedings of CoNLL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space", "authors": [ { "first": "M", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "R", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2010, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Baroni and R. Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of EMNLP.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A neural probabilistic language model", "authors": [ { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "P", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "C", "middle": [], "last": "Janvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. 2003. A neural probabilistic language model. Jour- nal of Machine Learning Research, 3:1137-1155, March.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Latent dirichlet allocation", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. M. Blei, A. Y. Ng, and M. I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Precis of how children learn the meanings of words", "authors": [ { "first": "P", "middle": [], "last": "Bloom", "suffix": "" } ], "year": 2001, "venue": "Behavioral and Brain Sciences", "volume": "24", "issue": "", "pages": "1095--1103", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Bloom. 2001. Precis of how children learn the meanings of words. Behavioral and Brain Sciences, 24:1095-1103.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Wit 3 : Web inventory of transcribed and translated talks", "authors": [ { "first": "M", "middle": [], "last": "Cettolo", "suffix": "" }, { "first": "C", "middle": [], "last": "Girardi", "suffix": "" }, { "first": "M", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2012, "venue": "Proceedings of EAMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Cettolo, C. Girardi, and M. Federico. 2012. Wit 3 : Web inventory of transcribed and translated talks. In Proceedings of EAMT.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Combining symbolic and distributional models of meaning", "authors": [ { "first": "S", "middle": [], "last": "Clark", "suffix": "" }, { "first": "S", "middle": [], "last": "Pulman", "suffix": "" } ], "year": 2007, "venue": "Proceedings of AAAI Spring Symposium on Quantum Interaction", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Clark and S. Pulman. 2007. Combining symbolic and distributional models of meaning. 
In Proceed- ings of AAAI Spring Symposium on Quantum Inter- action. AAAI Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Machine translation by triangulation: Making effective use of multi-parallel corpora", "authors": [ { "first": "T", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Cohn and M. Lapata. 2007. Machine translation by triangulation: Making effective use of multi-parallel corpora. In Proceedings of ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of ACL- EMNLP.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "R", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Collobert and J. Weston. 2008. A unified architec- ture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "R", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "M", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "K", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "P", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural lan- guage processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "J", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "E", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Duchi, E. Hazan, and Y. Singer. 2011. Adaptive sub- gradient methods for online learning and stochas- tic optimization. 
Journal of Machine Learning Re- search, 12:2121-2159, July.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Using latent semantic analysis to improve access to textual information", "authors": [ { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "G", "middle": [ "W" ], "last": "Furnas", "suffix": "" }, { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "S", "middle": [], "last": "Deerwester", "suffix": "" }, { "first": "R", "middle": [], "last": "Harshman", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. T. Dumais, G. W. Furnas, T. K. Landauer, S. Deer- wester, and R. Harshman. 1988. Using latent se- mantic analysis to improve access to textual infor- mation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "cdec: A Decoder, Alignment, and Learning framework for finite-state and context-free translation models", "authors": [ { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "A", "middle": [], "last": "Lopez", "suffix": "" }, { "first": "J", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "J", "middle": [], "last": "Weese", "suffix": "" }, { "first": "F", "middle": [], "last": "Ture", "suffix": "" }, { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "H", "middle": [], "last": "Setiawan", "suffix": "" }, { "first": "V", "middle": [], "last": "Eidelman", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Dyer, A. Lopez, J. Ganitkevitch, J. Weese, F. Ture, P. Blunsom, H. Setiawan, V. Eidelman, and P. Resnik. 2010. cdec: A Decoder, Alignment, and Learning framework for finite-state and context-free translation models. In Proceedings of ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A structured vector space model for word meaning in context", "authors": [ { "first": "K", "middle": [], "last": "Erk", "suffix": "" }, { "first": "S", "middle": [], "last": "Pad\u00f3", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Erk and S. Pad\u00f3. 2008. A structured vector space model for word meaning in context. Proceedings of EMNLP.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A synopsis of linguistic theory 1930-55", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Firth", "suffix": "" } ], "year": 1952, "venue": "", "volume": "59", "issue": "", "pages": "1--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. R. Firth. 1957. A synopsis of linguistic theory 1930- 55. 1952-59:1-32.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Experimental support for a categorical compositional distributional model of meaning", "authors": [ { "first": "E", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "M", "middle": [], "last": "Sadrzadeh", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Grefenstette and M. Sadrzadeh. 2011. 
Experi- mental support for a categorical compositional dis- tributional model of meaning. In Proceedings of EMNLP.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning bilingual lexicons from monolingual corpora", "authors": [ { "first": "A", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "T", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Haghighi, P. Liang, T. Berg-Kirkpatrick, and D. Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL-HLT.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Role of Syntax in Vector Space Models of Compositional Semantics", "authors": [ { "first": "K", "middle": [ "M" ], "last": "Hermann", "suffix": "" }, { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. M. Hermann and P. Blunsom. 2013. The Role of Syntax in Vector Space Models of Compositional Semantics. In Proceedings of ACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multilingual Distributed Representations without Word Alignment", "authors": [ { "first": "K", "middle": [ "M" ], "last": "Hermann", "suffix": "" }, { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. M. Hermann and P. Blunsom. 2014. Multilingual Distributed Representations without Word Align- ment. In Proceedings of ICLR.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Recurrent convolutional neural networks for discourse compositionality", "authors": [ { "first": "N", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the ACL Workshop on Continuous Vector Space Models and their Compositionality", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Kalchbrenner and P. Blunsom. 2013. Recurrent convolutional neural networks for discourse compo- sitionality. Proceedings of the ACL Workshop on Continuous Vector Space Models and their Compo- sitionality.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Inducing crosslingual distributed representations of words", "authors": [ { "first": "A", "middle": [], "last": "Klementiev", "suffix": "" }, { "first": "I", "middle": [], "last": "Titov", "suffix": "" }, { "first": "B", "middle": [], "last": "Bhattarai", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Klementiev, I. Titov, and B. Bhattarai. 2012. In- ducing crosslingual distributed representations of words. 
In Proceedings of COLING.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Europarl: A Parallel Corpus for Statistical Machine Translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Machine Translation Summit", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn. 2005. Europarl: A Parallel Corpus for Sta- tistical Machine Translation. In Proceedings of the Machine Translation Summit.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning multilingual word representations using a bag-of-words autoencoder", "authors": [ { "first": "S", "middle": [], "last": "Lauly", "suffix": "" }, { "first": "A", "middle": [], "last": "Boulanger", "suffix": "" }, { "first": "H", "middle": [], "last": "Larochelle", "suffix": "" } ], "year": 2013, "venue": "Deep Learning Workshop at NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Lauly, A. Boulanger, and H. Larochelle. 2013. Learning multilingual word representations using a bag-of-words autoencoder. In Deep Learning Work- shop at NIPS.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Rcv1: A new benchmark collection for text categorization research", "authors": [ { "first": "D", "middle": [ "D" ], "last": "Lewis", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yang", "suffix": "" }, { "first": "T", "middle": [ "G" ], "last": "Rose", "suffix": "" }, { "first": "F", "middle": [], "last": "Li", "suffix": "" } ], "year": 2004, "venue": "Journal of Machine Learning Research", "volume": "5", "issue": "", "pages": "361--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. D. Lewis, Y. Yang, T. G. Rose, and F. Li. 2004. Rcv1: A new benchmark collection for text catego- rization research. Journal of Machine Learning Re- search, 5:361-397, December.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Mallet: A machine learning for language toolkit", "authors": [ { "first": "A", "middle": [ "K" ], "last": "Mccallum", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. K. McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Recurrent neural network based language model", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "M", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "L", "middle": [], "last": "Burget", "suffix": "" }, { "first": "J", "middle": [], "last": "\u010cernock\u00fd", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2010, "venue": "Proceedings of INTER-SPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Mikolov, M. Karafi\u00e1t, L. Burget, J.\u010cernock\u00fd, and S. Khudanpur. 2010. Recurrent neural network based language model. 
In Proceedings of INTER- SPEECH.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Efficient Estimation of Word Representations in Vector Space", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013a. Efficient Estimation of Word Representations in Vector Space. CoRR.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Exploiting Similarities among Languages for Machine Translation", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Mikolov, Q. V. Le, and I. Sutskever. 2013b. Ex- ploiting Similarities among Languages for Machine Translation. CoRR.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Vector-based models of semantic composition", "authors": [ { "first": "J", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Mitchell and M. Lapata. 2008. Vector-based models of semantic composition. In In Proceedings of ACL.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A scalable hierarchical distributed language model", "authors": [ { "first": "A", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "G", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2009, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Mnih and G. Hinton. 2009. A scalable hierarchi- cal distributed language model. In Proceedings of NIPS.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Multimodal deep learning", "authors": [ { "first": "J", "middle": [], "last": "Ngiam", "suffix": "" }, { "first": "A", "middle": [], "last": "Khosla", "suffix": "" }, { "first": "M", "middle": [], "last": "Kim", "suffix": "" }, { "first": "J", "middle": [], "last": "Nam", "suffix": "" }, { "first": "H", "middle": [], "last": "Lee", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2011, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng. 2011. Multimodal deep learning. In ICML.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Grounded spoken language acquisition: Experiments in word learning", "authors": [ { "first": "D", "middle": [], "last": "Roy", "suffix": "" } ], "year": 2003, "venue": "IEEE Transactions on Multimedia", "volume": "5", "issue": "2", "pages": "197--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Roy. 2003. Grounded spoken language acquisition: Experiments in word learning. 
IEEE Transactions on Multimedia, 5(2):197-209, June.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Multilingual deep learning", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Sarath Chandar", "suffix": "" }, { "first": "M", "middle": [ "K" ], "last": "Mitesh", "suffix": "" }, { "first": "B", "middle": [], "last": "Ravindran", "suffix": "" }, { "first": "V", "middle": [], "last": "Raykar", "suffix": "" }, { "first": "A", "middle": [], "last": "Saha", "suffix": "" } ], "year": 2013, "venue": "Deep Learning Workshop at NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Sarath Chandar, M. K. Mitesh, B. Ravindran, V. Raykar, and A. Saha. 2013. Multilingual deep learning. In Deep Learning Workshop at NIPS.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Semi-supervised recursive autoencoders for predicting sentiment distributions", "authors": [ { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "J", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "E", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Socher, J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Semantic compositionality through recursive matrix-vector spaces", "authors": [ { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "B", "middle": [], "last": "Huval", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Proceedings of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "1201--1211", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Socher, B. Huval, C. D. Manning, and A. Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP- CoNLL, pages 1201-1211.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Multimodal learning with deep boltzmann machines", "authors": [ { "first": "N", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "R", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2012, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Srivastava and R. Salakhutdinov. 2012. Multimodal learning with deep boltzmann machines. In Pro- ceedings of NIPS.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Word representations: a simple and general method for semisupervised learning", "authors": [ { "first": "J", "middle": [], "last": "Turian", "suffix": "" }, { "first": "L", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Turian, L. Ratinov, and Y. Bengio. 2010. Word rep- resentations: a simple and general method for semi- supervised learning. 
In Proceedings of ACL.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Domain and function: A dualspace model of semantic relations and compositions", "authors": [ { "first": "P", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2012, "venue": "Journal of Artificial Intelligence Research", "volume": "44", "issue": "", "pages": "533--585", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. D. Turney. 2012. Domain and function: A dual- space model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533- 585.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Learning discriminative projections for text similarity measures", "authors": [ { "first": "W.-T", "middle": [], "last": "Yih", "suffix": "" }, { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Platt", "suffix": "" }, { "first": "C", "middle": [], "last": "Meek", "suffix": "" } ], "year": 2011, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.-T. Yih, K. Toutanova, J. C. Platt, and C. Meek. 2011. Learning discriminative projections for text similarity measures. In Proceedings of CoNLL.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Bilingual word embeddings for phrase-based machine translation", "authors": [ { "first": "W", "middle": [ "Y" ], "last": "Zou", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "D", "middle": [], "last": "Cer", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Y. Zou, R. Socher, D. Cer, and C. D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of EMNLP.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Description of a parallel document-level compositional vector model (DOC). The model recursively computes semantic representations for each sentence of a document and then for the document itself, treating the sentence vectors as inputs for a second CVM." }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Classification accuracy for a number of models (see" }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "t-SNE projections for a number of short phrases in three languages as represented by the BI+ model. The projection demonstrates linguistic transfer through a pivot by. It separates phrases by gender (red for female, blue for male, and green for neutral) and aligns matching phrases across languages." }, "TABREF1": { "num": null, "html": null, "text": "SettingLanguagesArabic German Spanish French Italian Dutch Polish Pt-Br Roman. Russian Turkish", "type_str": "table", "content": "
Setting          Arabic German Spanish French Italian Dutch Polish Pt-Br Roman. Russian Turkish
en \u2192 L2
MT System        0.429 0.465 0.518 0.526 0.514 0.505 0.445 0.470 0.493 0.432 0.409
ADD single       0.328 0.343 0.401 0.275 0.282 0.317 0.141 0.227 0.282 0.338 0.241
BI single        0.375 0.360 0.379 0.431 0.465 0.421 0.435 0.329 0.426 0.423 0.481
DOC/ADD single   0.410 0.424 0.383 0.476 0.485 0.264 0.402 0.354 0.418 0.448 0.452
DOC/BI single    0.389 0.428 0.416 0.445 0.473 0.219 0.403 0.400 0.467 0.421 0.457
DOC/ADD joint    0.392 0.405 0.443 0.447 0.475 0.453 0.394 0.409 0.446 0.476 0.417
DOC/BI joint     0.372 0.369 0.451 0.429 0.404 0.433 0.417 0.399 0.453 0.439 0.418
L2 \u2192 en
MT System        0.448 0.469 0.486 0.358 0.481 0.463 0.460 0.374 0.486 0.404 0.441
ADD single       0.380 0.337 0.446 0.293 0.357 0.295 0.327 0.235 0.293 0.355 0.375
BI single        0.354 0.411 0.344 0.426 0.439 0.428 0.443 0.357 0.426 0.442 0.403
DOC/ADD single   0.452 0.476 0.422 0.464 0.461 0.251 0.400 0.338 0.407 0.471 0.435
DOC/BI single    0.406 0.442 0.365 0.479 0.460 0.235 0.393 0.380 0.426 0.467 0.477
DOC/ADD joint    0.396 0.388 0.399 0.415 0.461 0.478 0.352 0.399 0.412 0.343 0.343
DOC/BI joint     0.343 0.375 0.369 0.419 0.398 0.438 0.353 0.391 0.430 0.375 0.388
" }, "TABREF2": { "num": null, "html": null, "text": "F1-scores for the TED document classification task for individual languages. Results are reported for both directions (training on English, evaluating on L2 and vice versa). Bold indicates best result, underline best result amongst the vector-based systems.", "type_str": "table", "content": "
Training      Test Language
Language      Arabic German Spanish French Italian Dutch Polish Pt-Br Rom'n Russian Turkish
Arabic        –     0.378 0.436 0.432 0.444 0.438 0.389 0.425 0.420 0.446 0.397
German        0.368 –     0.474 0.460 0.464 0.440 0.375 0.417 0.447 0.458 0.443
Spanish       0.353 0.355 –     0.420 0.439 0.435 0.415 0.390 0.424 0.427 0.382
French        0.383 0.366 0.487 –     0.474 0.429 0.403 0.418 0.458 0.415 0.398
Italian       0.398 0.405 0.461 0.466 –     0.393 0.339 0.347 0.376 0.382 0.352
Dutch         0.377 0.354 0.463 0.464 0.460 –     0.405 0.386 0.415 0.407 0.395
Polish        0.359 0.386 0.449 0.444 0.430 0.441 –     0.401 0.434 0.398 0.408
Portuguese    0.391 0.392 0.476 0.447 0.486 0.458 0.403 –     0.457 0.431 0.431
Romanian      0.416 0.320 0.473 0.476 0.460 0.434 0.416 0.433 –     0.444 0.402
Russian       0.372 0.352 0.492 0.427 0.438 0.452 0.430 0.419 0.441 –     0.447
Turkish       0.376 0.352 0.479 0.433 0.427 0.423 0.439 0.367 0.434 0.411 –
" }, "TABREF3": { "num": null, "html": null, "text": "F1-scores for TED corpus document classification results when training and testing on two languages that do not share any parallel data. We train a DOC/ADD model on all en-L2 language pairs together, and then use the resulting embeddings to train document classifiers in each language. These classifiers are subsequently used to classify data from all other languages.", "type_str": "table", "content": "
Setting       Languages
English Arabic German Spanish French Italian Dutch Polish Pt-Br Roman. Russian Turkish
Raw Data NB   0.481 0.469 0.471 0.526 0.532 0.524 0.522 0.415 0.465 0.509 0.465 0.513
Senna         0.400 –     –     –     –     –     –     –     –     –     –     –
Polyglot      0.382 0.416 0.270 0.418 0.361 0.332 0.228 0.323 0.194 0.300 0.402 0.295
single Setting
DOC/ADD       0.462 0.422 0.429 0.394 0.481 0.458 0.252 0.385 0.363 0.431 0.471 0.435
DOC/BI        0.474 0.432 0.362 0.336 0.444 0.469 0.197 0.414 0.395 0.445 0.436 0.428
joint Setting
DOC/ADD       0.475 0.371 0.386 0.472 0.451 0.398 0.439 0.304 0.394 0.453 0.402 0.441
DOC/BI        0.378 0.329 0.358 0.472 0.454 0.399 0.409 0.340 0.431 0.379 0.395 0.435
" } } } }