{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:10:55.519962Z" }, "title": "Composing Byte-Pair Encodings for Morphological Sequence Classification", "authors": [ { "first": "Adam", "middle": [], "last": "Ek", "suffix": "", "affiliation": { "laboratory": "Centre for Linguistic Theory and Studies in Probability", "institution": "University of Gothenburg", "location": {} }, "email": "adam.ek@gu.se" }, { "first": "Jean-Philippe", "middle": [], "last": "Bernardy", "suffix": "", "affiliation": { "laboratory": "Centre for Linguistic Theory and Studies in Probability", "institution": "University of Gothenburg", "location": {} }, "email": "jean-philippe.bernardy@gu.se" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Byte-pair encodings is a method for splitting a word into sub-word tokens, a language model then assigns contextual representations separately to each of these tokens. In this paper, we evaluate four different methods of composing such sub-word representations into word representations. We evaluate the methods on morphological sequence classification, the task of predicting grammatical features of a word. Our experiments reveal that using an RNN to compute word representations is consistently more effective than the other methods tested across a sample of eight languages with different typology and varying numbers of byte-pair tokens per word.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Byte-pair encodings is a method for splitting a word into sub-word tokens, a language model then assigns contextual representations separately to each of these tokens. In this paper, we evaluate four different methods of composing such sub-word representations into word representations. We evaluate the methods on morphological sequence classification, the task of predicting grammatical features of a word. Our experiments reveal that using an RNN to compute word representations is consistently more effective than the other methods tested across a sample of eight languages with different typology and varying numbers of byte-pair tokens per word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "After its introduction, the Transformer model (Vaswani et al., 2017) has emerged as the dominant architecture for statistical language models, displacing recurrent neural networks, in particular, the LSTM and its variants. The Transformer owes its success to several factors, including the availability of pretrained models, which effectively yield rich contextual word embeddings. Such embeddings can be used as is (for so-called feature extraction), or the pre-trained models can be finetuned to specific tasks.", "cite_spans": [ { "start": 46, "end": 68, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At the same time as Transformer models became popular, the tokenization of natural language texts have shifted away from methods explicitly oriented towards words or morphemes. Rather, statistical approaches are favoured: strings of characters are split into units which are not necessarily meaningful linguistically, but rather have statistically balanced frequencies. For example, the word \"scientifically\" may be composed of the tokens: \"scient\", \"ifical\", \"ly\" -here the central token does not correspond to a morpheme. 
That is, rather than identifying complete words or morphemes, one aims to find relatively large sub-word units occurring significantly often, while maximizing the coverage of the corpus (the presence of the \"out of vocabulary\" token is minimized). Approaches for composing words from sub-word units have focused on combining character n-grams (Bojanowski et al., 2017) , while other approaches have looked at splitting words into roots and morphemes (El Kholy and Habash, 2012; Chaudhary et al., 2018; Xu and Liu, 2017) , and then combining them.", "cite_spans": [ { "start": 867, "end": 892, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF1" }, { "start": 978, "end": 1001, "text": "Kholy and Habash, 2012;", "ref_id": "BIBREF5" }, { "start": 1002, "end": 1025, "text": "Chaudhary et al., 2018;", "ref_id": "BIBREF2" }, { "start": 1026, "end": 1043, "text": "Xu and Liu, 2017)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we consider Byte-Pair Encodings (BPE) (Sennrich et al., 2015) . BPE has been popularized by its usage in translation and the BERT Transformer model (Devlin et al., 2018) . The BPE algorithm does not specifically look for either character n-grams or morphs, but rather it aims at splitting a corpus C into N tokens, where N is user defined. Even though BPE is not grounded in morphosyntactic theory, the characteristics of the sub-word units generated by BPE will be directly influenced by morphosyntactic patterns in a language. In particular, it is reasonable to expect that the statistical characteristics of BPE to be different between languages with different typologies. One issue with this tokenization scheme is that models based on BPE provide vector representations for the BPE tokens (which we call token embeddings from now on), while one is typically interested in representations for the semantically meaningful units in the original texts, words. In sum, one wants to combine token embeddings into word embeddings.", "cite_spans": [ { "start": 53, "end": 76, "text": "(Sennrich et al., 2015)", "ref_id": "BIBREF18" }, { "start": 163, "end": 184, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main goal is to explore how to best combine token embeddings in the context of sequence classification on words, that is, the task of assigning a label to every word in a sentence. Coming back to our example, we must combine the token embeddings assigned to the BPE tokens \"scient\", \"ifical\" and \"ly\" to form a word representation of \"scientifically\" (as a vector) which we can then assign a label to.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To our knowledge, this is a little-studied problem. For the original BERT model Devlin et al. (2018) simply state that for named entity recognition the first sub-word token is used as the word representation. For morphological sequence classification Kondratyuk (2019; Kondratyuk and Straka (2019) report that only small differences in performance were found between averaging, taking the maximum value or first sub-word token. In this paper we explore the problem in further detail and identify the effect that different methods have on the final performance of a model. 
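As a concrete illustration of the sub-word splitting just described, a pretrained tokenizer can be queried directly. The sketch below assumes the Hugging Face transformers library; the exact pieces returned depend on the tokenizer's learned vocabulary, so the split of \"scientifically\" given above is illustrative rather than guaranteed.

```python
# Minimal sketch: querying a pretrained sub-word tokenizer (assumes the
# `transformers` library is installed). The pieces returned depend on the
# tokenizer's learned vocabulary, so the split shown above is illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')

pieces = tokenizer.tokenize('scientifically')
ids = tokenizer.convert_tokens_to_ids(pieces)
print(pieces)  # e.g. ['▁scientific', 'ally'], vocabulary-dependent
print(ids)
```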
Additionally, with the increased interest in multilingual NLP it becomes important to explore how different computational methods perform cross-linguistically. That is, because languages are different morphosyntactically, one can expect various computational methods not to be uniformly effective.", "cite_spans": [ { "start": 251, "end": 268, "text": "Kondratyuk (2019;", "ref_id": "BIBREF8" }, { "start": 269, "end": 297, "text": "Kondratyuk and Straka (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To investigate composition methods for token embeddings we focus on the task of morphological sequence classification. The task is to assign a tag to a word that represent its grammatical features, such as gender, number and so on. In addition to the word-form, the system can use information from context words as cues. While the grammatical features primarily are given by the word-form, useful information is also found in the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task", "sec_num": "2" }, { "text": "Thus, we have to identify k different tags for a word, each with C i possible classes, making the task a multi-class classification problem. We simplify the classification problem by combining the different tags into a composite tag with up to k i C i classes (instead of making k separate predictions). This task is suitable for our goal as the output space is large, ranging from 100 to 1000 possible tags for a word, depending on the grammatical features present in the language 1 , and is directly linked to the affixes in the word-form. A system must efficiently encode information about the structure of the target words as well as the context words to be able to predict the correct grammatical features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task", "sec_num": "2" }, { "text": "For both training and testing data, we use the Universal Dependencies dataset (Nivre et al., 2018) annotated with the UniMorph schema (McCarthy et al., 2018) . We are mainly interested in how the accuracy is influenced by different composition methods, but also consider the type of morphology a language uses as a factor in this task. With this in mind, we consider both languages that use agglutinative morphology where each morpheme is mapped to one and only one grammatical feature, and languages that use fusional morphology where a morpheme can be mapped to one or more grammatical features. The fusional languages that we consider are Arabic, Czech, Polish and Spanish, and the agglutinative languages that we consider are Finnish, Basque, Turkish, and Estonian. We show the size, the average number of BPE tokens per word, and the number of morphological tags for each treebank in Table 1 .", "cite_spans": [ { "start": 78, "end": 98, "text": "(Nivre et al., 2018)", "ref_id": null }, { "start": 134, "end": 157, "text": "(McCarthy et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 889, "end": 896, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "The fusional languages were chosen such that two of them (Czech and Polish) have a higher BPE per word ratio than the other two (Arabic and Spanish). We make this choice because one factor that impacts the accuracy obtained by a composition method may be the BPE per word ratio. 
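For reference, BPE-per-word ratios of the kind reported in Table 1 can be estimated directly from the treebank files. The following is a minimal sketch; the CoNLL-U file name is hypothetical, and the exact figures depend on the tokenizer and treebank version.

```python
# Illustrative sketch: estimating the average number of sub-word tokens per
# word for a UD treebank in CoNLL-U format. The path below is hypothetical
# and the exact ratio depends on the tokenizer's vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')

def bpe_per_word(conllu_path):
    words = pieces = 0
    with open(conllu_path, encoding='utf-8') as f:
        for line in f:
            if not line.strip() or line.startswith('#'):
                continue
            cols = line.rstrip().split('\t')
            # skip multi-word tokens ('1-2') and empty nodes ('1.1')
            if '-' in cols[0] or '.' in cols[0]:
                continue
            words += 1
            pieces += len(tokenizer.tokenize(cols[1]))  # column 1 = word form
    return pieces / words

print(bpe_per_word('eu_bdt-ud-train.conllu'))  # hypothetical file name
```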
By having both fusional and agglutinative languages with similar BPE per word ratio we can take this variable into account properly in our analysis. Table 1 : Treebank statistics showing the language typology, average number of BPE tokens per word, the number of (composite) morphological tags and the size of the datasets in terms of words.", "cite_spans": [], "ref_spans": [ { "start": 428, "end": 435, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "Our model is composed of three components, each of them detailed below. First, the input sequence of BPE tokens is fed to a Transformer model, which yields a contextual vector representation for each BPE token. The contextual information here is the surrounding BPE tokens in the sentence. Then, the token embeddings are combined using a composition module, which we vary for the purpose of evaluating each variant. This component yields one embedding per original word. Then we pass the word embeddings through a bidirectional LSTM, which is followed by two dense layers with GELU (Hendrycks and Gimpel, 2016) activation. These dense layers act on each word embedding separately (but share parameters across words). An outline of the model is presented in Figure 1 , where f represents the different methods we use to combine token embeddings.", "cite_spans": [ { "start": 582, "end": 610, "text": "(Hendrycks and Gimpel, 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 757, "end": 765, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": "4.1" }, { "text": "To extract a embeddings for each BPE token, we use the XLM-RoBERTa (Conneau et al., 2019 ) model 3 . XLM-R is a masked language model based on the Transformer, specifically RoBERTa (Liu et al., 2019b) , and trained on data from 100 different languages, using a shared vocabulary of 250000 BPE tokens. All the languages that we test are included in the XLM-R model. In this experiment we use the XLM-R base model with 250M parameters. It has 12 encoder layers, 12 attention heads and use 768 dimensions for its hidden size.", "cite_spans": [ { "start": 67, "end": 88, "text": "(Conneau et al., 2019", "ref_id": "BIBREF3" }, { "start": 181, "end": 200, "text": "(Liu et al., 2019b)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Underlying Transformer Model", "sec_num": "4.1.1" }, { "text": "The XLM-R model uses 12 layers to compute a vector representation for a BPE token. It has been shown in previous research (Kondratyuk and Straka, 2019; Raganato et al., 2018; Liu et al., 2019a ) that the different layers of the Transformer model encode different types of information. To take advantage of this variety, we compute token embeddings as a weighted sum of the layer representation (Kondratyuk and Straka, 2019) , using a weight vector w, of size l, where l is the number of layers in the Transformer model. The weight vector w is initialized from a normal distribution of mean 0 and standard deviation 1. 
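In code, this learned layer weighting amounts to the following minimal sketch (assuming PyTorch; an illustration rather than the exact implementation used here). The weighted sum itself is stated formally just below.

```python
import torch
import torch.nn as nn

class LayerMix(nn.Module):
    # Minimal sketch of the learned layer weighting: one scalar weight per
    # Transformer layer, normalized with a softmax and used to mix the
    # per-layer hidden states into a single representation per BPE token.
    def __init__(self, num_layers):
        super().__init__()
        self.w = nn.Parameter(torch.randn(num_layers))  # w ~ N(0, 1)

    def forward(self, layer_reps):
        # layer_reps: (num_layers, seq_len, hidden), e.g. the stacked
        # hidden states returned by XLM-R with output_hidden_states=True
        weights = torch.softmax(self.w, dim=0)
        return (weights[:, None, None] * layer_reps).sum(dim=0)  # (seq_len, hidden)
```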
If $r_{ji}$ is the layer representation at layer $j$ and token position $i$, we calculate the weighted sum as follows:", "cite_spans": [ { "start": 122, "end": 151, "text": "(Kondratyuk and Straka, 2019;", "ref_id": "BIBREF7" }, { "start": 152, "end": 174, "text": "Raganato et al., 2018;", "ref_id": "BIBREF16" }, { "start": 175, "end": 192, "text": "Liu et al., 2019a", "ref_id": "BIBREF9" }, { "start": 394, "end": 423, "text": "(Kondratyuk and Straka, 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Feature extraction", "sec_num": "4.1.2" }, { "text": "$x_i = \sum_{j=1}^{l} \mathrm{softmax}(w)_j r_{ji}$ (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature extraction", "sec_num": "4.1.2" }, { "text": "Consequently, in end-to-end training, the optimiser will find a weight for extracting information from each layer ($\mathrm{softmax}(w)_j$) which maximizes performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature extraction", "sec_num": "4.1.2" }, { "text": "The weighted sum yields a token embedding for each BPE token. We proceed to combine them into words as they appear in the data. The model that we use to combine token embeddings is as follows. Figure 1 : Model outline for one input. A word $w_n$ is tokenized into $k$ BPE tokens. The Transformer model produces one embedding per token per layer. We then calculate a weighted sum over the layers to obtain one representation per token. The resulting token embeddings are then passed to a composition function $f$ that combines the $k$ different token embeddings into a word embedding. The word embedding is then passed to an LSTM followed by a dense prediction layer.", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 201, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Composition of BPE token embeddings", "sec_num": "4.1.3" }, { "text": "For each sentence we extract $n$ token embeddings $x_0$ to $x_{n-1}$ from XLM-R$_{base}$, and then align them to words. We then pass all token embeddings in a word to a function $f$ which combines the tokens into a word embedding. We consider four methods for composing token embeddings: taking the first token embedding, summation, averaging, and using an RNN. Taking the first token embedding, summation and averaging have been used in previous work (Sachan et al., 2020; Kondratyuk, 2019; Devlin et al., 2018) , but using an RNN has not been explored before to our knowledge.", "cite_spans": [ { "start": 439, "end": 460, "text": "(Sachan et al., 2020;", "ref_id": "BIBREF17" }, { "start": 461, "end": 478, "text": "Kondratyuk, 2019;", "ref_id": "BIBREF8" }, { "start": 479, "end": 499, "text": "Devlin et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Composition of BPE token embeddings", "sec_num": "4.1.3" }, { "text": "First: The first method is the standard one used by Devlin et al. (2018) , which is to use the first token embedding in a word.", "cite_spans": [ { "start": 52, "end": 72, "text": "Devlin et al. (2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Composition of BPE token embeddings", "sec_num": "4.1.3" }, { "text": "Sum: For the Sum method, we use an element-wise sum. That is, we calculate the vector sum of the token embeddings. 
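All four composition functions can be summarized in a single illustrative module. This is a sketch assuming PyTorch, not the exact implementation used here; the Sum and Mean variants are stated formally in the equations that follow, and the RNN variant, described at the end of this section, is taken here to mean the concatenated final hidden states of a small bidirectional LSTM.

```python
import torch
import torch.nn as nn

class Composer(nn.Module):
    # Illustrative sketch of the four composition functions f. `tokens` holds
    # the embeddings of the BPE tokens belonging to one word: (T, hidden).
    def __init__(self, hidden, method='rnn'):
        super().__init__()
        self.method = method
        if method == 'rnn':
            self.rnn = nn.LSTM(hidden, hidden // 2, bidirectional=True,
                               batch_first=True)

    def forward(self, tokens):
        if self.method == 'first':
            return tokens[0]
        if self.method == 'sum':
            return tokens.sum(dim=0)
        if self.method == 'mean':
            return tokens.mean(dim=0)
        # 'rnn': run a bidirectional LSTM over the token sequence and use the
        # concatenated final hidden states of both directions as the word embedding
        _, (h_n, _) = self.rnn(tokens.unsqueeze(0))    # h_n: (2, 1, hidden // 2)
        return h_n.transpose(0, 1).reshape(-1)         # (hidden,)
```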
Assuming that we have $T$ token embeddings in a word $X$ (the word is a matrix of size $(T, 768)$), for dimension $i$ we calculate the word embedding by summing the token embeddings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition of BPE token embeddings", "sec_num": "4.1.3" }, { "text": "$f(X)_i = \sum_{j=1}^{T} x_{ji}$ (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition of BPE token embeddings", "sec_num": "4.1.3" }, { "text": "Mean: In the mean method we calculate the sum as above and divide by the number of BPE tokens in the word. Thus, for word $X$ we calculate the word embedding by dividing the sum by $T$:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition of BPE token embeddings", "sec_num": "4.1.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(X)_i = \frac{1}{T} \sum_{j=1}^{T} x_{ji}", "eq_num": "(3)" } ], "section": "Composition of BPE token embeddings", "sec_num": "4.1.3" }, { "text": "RNN: For this method we employ a bidirectional LSTM to compose the token embeddings. For each word, we pass the sequence of token embeddings through an LSTM and use the final output as the word representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition of BPE token embeddings", "sec_num": "4.1.3" }, { "text": "The above methods of composing BPE tokens produce one contextual embedding per word. We then pass the word embeddings through an LSTM to take into account the word contexts. While the BPE token embeddings are already contextual, they are conditioned on the BPE token context, not word context. We pass the hidden states for each word to a residual connection with the pre-LSTM representation. We then pass this to two dense layers with GELU activation followed by a dense layer that computes class scores for each word. We then use a softmax layer to assign probabilities and compute the loss accordingly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level features and classification", "sec_num": "4.1.4" }, { "text": "Commonly, systems analyzing morphology use character embeddings as an additional source of information. We opted not to include character embeddings because this would obfuscate the effect of the composition method and may mask some of the effects of the different methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level features and classification", "sec_num": "4.1.4" }, { "text": "Given that many of the languages have a large number of morphological tags, we want to prevent the model from growing overconfident for certain classes. 
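Before turning to how this overconfidence is addressed, the word-level layers just described can be sketched as follows (assuming PyTorch; hidden sizes and other details are illustrative assumptions, not the exact configuration used here). A softmax and a label-smoothed cross-entropy loss, described next, would follow the final layer.

```python
import torch.nn as nn

class WordTagger(nn.Module):
    # Sketch of the word-level layers of Section 4.1.4 (sizes are assumptions):
    # a bidirectional LSTM over the composed word embeddings, a residual
    # connection with the pre-LSTM representation, two GELU dense layers,
    # and a final dense layer producing class scores per word.
    def __init__(self, hidden, num_tags):
        super().__init__()
        self.lstm = nn.LSTM(hidden, hidden // 2, bidirectional=True,
                            batch_first=True)
        self.ff = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(),
                                nn.Linear(hidden, hidden), nn.GELU())
        self.out = nn.Linear(hidden, num_tags)

    def forward(self, words):
        # words: (batch, n_words, hidden) word embeddings from the composer
        context, _ = self.lstm(words)      # (batch, n_words, hidden)
        context = context + words          # residual with the pre-LSTM input
        return self.out(self.ff(context))  # (batch, n_words, num_tags)
```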
To address this issue we introduce label smoothing (Szegedy et al., 2016) , that is, instead of the incorrect classes having 0% probability and the correct class 100% probability we let each of the incorrect classes have a small probability.", "cite_spans": [ { "start": 204, "end": 226, "text": "(Szegedy et al., 2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Label smoothing", "sec_num": "4.1.5" }, { "text": "Let \u03b1 be our smoothing value, in our model we follow (Kondratyuk and Straka, 2019) and use \u03b1 = 0.03, and C the number of classes, then given a one-hot encoded target vector t of size C, we calculate the smoothed probabilities as:", "cite_spans": [ { "start": 53, "end": 82, "text": "(Kondratyuk and Straka, 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Label smoothing", "sec_num": "4.1.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t smooth = (1 \u2212 \u03b1)t + \u03b1 C", "eq_num": "(4)" } ], "section": "Label smoothing", "sec_num": "4.1.5" }, { "text": "In words, we remove \u03b1 from the correct class then distribute \u03b1 uniformly among all classes. Table 2 : Hyperparameters used for training the model. Slashed indicates the value of a parameter when we finetune or extract features.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Label smoothing", "sec_num": "4.1.5" }, { "text": "In our experiments we consider two possible training regimes. In the first regime we finetune the XLM-R model's parameters, in the second we only extract weights for BPE tokens, that is, we use the model as a feature extractor. In all cases, we use end-to-end training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "When finetuning the model we freeze the XLM-R parameters for the first epoch, effectively not finetuning at first. When training the model we use a cosine annealing learning rate (Loshchilov and Hutter, 2016) with restarts every epoch, that is, the learning rate starts high then incrementally decreases to 1e\u221212 over N steps, where N is the number of batches in an epoch. We use the Adam optimizer with standard parameters, with a learning rate of 0.001 for layer importance parameter (w in Section 4.1.2), the parameters of the Word-LSTM, of the classification layer, and of the BPE-combination module (when an RNN is used). For the Transformer parameters, we use a lower learning rate of 1e\u22126. We summarize the hyperparameters used in Table 2 .", "cite_spans": [ { "start": 179, "end": 208, "text": "(Loshchilov and Hutter, 2016)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 738, "end": 745, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "As an additional regularization in addition to weight decay and adaptive learning rate, we use dropout throughout the model. Generally, we apply dropout before some feature is computed. Initial experiments revealed that a high dropout yielded the best results. We summarize the dropout used as: We replace 20 percent of the BPE tokens with . Then, we compute a weighted sum of the layer representations, to regularize this operation we apply dropout on layer representations with a probability of 0.1, that is we set all representations in the layer to 0. 
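For concreteness, the label smoothing of equation (4) and the layer-level dropout just described amount to the following minimal sketches (assuming PyTorch; illustrative code rather than the exact implementation used here).

```python
import torch

def smooth_targets(one_hot, alpha=0.03):
    # Equation (4): t_smooth = (1 - alpha) * t + alpha / C, i.e. alpha is
    # taken from the correct class and spread uniformly over all C classes.
    C = one_hot.size(-1)
    return (1.0 - alpha) * one_hot + alpha / C

def drop_layers(layer_reps, p=0.1, training=True):
    # Layer-level dropout: with probability p, zero out an entire layer's
    # representations before the weighted sum over layers.
    if not training:
        return layer_reps
    keep = (torch.rand(layer_reps.size(0), 1, 1) > p).float()
    return layer_reps * keep

t = torch.zeros(5); t[2] = 1.0
print(smooth_targets(t))  # tensor([0.0060, 0.0060, 0.9760, 0.0060, 0.0060])
```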
We then combine the token embeddings into word embeddings and apply a dropout of 0.4, and pass these into the Word-LSTM. Before the contextualized representation is passed to the classification layer, we apply a dropout of 0.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2" }, { "text": "Even though our aim is to compare the relative performance of various BPE-combination methods rather than to improve on the state of the art in absolute terms, we compare our results against the baseline reported by McCarthy et al. (2019) . This comparison serves the purpose of checking that our system is generally sound. In particular, the actual state of the art, as reported by McCarthy et al. (2019) and Kondratyuk (2019) , uses treebank concatenation or other methods to incorporate information from all treebanks available in a language, which means that results are not reported on a strict per-treebank basis and thus our numbers are not directly comparable. We report the accuracy of predicting morphological tags for each of our composition methods, and for our two training regimes, in Table 3 .", "cite_spans": [ { "start": 216, "end": 238, "text": "McCarthy et al. (2019)", "ref_id": "BIBREF13" }, { "start": 383, "end": 405, "text": "McCarthy et al. (2019;", "ref_id": "BIBREF13" }, { "start": 406, "end": 423, "text": "Kondratyuk (2019)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 794, "end": 801, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Our system performs better than the baseline. As a general trend we see that the RNN method tends to perform better than all other tested methods. This trend is consistent across both language families (agglutinative and fusional) and training regimes, showing that, while the advantage of the RNN is small, it occurs consistently. Table 4 : Accuracy for morphological tagging on all words that are composed of two or more BPE tokens.", "cite_spans": [], "ref_spans": [ { "start": 307, "end": 314, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In general we find that finetuning yields higher accuracy than plain feature extraction; on average the difference is about 5.8 percentage points. This difference is to be expected, as finetuning has 250M more parameters tuned to the task than feature extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Focusing on the finetuning regime only, we see the largest benefits of the RNN method for Basque with an increased performance of 3.25 points, and 2.7 points for Turkish, over using summation or averaging. The First method for Basque and Turkish performs worse, with a decrease of 4.4 percentage points for Basque and 3.6 points for Turkish compared to the RNN method. In the bare feature extraction regime, we see a larger benefit for the RNN, of 3.7 percentage points (Turkish) and 4.95 points (Basque). Again, this is not unexpected: When finetuning the error rate is smaller, and therefore there is a smaller margin for a subsequent phase to yield an improvement. Table 3 reports average accuracy for every word, including those which are only composed of a single BPE token. To highlight the strengths and weaknesses of each composition method, we also compute the accuracy for longer words only (composed of two or more BPE tokens). The results can be seen in Table 4 . 
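The per-length breakdown in Table 4, and the Agresti-Coull intervals shown later in Figure 2, can be computed with a small helper along the following lines (an illustrative sketch rather than the evaluation script used here; variable names are hypothetical).

```python
import math
from collections import defaultdict

def accuracy_by_bpe_length(n_bpe, gold, pred, z=1.96):
    # Group words by their number of BPE tokens (7 = 'seven or more') and
    # report per-group accuracy with an Agresti-Coull 95% interval.
    buckets = defaultdict(lambda: [0, 0])           # length -> [correct, total]
    for k, g, p in zip(n_bpe, gold, pred):
        b = buckets[min(k, 7)]
        b[0] += int(g == p)
        b[1] += 1
    for k in sorted(buckets):
        correct, total = buckets[k]
        n_t = total + z ** 2
        p_t = (correct + z ** 2 / 2) / n_t
        half = z * math.sqrt(p_t * (1 - p_t) / n_t)
        print(k, correct / total, (p_t - half, p_t + half))

# usage sketch (hypothetical, equally long lists):
# accuracy_by_bpe_length(bpe_counts, gold_tags, predicted_tags)
```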
We see the same trend for accuracy on words that are composed of two or more BPE tokens, as in the overall accuracy, where the RNN outperforms all other methods. We can also see that the average increase in accuracy when using an RNN is larger. This holds both when finetuning or extracting bare features. Given that the number of BPE tokens per word varies in the different languages, we also look at the accuracy of the different methods given the number of BPE tokens. We show per-language performance with the different methods in Figure 2 . : Per-language accuracy on tokens with different numbers of BPE components, for the finetuning training regime. The last data point on the x-axis refers to all tokens composed of seven or more BPE tokens. We indicate the method by encoding First as brown, summation as green, averaging as blue and RNN as red. The accuracy is given on the y-axis. We show the Agresti-Coull approximation of a 95%-confidence interval for the RNN method (Agresti and Coull, 1998) . We do not show the intervals for other methods to avoid excessive clutter.", "cite_spans": [ { "start": 1952, "end": 1977, "text": "(Agresti and Coull, 1998)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 663, "end": 670, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 961, "end": 968, "text": "Table 4", "ref_id": null }, { "start": 1506, "end": 1514, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "For predicting morphological features, the RNN method is more effective than the other proposed methods (summing, averaging or taking the first BPE token). This holds regardless of training regime (finetuning versus feature extraction) and across languages with different BPE per word ratios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "As we see it, the advantage of the RNN over commutative methods (Sum, Mean) and taking the first BPE token is that it can take the order of elements into account. In broad terms, information about the order of elements in morphology allows a system to determine what is a stem, prefix, or suffix. Thus allowing a model to collect more predictive information from token embeddings.", "cite_spans": [ { "start": 64, "end": 75, "text": "(Sum, Mean)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We can suspect that the average BPE per word ratio in a language affects the performance of the composition method used. To further control this variable, in Figure 3 we plot the average number of BPE tokens per word in each language (x-axis), and compare this average against the gain in accuracy yielded by using the RNN method over summation (y-axis). For finetuning we see that in general the average number of BPE tokens does not matter that much. The two cases where it does matter is for Turkish and Basque, where we see a substantial improvement of about 3 percentage points. We note however that these are also the languages with the lowest amount of training data. For the other languages the improvements lie in the range .6 to 1.2 percentage points. This indicates that when finetuning, the model can provide information that allows commutative methods to properly compose BPE tokens. 
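As a side illustration of the order-sensitivity argument made earlier in this section: summation and averaging cannot distinguish different orderings of the same token embeddings, whereas an LSTM can, as the following toy check shows (assuming PyTorch).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
tokens = torch.randn(3, 8)          # three toy token embeddings
reversed_tokens = tokens.flip(0)    # same embeddings, opposite order

# commutative compositions cannot tell the two orders apart
print(torch.allclose(tokens.sum(0), reversed_tokens.sum(0)))    # True
print(torch.allclose(tokens.mean(0), reversed_tokens.mean(0)))  # True

# an LSTM composition does distinguish them
lstm = nn.LSTM(8, 8, batch_first=True)
_, (h_fwd, _) = lstm(tokens.unsqueeze(0))
_, (h_rev, _) = lstm(reversed_tokens.unsqueeze(0))
print(torch.allclose(h_fwd, h_rev))  # False (for a generic initialization)
```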
However, looking at bare feature extraction we see that there is a larger gap between the low BPE-ratio and the high BPE-ratio languages.", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 166, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Our sample of languages contains both fusional and agglutinative languages, and the typology does not appear to have an effect in our experiments. We see about the same trends for the fusional languages with a high BPE per word ratio as for the agglutinative languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The idea behind the First method is that the Transformer is sufficiently powerful to pool the relevant information into the first BPE token embedding. However, our experiments reveal that it is less effective than any other method we tested for morphological sequence classification across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First method", "sec_num": "6.1" }, { "text": "Figure 3 : The difference in accuracy between summation and the RNN, plotted against the average number of BPE tokens per word in all languages, with a linear regression line; both training regimes (finetuning and feature extraction) are shown. We see in Table 3 that the First method is, on average, .4 and 1 percentage points lower than the next lowest scoring method for finetuning and feature extraction respectively. This effect is further enhanced when we consider the accuracy of words composed of two or more BPE tokens in Table 4 , where the difference is 1.6 and 2.7 points, compared against the next lowest scoring method, for finetuning and feature extraction respectively. When we compare the performance against the RNN this difference only increases, showing a gain of 3.2 percentage points and 7.7 points for finetuning and feature extraction respectively.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 64, "text": "Figure 3", "ref_id": null }, { "start": 223, "end": 230, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 495, "end": 502, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "First method", "sec_num": "6.1" }, { "text": "While the First method may be effective, primarily because of the expressivity of the Transformer architecture, the method forces the model to push the predictive information of several BPE token embeddings into the first one. This puts an additional burden on the Transformer model, and we believe that this is the reason for the performance degradation which we observe. Besides, putting this burden on the model is not necessary: pooling information from several BPE embeddings can be done effectively using additional layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First method", "sec_num": "6.1" }, { "text": "When we consider the commutative methods of combining token embeddings, summation or averaging, we see no clear advantage for either of them over the other one, when doing finetuning. However, when extracting features only we see hints that summation is more effective than averaging. 
For feature extraction, summation is .5 percentage points better than averaging, and words composed of two or more BPE tokens exhibit an advantage of .9 points for summation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sum and Mean", "sec_num": "6.2" }, { "text": "This discrepancy suggests that by averaging, we are removing some predictive information from the pretrained BPE token embeddings, that is, by reducing the values in the token embeddings uniformly across a sequence of token embeddings we lose useful information. We believe that some token embeddings contain more predictive information than others, and by summing them we retain all the information. But when we finetune, the difference between summing and averaging almost disappears: the model appears to learn how to distribute the information uniformly across the token embeddings that compose a word and is thus able to retain the information better. Interestingly, the model learns to distribute the information across multiple BPE token embeddings more efficiently than pushing the information into the first token. This is shown by the large difference in accuracy between finetuning and feature extraction for the First and averaging methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sum and Mean", "sec_num": "6.2" }, { "text": "One question that arises when looking at Figure 2 , specifically considering the performance on words composed of only one BPE token, is the following: can the superiority of the RNN be attributed to its ability to take context into account, or simply to the fact that it contains more parameters and extra layers?", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 49, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Parameterization of First, Sum and Mean", "sec_num": "6.3" }, { "text": "Method: In this section we present the model used for morphological sequence classification, the methods that we use to compose token embeddings, and how the model is trained. 1 For practical reasons, we only consider tag combinations observed in the dataset. 2 Our code is available at: https://github.com/adamlek/ud-morphological-tagging", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the huggingface implementation https://huggingface.co/transformers/model_doc/xlmroberta.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The research reported in this paper was supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Table 5 : The accuracy of morphological tagging when we parameterize the First, Sum and Mean method with a non-linear transformation layer. We would expect that for the words with only one BPE token, the performance of the model would be the same for all methods. For practical reasons, we push all word embeddings through an RNN, effectively doing a non-linear transformation with tanh activations on the words composed of only one BPE token. Typically, the difference in accuracy between various methods for one-BPE-token words is small (barely visible in Figure 2 ). But for example in Finnish, we see a larger difference. 
Although in general if we perform better on longer words consisting of BPE tokens that also appear as words in the data, we could also expect the performance to be better for words of BPE length one, because we will have more accurate representations of the contextual words. We test this hypothesis by parameterizing the First, Sum, and Mean method. Essentially, we need to increase the capabilities of these methods. This is done by passing all BPE token embeddings through a non-linear transformation with ReLU activation before we compute the Sum, Mean, or select the first BPE-token. Our experiment, whose results are shown in Table 5 , shows that while adding parameters to the First, Sum, and Mean method generally improve their performance slightly, ranging between a change of \u22120.2 and +0.6 percentage points, but their performance never exceeds that of the RNN method.", "cite_spans": [], "ref_spans": [ { "start": 8, "end": 15, "text": "Table 5", "ref_id": null }, { "start": 675, "end": 683, "text": "Figure 2", "ref_id": null }, { "start": 1375, "end": 1382, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Finetuning", "sec_num": null }, { "text": "In conclusion, our results indicate that using an RNN to compose word representations from token representations, obtained from a large Transformer model, is more efficient than two commutative methods, summing and averaging, and also more effective than letting a Transformer model automatically pool the predictive word-level information into the first BPE token embedding. We show this for the task of morphological sequence classification, in eight different languages with varying morphology and wordlengths in term of BPE tokens, as well as for two training regimes, finetuning and feature extraction.In future work, we want to continue experimenting with the different BPE token embedding composition methods, specifically looking at more complex syntactic and semantic tasks, such as dependency and/or constituency parsing, semantic role labeling, named entity recognition, and natural language inference. We also wish to run our experiments on the hundreds of available UD treebanks to improve the robustness of our results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Approximate is better than exact for interval estimation of binomial proportions", "authors": [ { "first": "Alan", "middle": [], "last": "Agresti", "suffix": "" }, { "first": "A", "middle": [], "last": "Brent", "suffix": "" }, { "first": "", "middle": [], "last": "Coull", "suffix": "" } ], "year": 1998, "venue": "The American Statistician", "volume": "52", "issue": "2", "pages": "119--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Agresti and Brent A Coull. 1998. Approximate is better than exact for interval estimation of binomial proportions. 
The American Statistician, 52(2):119-126.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Adapting word embeddings to new languages with morphological and phonological subword representations", "authors": [ { "first": "Aditi", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Chunting", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Levin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "David R Mortensen", "suffix": "" }, { "first": "", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3285--3295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditi Chaudhary, Chunting Zhou, Lori Levin, Graham Neubig, David R Mortensen, and Jaime G Carbonell. 2018. Adapting word embeddings to new languages with morphological and phonological subword representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3285-3295.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised crosslingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.02116" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross- lingual representation learning at scale. 
arXiv preprint arXiv:1911.02116.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirec- tional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Orthographic and morphological processing for English-Arabic statistical machine translation. Machine Translation", "authors": [ { "first": "Ahmed", "middle": [ "El" ], "last": "Kholy", "suffix": "" }, { "first": "Nizar", "middle": [], "last": "Habash", "suffix": "" } ], "year": 2012, "venue": "", "volume": "26", "issue": "", "pages": "25--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ahmed El Kholy and Nizar Habash. 2012. Orthographic and morphological processing for English-Arabic statis- tical machine translation. Machine Translation, 26(1-2):25-45.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Gaussian error linear units (gelus)", "authors": [ { "first": "Dan", "middle": [], "last": "Hendrycks", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.08415" ] }, "num": null, "urls": [], "raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "75 languages, 1 model: Parsing universal dependencies universally", "authors": [ { "first": "Dan", "middle": [], "last": "Kondratyuk", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2779--2795", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779-2795, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Cross-lingual lemmatization and morphology tagging with two-stage multilingual BERT fine-tuning", "authors": [ { "first": "Dan", "middle": [], "last": "Kondratyuk", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology", "volume": "", "issue": "", "pages": "12--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Kondratyuk. 2019. Cross-lingual lemmatization and morphology tagging with two-stage multilingual BERT fine-tuning. 
In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 12-18.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Linguistic knowledge and transferability of contextual representations", "authors": [ { "first": "F", "middle": [], "last": "Nelson", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "E", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Peters", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1073--1094", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nelson F Liu, Matt Gardner, Yonatan Belinkov, Matthew E Peters, and Noah A Smith. 2019a. Linguistic knowl- edge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "RoBERTa: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SGDR: Stochastic gradient descent with warm restarts", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1608.03983" ] }, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2016. SGDR: Stochastic gradient descent with warm restarts. 
arXiv preprint arXiv:1608.03983.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Marrying universal dependencies and universal morphology", "authors": [ { "first": "D", "middle": [], "last": "Arya", "suffix": "" }, { "first": "Miikka", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Silfverberg", "suffix": "" }, { "first": "Mans", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "David", "middle": [], "last": "Hulden", "suffix": "" }, { "first": "", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Second Workshop on Universal Dependencies", "volume": "", "issue": "", "pages": "91--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arya D McCarthy, Miikka Silfverberg, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2018. Marrying universal dependencies and universal morphology. In Proceedings of the Second Workshop on Universal De- pendencies (UDW 2018), pages 91-101.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection", "authors": [ { "first": "D", "middle": [], "last": "Arya", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Shijie", "middle": [], "last": "Vylomova", "suffix": "" }, { "first": "Chaitanya", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Malaviya", "suffix": "" }, { "first": "Garrett", "middle": [], "last": "Wolf-Sonkin", "suffix": "" }, { "first": "Christo", "middle": [], "last": "Nicolai", "suffix": "" }, { "first": "Miikka", "middle": [], "last": "Kirov", "suffix": "" }, { "first": "", "middle": [], "last": "Silfverberg", "suffix": "" }, { "first": "J", "middle": [], "last": "Sebastian", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Mielke", "suffix": "" }, { "first": "", "middle": [], "last": "Heinz", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology", "volume": "", "issue": "", "pages": "229--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arya D McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sebastian J Mielke, Jeffrey Heinz, et al. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection. 
In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229-244.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An analysis of encoder representations in transformer-based machine translation", "authors": [ { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. The Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Raganato, J\u00f6rg Tiedemann, et al. 2018. An analysis of encoder representations in transformer-based machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. The Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Do syntax trees help pre-trained transformers extract information?", "authors": [ { "first": "Devendra", "middle": [], "last": "Singh Sachan", "suffix": "" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "William", "middle": [], "last": "Hamilton", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.09084" ] }, "num": null, "urls": [], "raw_text": "Devendra Singh Sachan, Yuhao Zhang, Peng Qi, and William Hamilton. 2020. Do syntax trees help pre-trained transformers extract information? arXiv preprint arXiv:2008.09084.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.07909" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. 
arXiv preprint arXiv:1508.07909.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Rethinking the inception architecture for computer vision", "authors": [ { "first": "Christian", "middle": [], "last": "Szegedy", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Vanhoucke", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Ioffe", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "Zbigniew", "middle": [], "last": "Wojna", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "2818--2826", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818-2826.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Implicitly incorporating morphological information into word embedding", "authors": [ { "first": "Yang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1701.02481" ] }, "num": null, "urls": [], "raw_text": "Yang Xu and Jiawei Liu. 2017. Implicitly incorporating morphological information into word embedding. arXiv preprint arXiv:1701.02481.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "num": null, "text": "Figure 2: Per-language accuracy on tokens with different numbers of BPE components, for the finetuning training regime. The last data point on the x-axis refers to all tokens composed of seven or more BPE tokens. We indicate the method by encoding First as brown, summation as green, averaging as blue and RNN as red. The accuracy is given on the y-axis. We show the Agresti-Coull approximation of a 95%-confidence interval for the RNN method (Agresti and Coull, 1998). We do not show the intervals for other methods to avoid excessive clutter.", "uris": null }, "TABREF3": { "num": null, "text": "Accuracy for morphological tagging. We show scores both for finetuning the XLM-R model and extracting features.", "content": "
Finetuning | Feature extraction
Treebank | First Sum Mean RNN | First Sum Mean RNN
Basque-BDT | .739 .802 .790 .835 | .657 .715 .703 .774
Finnish-TDT | .940 .946 .946 .952 | .780 .805 .794 .861
Turkish-IMST | .730 .780 .778 .818 | .653 .683 .664 .711
Estonian-EDT | .938 .939 .939 .949 | .779 .805 .803 .868
Spanish-AnCora | .956 .961 .959 .964 | .922 .937 .930 .947
Arabic-PADT | .889 .896 .898 .907 | .902 .909 .906 .923
Czech-CAC | .940 .947 .947 .959 | .786 .849 .840 .900
Polish-LFG | .917 .920 .918 .927 | .696 .761 .752 .812
Average | .881 .899 .897 .913 | .772 .808 .799 .849
", "html": null, "type_str": "table" } } } }