{
"paper_id": "N19-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:01:44.249916Z"
},
"title": "Attentive Mimicking: Better Word Embeddings by Attending to Informative Contexts",
"authors": [
{
"first": "Timo",
"middle": [],
"last": "Schick",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sulzer GmbH Munich",
"location": {
"country": "Germany"
}
},
"email": "timo.schick@sulzer.de"
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Learning high-quality embeddings for rare words is a hard problem because of sparse context information. Mimicking (Pinter et al., 2017) has been proposed as a solution: given embeddings learned by a standard algorithm, a model is first trained to reproduce embeddings of frequent words from their surface form and then used to compute embeddings for rare words. In this paper, we introduce attentive mimicking: the mimicking model is given access not only to a word's surface form, but also to all available contexts and learns to attend to the most informative and reliable contexts for computing an embedding. In an evaluation on four tasks, we show that attentive mimicking outperforms previous work for both rare and medium-frequency words. Thus, compared to previous work, attentive mimicking improves embeddings for a much larger part of the vocabulary, including the mediumfrequency range.",
"pdf_parse": {
"paper_id": "N19-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "Learning high-quality embeddings for rare words is a hard problem because of sparse context information. Mimicking (Pinter et al., 2017) has been proposed as a solution: given embeddings learned by a standard algorithm, a model is first trained to reproduce embeddings of frequent words from their surface form and then used to compute embeddings for rare words. In this paper, we introduce attentive mimicking: the mimicking model is given access not only to a word's surface form, but also to all available contexts and learns to attend to the most informative and reliable contexts for computing an embedding. In an evaluation on four tasks, we show that attentive mimicking outperforms previous work for both rare and medium-frequency words. Thus, compared to previous work, attentive mimicking improves embeddings for a much larger part of the vocabulary, including the mediumfrequency range.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embeddings have led to large performance gains in natural language processing (NLP). However, embedding methods generally need many observations of a word to learn a good representation for it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One way to overcome this limitation and improve embeddings of infrequent words is to incorporate surface-form information into learning. This can either be done directly (Wieting et al., 2016; Bojanowski et al., 2017; Salle and Villavicencio, 2018) , or a two-step process is employed: first, an embedding model is trained on the word level and then, surface-form information is used either to fine-tune embeddings (Cotterell et al., 2016; Vuli\u0107 et al., 2017) or to completely recompute them. The latter can be achieved using a model trained to reproduce (or mimic) the original embeddings (Pinter et al., 2017) . However, these methods only work if a word's meaning can at least partially be predicted from its form.",
"cite_spans": [
{
"start": 170,
"end": 192,
"text": "(Wieting et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 193,
"end": 217,
"text": "Bojanowski et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 218,
"end": 248,
"text": "Salle and Villavicencio, 2018)",
"ref_id": "BIBREF15"
},
{
"start": 415,
"end": 439,
"text": "(Cotterell et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 440,
"end": 459,
"text": "Vuli\u0107 et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 590,
"end": 611,
"text": "(Pinter et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A closely related line of research is embedding learning for novel words, where the goal is to obtain embeddings for previously unseen words from at most a handful of observations. While most contemporary approaches exclusively use context information for this task (e.g. Herbelot and Baroni, 2017; Khodak et al., 2018) , Schick and Sch\u00fctze (2019) recently introduced the form-context model and showed that joint learning from both surface form and context leads to better performance.",
"cite_spans": [
{
"start": 272,
"end": 298,
"text": "Herbelot and Baroni, 2017;",
"ref_id": "BIBREF3"
},
{
"start": 299,
"end": 319,
"text": "Khodak et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 322,
"end": 347,
"text": "Schick and Sch\u00fctze (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem we address in this paper is that often, only few of a word's contexts provide valuable information about its meaning. Nonetheless, the current state of the art treats all contexts the same. We address this issue by introducing a more intelligent mechanism of incorporating context into mimicking: instead of using all contexts, we learn -by way of self-attention -to pick a subset of especially informative and reliable contexts. This mechanism is based on the observation that in many cases, reliable contexts for a given word tend to resemble each other. We call our proposed architecture attentive mimicking (AM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are as follows: (i) We introduce the attentive mimicking model. It produces high-quality embeddings for rare and mediumfrequency words by attending to the most informative contexts. (ii) We propose a novel evaluation method based on VecMap (Artetxe et al., 2018) that allows us to easily evaluate the embedding quality of low-and medium-frequency words. (iii) We show that attentive mimicking improves word embeddings on various datasets.",
"cite_spans": [
{
"start": 258,
"end": 280,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Methods to train surface-form models to mimic word embeddings include those of Luong et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "(2013) (morpheme-based) and Pinter et al. (2017) (character-level) . In the area of fine-tuning methods, Cotterell et al. (2016) introduce a Gaussian graphical model that incorporates morphological information into word embeddings. Vuli\u0107 et al. (2017) retrofit embeddings using a set of language-specific rules. Models that directly incorporate surface-form information into embedding learning include fastText (Bojanowski et al., 2017) , LexVec (Salle and Villavicencio, 2018) and Charagram (Wieting et al., 2016) .",
"cite_spans": [
{
"start": 28,
"end": 66,
"text": "Pinter et al. (2017) (character-level)",
"ref_id": null
},
{
"start": 105,
"end": 128,
"text": "Cotterell et al. (2016)",
"ref_id": "BIBREF2"
},
{
"start": 232,
"end": 251,
"text": "Vuli\u0107 et al. (2017)",
"ref_id": "BIBREF19"
},
{
"start": 411,
"end": 436,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 446,
"end": 477,
"text": "(Salle and Villavicencio, 2018)",
"ref_id": "BIBREF15"
},
{
"start": 492,
"end": 514,
"text": "(Wieting et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While many approaches to learning embeddings for novel words exclusively make use of context information (Lazaridou et al., 2017; Herbelot and Baroni, 2017; Khodak et al., 2018) , Schick and Sch\u00fctze (2019) 's form-context model combines surface-form and context information. Ling et al. (2015) also use attention in embedding learning, but their attention is within a context (picking words), not across contexts (picking contexts). Also, their attention is based only on word type and distance, not on the more complex factors available in our attentive mimicking model, e.g., the interaction with the word's surface form.",
"cite_spans": [
{
"start": 105,
"end": 129,
"text": "(Lazaridou et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 130,
"end": 156,
"text": "Herbelot and Baroni, 2017;",
"ref_id": "BIBREF3"
},
{
"start": 157,
"end": 177,
"text": "Khodak et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 180,
"end": 205,
"text": "Schick and Sch\u00fctze (2019)",
"ref_id": "BIBREF16"
},
{
"start": 275,
"end": 293,
"text": "Ling et al. (2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Attentive Mimicking",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We briefly review the architecture of the formcontext model (FCM), see Schick and Sch\u00fctze (2019) for more details.",
"cite_spans": [
{
"start": 71,
"end": 96,
"text": "Schick and Sch\u00fctze (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Form-Context Model",
"sec_num": "3.1"
},
{
"text": "FCM requires an embedding space of dimensionality d that assigns high-quality embeddings v \u2208 R d to frequent words. Given an infrequent or novel word w and a set of contexts C in which it occurs, FCM can then be used to infer an embedding v (w,C) for w that is appropriate for the given embedding space. This is achieved by first computing two distinct embeddings, one of which exclusively uses surface-form information and the other context information. The surface-form embedding, denoted v form (w,C) , is obtained from averaging over a set of n-gram embeddings learned by the model; the context embedding v context (w,C) is obtained from averaging over all embeddings of context words in C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-Context Model",
"sec_num": "3.1"
},
{
"text": "The two embeddings are then combined using a weighting coefficient \u03b1 and a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-Context Model",
"sec_num": "3.1"
},
{
"text": "d \u00d7 d matrix A, resulting in the form-context embedding v (w,C) = \u03b1 \u2022 Av context (w,C) + (1 \u2212 \u03b1) \u2022 v form (w,C) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-Context Model",
"sec_num": "3.1"
},
{
"text": "The weighing coefficient \u03b1 is a function of both embeddings, modeled as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-Context Model",
"sec_num": "3.1"
},
{
"text": "\u03b1 = \u03c3(u [v context (w,C) ; v form (w,C) ] + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-Context Model",
"sec_num": "3.1"
},
{
"text": "with u \u2208 R 2d , b \u2208 R being learnable parameters and \u03c3 denoting the sigmoid function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-Context Model",
"sec_num": "3.1"
},
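For concreteness, the combination step above can be sketched in a few lines of NumPy. This is only an illustrative reading of the formulas, assuming pre-computed n-gram and context-word embeddings; all identifiers (form_context_embedding, ngram_vectors, context_vectors) are placeholders and not taken from the authors' released code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def form_context_embedding(ngram_vectors, context_vectors, A, u, b):
    """Combine surface-form and context information as in FCM.

    ngram_vectors:   (k, d) embeddings of the word's n-grams
    context_vectors: (n, d) embeddings of the words in its contexts
    A: (d, d), u: (2d,), b: scalar -- learned parameters
    """
    v_form = ngram_vectors.mean(axis=0)       # surface-form embedding
    v_context = context_vectors.mean(axis=0)  # context embedding (plain average)
    gate = sigmoid(u @ np.concatenate([v_context, v_form]) + b)  # weighting coefficient alpha
    return gate * (A @ v_context) + (1.0 - gate) * v_form
```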
{
"text": "FCM pays equal attention to all contexts of a word but often, only few contexts are actually suitable for inferring the word's meaning. We introduce attentive mimicking (AM) to address this problem: we allow our model to assign different weights to contexts based on some measure of their \"reliability\". To this end, let C = {C 1 , . . . , C m } where each C i is a multiset of words. We replace the context-embedding of FCM with a weighted embedding",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
{
"text": "v context (w,C) = m i=1 \u03c1(C i , C) \u2022 v C i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
{
"text": "where v C i is the average of the embeddings of words in C i and \u03c1 measures context reliability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
{
"text": "To obtain a meaningful measure of reliability, our key observation is that reliable contexts typically agree with many other contexts. Consider a word w for which six out of ten contexts contain words referring to sports. Due to this high intercontext agreement, it is then reasonable to assume that w is from the same domain and, consequently, that the four contexts not related to sports are less informative. To formalize this idea, we first define the similarity between two contexts as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
{
"text": "s(C 1 , C 2 ) = (M v C 1 ) \u2022 (M v C 2 ) \u221a d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
{
"text": "with M \u2208 R d\u00d7d a learnable parameter, inspired by Vaswani et al. (2017) 's scaled dot-product attention. We then define the reliability of a context as",
"cite_spans": [
{
"start": 50,
"end": 71,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
{
"text": "\u03c1(C, C) = 1 Z m i=1 s(C, C i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
{
"text": "Z = m i=1 m j=1 s(C i , C j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
{
"text": "is a normalization constant, ensuring that all weights sum to one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
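The reliability weighting can be written compactly once every context $C_i$ has been reduced to its average word embedding $v_{C_i}$. The sketch below follows the three formulas above; the function and variable names are again illustrative only.

```python
import numpy as np

def attentive_context_embedding(context_avgs, M):
    """context_avgs: (m, d) matrix whose i-th row is the average embedding of context C_i.
    M: learnable (d, d) matrix from the scaled dot-product similarity s."""
    d = context_avgs.shape[1]
    projected = context_avgs @ M.T                 # rows are M v_{C_i}
    sims = (projected @ projected.T) / np.sqrt(d)  # s(C_i, C_j) for all pairs
    rho = sims.sum(axis=1) / sims.sum()            # reliability weights, summing to one
    return rho @ context_avgs                      # weighted context embedding
```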
{
"text": "The model is trained by randomly sampling words w and contexts C from a large corpus and mimicking the original embedding of w, i.e., minimizing the squared distance between the original embedding and v (w,C) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Attention",
"sec_num": "3.2"
},
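Putting the two sketches together, one mimicking step reduces to a squared-distance loss between the inferred vector and the word's original embedding. The toy example below reuses the two functions sketched above with random data; the dimensions and random inputs are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m = 4, 3, 5                        # embedding dim, #n-grams, #contexts
A, M = rng.normal(size=(d, d)), rng.normal(size=(d, d))
u, b = rng.normal(size=2 * d), 0.0

ngrams = rng.normal(size=(k, d))         # n-gram embeddings of the sampled word
context_avgs = rng.normal(size=(m, d))   # one averaged vector per sampled context
gold = rng.normal(size=d)                # original skipgram embedding to mimic

v_context = attentive_context_embedding(context_avgs, M)
# passing the already-attended vector through the averaging step is a no-op
v = form_context_embedding(ngrams, v_context[None, :], A, u, b)
loss = float(np.sum((v - gold) ** 2))    # squared distance minimized during training
```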
{
"text": "For our experiments, we follow the setup of Schick and Sch\u00fctze (2019) and use the Westbury Wikipedia Corpus (WWC) (Shaoul and Westbury, 2010) for training of all embedding models. To obtain training instances (w, C) for both FCM and AM, we sample words and contexts from the WWC based on their frequency, using only words that occur at least 100 times. We always train FCM and AM on skipgram embeddings (Mikolov et al., 2013) obtained using Gensim (\u0158eh\u016f\u0159ek and Sojka, 2010).",
"cite_spans": [
{
"start": 44,
"end": 69,
"text": "Schick and Sch\u00fctze (2019)",
"ref_id": "BIBREF16"
},
{
"start": 114,
"end": 141,
"text": "(Shaoul and Westbury, 2010)",
"ref_id": "BIBREF17"
},
{
"start": 403,
"end": 425,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Our experimental setup differs from that of Schick and Sch\u00fctze (2019) in two respects: (i) Instead of using a fixed number of contexts for C, we randomly sample between 1 and 64 contexts and (ii) we fix the number of training epochs to 5. The rationale behind our first modification is that we want our model to produce high-quality embeddings both when we only have a few contexts available and when there is a large number of contexts to pick from. We fix the number of epochs simply because our evaluation tasks come without development sets on which it may be optimized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "To evaluate our model, we apply a novel, intrinsic evaluation method that compares embedding spaces by transforming them into a common space ( \u00a74.1). We also test our model on three word-level downstream tasks ( \u00a74.2, \u00a74.3, \u00a74.4) to demonstrate its versatile applicability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We introduce a novel evaluation method that explicitly evaluates embeddings for rare and medium-frequency words by downsampling frequent words from the WWC to a fixed number of occurrences. 1 We then compare \"gold\" skipgram embeddings obtained from the original corpus with embeddings learned by some model trained on the downsampled corpus. To this end, we transform the two embedding spaces into a common space using VecMap (Artetxe et al., 2018) , where we provide all but the downsampled words as a mapping dictionary. Intuitively, the better a model is at inferring an embedding from few observations, the more similar its embeddings must be to the gold embeddings in this common space. We thus measure the quality of a model by computing the average cosine similarity between its embeddings and the gold embeddings. As baselines, we train skipgram and fastText on the downsampled corpus. We then train Mimick (Pinter et al., 2017) as well as both FCM and AM on the skipgram embeddings. We also try a variant where the downsampled words are included in the training set (i.e., the mimicking models explicitly learn to reproduce their skipgram embeddings). This allows the model to learn representations of those words not completely from scratch, but to also make use of their original embeddings. Accordingly, we expect this variant to only be helpful if a word is not too rare, i.e. its original embedding is already of decent quality. Table 1 shows that for words with a frequency below 32, FCM and AM infer much better embeddings than all baselines. The comparably poor performance of Mimick is consistent with the observation of Pinter et al. (2017) that this method captures mostly syntactic information. Given four or more contexts, AM leads to consistent improvements over FCM. The variants that include downsampled words during training ( \u2020) still outperform skipgram for 32 and more observations, but perform worse than the default models for less frequent words.",
"cite_spans": [
{
"start": 426,
"end": 448,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 915,
"end": 936,
"text": "(Pinter et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 1639,
"end": 1659,
"text": "Pinter et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 1443,
"end": 1450,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "VecMap",
"sec_num": "4.1"
},
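The scoring step of this evaluation is simple once both spaces have been mapped into a common space with VecMap: embedding quality is the mean cosine similarity to the gold skipgram vectors. A minimal sketch follows (the mapping itself is done by VecMap and is not shown here; the function name is illustrative).

```python
import numpy as np

def mean_cosine_to_gold(inferred, gold):
    """inferred, gold: (n, d) matrices; row i holds the two embeddings of the
    i-th downsampled word after both spaces were mapped into a common space."""
    inferred = inferred / np.linalg.norm(inferred, axis=1, keepdims=True)
    gold = gold / np.linalg.norm(gold, axis=1, keepdims=True)
    return float(np.mean(np.sum(inferred * gold, axis=1)))
```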
{
"text": "We follow the experimental setup of Rothe et al. (2016) 87.8 90.7 79.1 86.5 72.0 80.9 59.5 70.9 37.8 56.1 28.9 53.4 31.1 54.5 AM+skip 87.8 90.7 79.1 86.5 72.0 81.6 60.1 70.9 40.7 59.9 35.0 59.7 36.8 60.5 Table 3 : Results on the Name Typing dataset for various word frequencies f . The model that uses a linear combination of AM embeddings with skipgram is denoted AM+skip.",
"cite_spans": [
{
"start": 36,
"end": 55,
"text": "Rothe et al. (2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentiment Dictionary",
"sec_num": "4.2"
},
{
"text": "2004) and the NRC Emotion lexicons (Mohammad and Turney, 2013) to obtain a training set of words with binary sentiment labels. On that data, we train a logistic regression model to classify words based on their embeddings. For our evaluation, we then use SemEval2015 Task 10E where words are assigned a sentiment rating between 0 (completely negative) and 1 (completely positive) and use Spearman's \u03c1 as a measure of similarity between gold and predicted ratings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Dictionary",
"sec_num": "4.2"
},
{
"text": "We train logistic regression models on both skipgram and fastText embeddings and, for testing, replace skipgram embeddings by embeddings inferred from the mimicking models. Table 2 shows that for rare and medium-frequency words, AM again outperforms all other models.",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 180,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Sentiment Dictionary",
"sec_num": "4.2"
},
{
"text": "We use Yaghoobzadeh et al. (2018)'s name typing dataset for the task of predicting the fine-grained named entity types of a word, e.g., PRESIDENT and LOCATION for \"Washington\". We train a logistic regression model using the same setup as in \u00a74.2 and evaluate on all words from the test set that occur \u2264100 times in WWC. Based on results in \u00a74.1, where AM only improved representations for words occurring fewer than 32 times, we also try the variant AM+skip that, in testing, replaces v (w,C) with the linear combination",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name Typing",
"sec_num": "4.3"
},
{
"text": "v w = \u03b2(f w ) \u2022 v (w,C) + (1 \u2212 \u03b2(f w )) \u2022 v w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name Typing",
"sec_num": "4.3"
},
{
"text": "where v w is the skipgram embedding of w, f w is the frequency of w and \u03b2(f w ) scales linearly from 1 for f w = 0 to 0 for f w = 32. Table 3 gives accuracy and micro F1 for several word frequency ranges. In accordance with results from previous experiments, AM performs drastically better than the baselines for up to 16 occurrences. Notably, the linear combination of skipgram and AM achieves by far the best overall results.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Name Typing",
"sec_num": "4.3"
},
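The AM+skip interpolation described above is a simple frequency-dependent mixture; below is a minimal sketch, assuming the cutoff of 32 occurrences from the text (function names are illustrative, and v_am / v_skipgram stand for the AM-inferred and original skipgram vectors).

```python
def beta(freq, cutoff=32):
    """Interpolation weight: 1 at frequency 0, decreasing linearly to 0 at the cutoff."""
    return max(0.0, 1.0 - freq / cutoff)

def am_plus_skip(v_am, v_skipgram, freq):
    """Mix the AM-inferred embedding with the original skipgram embedding of the word."""
    b = beta(freq)
    return b * v_am + (1.0 - b) * v_skipgram
```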
{
"text": "The Chimeras (CHIMERA) dataset (Lazaridou et al., 2017) consists of similarity scores for pairs of made-up words and regular words. CHIMERA provides only six contexts for each made-up word, so it is not ideal for evaluating our model. Nonetheless, we can still use it to analyze the difference of FCM (no attention) and AM (using attention). As the surface-form of the made-up words was constructed randomly and thus carries no meaning at all, we restrict ourselves to the context parts of FCM and AM (referred to as FCMctx and AM-ctx). We use the test set of Herbelot and Baroni (2017) and compare the given similarity scores with the cosine similarities of the corresponding word embeddings, using FCM-ctx and AM-ctx to obtain embeddings for the made-up words. Table 4 gives Spearman's \u03c1 for our model and various baselines; baseline results are adopted from Khodak et al. (2018) . We do not report results for Mimick as its representations for novel words are entirely based on their surface form. While AM performs worse than previous methods for 2-4 sentences, it drastically improves over the best result currently published for 6 sentences. Again, context attention consistently improves results: AM-ctx performs better than FCM-ctx, regardless of the number of contexts. Since A La Carte (Khodak et al., 2018) , the method performing best for 2-4 contexts, is conceptually similar to FCM, it most likely would similarly benefit from context attention.",
"cite_spans": [
{
"start": 31,
"end": 55,
"text": "(Lazaridou et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 861,
"end": 881,
"text": "Khodak et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 1296,
"end": 1317,
"text": "(Khodak et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 763,
"end": 770,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Chimeras",
"sec_num": "4.4"
},
{
"text": "While the effect of context attention is more pronounced when there are many contexts available, we still perform a quantitative analysis of one exemplary instance of CHIMERA to better understand what AM learns; we consider the madeup word \"petfel\", a combination of \"saxophone\" and \"harmonica\", whose occurrences are shown in Table 4 : Spearman's \u03c1 for the Chimeras task given 2, 4 and 6 context sentences for the made-up word sentence \u03c1",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 334,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Chimeras",
"sec_num": "4.4"
},
{
"text": "\u2022 i doubt if we ll ever hear a man play a petfel like that again 0.19",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chimeras",
"sec_num": "4.4"
},
{
"text": "\u2022 also there were some other assorted instruments including a petfel and some wind chimes 0.31",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chimeras",
"sec_num": "4.4"
},
{
"text": "\u2022 they finished with new moon city a song about a suburb of drem which featured beautifully controlled petfel playing from callum 0.23",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chimeras",
"sec_num": "4.4"
},
{
"text": "\u2022 a programme of jazz and classical music showing the petfel as an instrument of both musical genres 0.27 Table 5 : Context sentences and corresponding attention weights for the made-up word \"petfel\"",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Chimeras",
"sec_num": "4.4"
},
{
"text": "(2) and (4); consistently, the embeddings obtained from those sentences are very similar. Furthermore, of all four sentences, these two are the ones best suited for a simple averaging model as they contain informative, frequent words like \"instrument\", \"chimes\" and \"music\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chimeras",
"sec_num": "4.4"
},
{
"text": "We have introduced attentive mimicking (AM) and showed that attending to informative and reliable contexts improves representations of rare and medium-frequency words for a diverse set of evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In future work, one might investigate whether attention mechanisms on the word level (cf. Ling et al., 2015) can further improve the model's performance. Furthermore, it would be interesting to investigate whether the proposed architecture is also beneficial for languages typologically different from English, e.g., morphologically rich languages.",
"cite_spans": [
{
"start": 90,
"end": 108,
"text": "Ling et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This work was funded by the European Research Council (ERC #740516). We would like to thank the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5012--5019",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Generalizing and improving bilingual word embed- ding mappings with a multi-step framework of lin- ear transformations. In Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence, pages 5012-5019.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Morphological smoothing and extrapolation of word embeddings",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1651--1660",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Hinrich Sch\u00fctze, and Jason Eisner. 2016. Morphological smoothing and extrapolation of word embeddings. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1651- 1660. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "High-risk learning: acquiring new word vectors from tiny data",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "304--309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie Herbelot and Marco Baroni. 2017. High-risk learning: acquiring new word vectors from tiny data. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 304-309. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A la carte embedding: Cheap but effective induction of semantic feature vectors",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Khodak",
"suffix": ""
},
{
"first": "Nikunj",
"middle": [],
"last": "Saunshi",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Brandon",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "12--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Khodak, Nikunj Saunshi, Yingyu Liang, Tengyu Ma, Brandon Stewart, and Sanjeev Arora. 2018. A la carte embedding: Cheap but effective induction of semantic feature vectors. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 12-22. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "The International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. The Inter- national Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multimodal word meaning induction from minimal exposure to natural text",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2017,
"venue": "Cognitive Science",
"volume": "41",
"issue": "",
"pages": "677--705",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Marco Marelli, and Marco Baroni. 2017. Multimodal word meaning induction from minimal exposure to natural text. Cognitive Science, 41:677-705.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Not all contexts are created equal: Better word representations with variable attention",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Ramon",
"middle": [],
"last": "Fermandez",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
},
{
"first": "Chu-Cheng",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1367--1372",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Yulia Tsvetkov, Silvio Amir, Ramon Fer- mandez, Chris Dyer, Alan W Black, Isabel Tran- coso, and Chu-Cheng Lin. 2015. Not all contexts are created equal: Better word representations with variable attention. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 1367-1372.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Richard Socher, and Christopher Man- ning. 2013. Better word representations with recur- sive neural networks for morphology. In Proceed- ings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 104-113.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Crowdsourcing a word-emotion association lexicon",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Intelligence",
"volume": "29",
"issue": "3",
"pages": "436--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad and Peter D Turney. 2013. Crowd- sourcing a word-emotion association lexicon. Com- putational Intelligence, 29(3):436-465.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mimicking word embeddings using subword RNNs",
"authors": [
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "102--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuval Pinter, Robert Guthrie, and Jacob Eisenstein. 2017. Mimicking word embeddings using subword RNNs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 102-112. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Radim\u0159eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta. ELRA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Ultradense word embeddings by orthogonal transformation",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ebert",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "767--777",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1091"
]
},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe, Sebastian Ebert, and Hinrich Sch\u00fctze. 2016. Ultradense word embeddings by orthogonal transformation. In Proceedings of the 2016 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 767-777. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Incorporating subword information into matrix factorization word embeddings",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Salle",
"suffix": ""
},
{
"first": "Aline",
"middle": [],
"last": "Villavicencio",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Second Workshop on Subword/Character LEvel Models",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Salle and Aline Villavicencio. 2018. Incor- porating subword information into matrix factoriza- tion word embeddings. In Proceedings of the Sec- ond Workshop on Subword/Character LEvel Mod- els, pages 66-71. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning semantic representations for novel words: Leveraging both form and context",
"authors": [
{
"first": "Timo",
"middle": [],
"last": "Schick",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timo Schick and Hinrich Sch\u00fctze. 2019. Learning se- mantic representations for novel words: Leveraging both form and context. In Proceedings of the Thirty- Third AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The westbury lab wikipedia corpus",
"authors": [
{
"first": "Cyrus",
"middle": [],
"last": "Shaoul",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Westbury",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyrus Shaoul and Chris Westbury. 2010. The westbury lab wikipedia corpus.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Morph-fitting: Fine-tuning word vector spaces with simple language-specific rules",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "\u00d3 S\u00e9aghdha",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "56--68",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1006"
]
},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107, Nikola Mrk\u0161i\u0107, Roi Reichart, Diarmuid O S\u00e9aghdha, Steve Young, and Anna Korhonen. 2017. Morph-fitting: Fine-tuning word vector spaces with simple language-specific rules. In Pro- ceedings of the 55th Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 1: Long Papers), pages 56-68. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Charagram: Embedding words and sentences via character n-grams",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding words and sentences via character n-grams. CoRR, abs/1607.02789.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Evaluating word embeddings in multi-label classification using fine-grained name typing",
"authors": [
{
"first": "Yadollah",
"middle": [],
"last": "Yaghoobzadeh",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The Third Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "101--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yadollah Yaghoobzadeh, Katharina Kann, and Hin- rich Sch\u00fctze. 2018. Evaluating word embeddings in multi-label classification using fine-grained name typing. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 101-106. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td/><td>f = 1</td><td/><td colspan=\"2\">f \u2208 [2, 4)</td><td colspan=\"2\">f \u2208 [4, 8)</td><td colspan=\"2\">f \u2208 [8, 16)</td><td colspan=\"2\">f \u2208 [16, 32)</td><td colspan=\"2\">f \u2208 [32, 64)</td><td colspan=\"2\">f \u2208 [1, 100]</td></tr><tr><td>model</td><td>acc</td><td>F1</td><td>acc</td><td>F1</td><td>acc</td><td>F1</td><td>acc</td><td>F1</td><td>acc</td><td>F1</td><td>acc</td><td>F1</td><td>acc</td><td>F1</td></tr><tr><td>skipgram</td><td>0.0</td><td>2.6</td><td>2.2</td><td colspan=\"11\">7.8 11.5 30.7 44.7 64.5 37.8 59.4 35.0 59.7 33.5 58.3</td></tr><tr><td>fastText</td><td colspan=\"14\">44.6 51.1 50.5 65.1 48.4 62.9 44.3 59.6 34.1 53.5 29.8 55.7 31.4 56.4</td></tr><tr><td>Mimick</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>1.0</td><td>4.0</td><td>1.0</td><td>1.0</td><td colspan=\"2\">3.9 14.4</td><td colspan=\"2\">4.2 14.8</td></tr><tr><td>FCM</td><td colspan=\"14\">86.5 88.9 76.9 85.1 72.0 81.8 57.7 68.5 36.0 54.2 27.7 52.5 30.1 53.4</td></tr><tr><td>AM</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"text": "and fuse Opinion lexicon(Hu and Liu,",
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>model</td><td>2 sent.</td><td>4 sent.</td><td>6 sent.</td></tr><tr><td>skipgram</td><td>0.146</td><td>0.246</td><td>0.250</td></tr><tr><td>additive</td><td>0.363</td><td>0.370</td><td>0.360</td></tr><tr><td>additive \u2212 sw</td><td>0.338</td><td>0.362</td><td>0.408</td></tr><tr><td>Nonce2Vec</td><td>0.332</td><td>0.367</td><td>0.389</td></tr><tr><td>A La Carte</td><td>0.363</td><td>0.384</td><td>0.394</td></tr><tr><td>FCM-ctx</td><td>0.337</td><td>0.359</td><td>0.422</td></tr><tr><td>AM-ctx</td><td>0.342</td><td>0.376</td><td>0.436</td></tr></table>",
"num": null,
"text": "The model attends most to sentences",
"type_str": "table"
}
}
}
}