{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:29:41.332636Z" }, "title": "Enriching Word Embeddings with Temporal and Spatial Information", "authors": [ { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": {} }, "email": "hgong6@illinois.edu" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": {} }, "email": "spbhat2@illinois.edu" }, { "first": "Pramod", "middle": [], "last": "Viswanath", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": {} }, "email": "pramodv@illinois.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The meaning of a word is closely linked to sociocultural factors that can change over time and location, resulting in corresponding meaning changes. Taking a global view of words and their meanings in a widely used language, such as English, may require us to capture more refined semantics for use in time-specific or location-aware situations, such as the study of cultural trends or language use. However, popular vector representations for words do not adequately include temporal or spatial information. In this work, we present a model for learning word representation conditioned on time and location. In addition to capturing meaning changes over time and location, we require that the resulting word embeddings retain salient semantic and geometric properties. We train our model on time-and locationstamped corpora, and show using both quantitative and qualitative evaluations that it can capture semantics across time and locations. We note that our model compares favorably with the state-of-the-art for time-specific embedding, and serves as a new benchmark for location-specific embeddings.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The meaning of a word is closely linked to sociocultural factors that can change over time and location, resulting in corresponding meaning changes. Taking a global view of words and their meanings in a widely used language, such as English, may require us to capture more refined semantics for use in time-specific or location-aware situations, such as the study of cultural trends or language use. However, popular vector representations for words do not adequately include temporal or spatial information. In this work, we present a model for learning word representation conditioned on time and location. In addition to capturing meaning changes over time and location, we require that the resulting word embeddings retain salient semantic and geometric properties. We train our model on time-and locationstamped corpora, and show using both quantitative and qualitative evaluations that it can capture semantics across time and locations. 
We note that our model compares favorably with the state-of-the-art for time-specific embedding, and serves as a new benchmark for location-specific embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The use of word embeddings as a form of lexical representation has transformed the use of natural language processing for many applications such as machine translation (Qi et al., 2018) and language understanding (Peters et al., 2018) . The changing of word meaning over the course of time and space, termed semantic drift, has been the subject of long standing research in diachronic linguistics (Ullmann, 1979; Blank, 1999) . Additionally, the emergence of distinct geographically-qualified English varieties (e.g., South African English) has given rise to salient lexical variation giving several English words different meanings depending on the geographic location of their use, as documented in studies on World Englishes (Kachru et al., 2006; Mesthrie and Bhatt, 2008) . Considering the multiplicity of meanings that a word can take over the span of time and space owing to inevitable linguistic, and sociocultural factors among others, a static representation of a word as a single word embedding seems rather limited. Take the word apple as an example. Its early to near-recent mentions in written documents referred only to a fruit, but in the recent times it is also the name of a large technology company. Another example is the title for the head of government, which is \"president\" in the USA, and is \"prime minister\" in Canada.", "cite_spans": [ { "start": 168, "end": 185, "text": "(Qi et al., 2018)", "ref_id": "BIBREF26" }, { "start": 213, "end": 234, "text": "(Peters et al., 2018)", "ref_id": "BIBREF24" }, { "start": 397, "end": 412, "text": "(Ullmann, 1979;", "ref_id": "BIBREF32" }, { "start": 413, "end": 425, "text": "Blank, 1999)", "ref_id": "BIBREF3" }, { "start": 728, "end": 749, "text": "(Kachru et al., 2006;", "ref_id": "BIBREF15" }, { "start": 750, "end": 775, "text": "Mesthrie and Bhatt, 2008)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Naturally, we expect that one word should have different representations conditioned on the time or location. In this paper, we study how word embeddings can be enriched to encode their semantic drift in time and space. Extending a recent line of research on time-specific embeddings, including the works by Bamler and Mandt and Yao et al., we propose a model to capture varying lexical semantics across different conditions-of time and location.", "cite_spans": [ { "start": 308, "end": 340, "text": "Bamler and Mandt and Yao et al.,", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A key technical challenge of learning conditioned embeddings is to put the embeddings (derived from different time periods or geographical locations) in the same vector space and preserve their geometry within and across different instances of the conditions.Traditional approaches involve a two-step mechanism of first learning the sets of embeddings separately under the different conditions, and then aligning them via appropriate transformations (Kulkarni et al., 2015; Hamilton et al., 2016; Zhang et al., 2016) . A primary limitation of these methods is their inadequate representation of word semantics, as we show in our comparative evaluation. 
Another approach to conditioned embedding uses a loss function with regularizers over word embeddings across conditions for their smooth trajectory in the vector space (Yao et al., 2018) . However, its scope is limited to modeling semantic drift over only time.", "cite_spans": [ { "start": 450, "end": 473, "text": "(Kulkarni et al., 2015;", "ref_id": "BIBREF16" }, { "start": 474, "end": 496, "text": "Hamilton et al., 2016;", "ref_id": "BIBREF12" }, { "start": 497, "end": 516, "text": "Zhang et al., 2016)", "ref_id": "BIBREF34" }, { "start": 821, "end": 839, "text": "(Yao et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a model for general conditioned embeddings, with the novelty that it explicitly preserves embedding geometry under different conditions and captures different degrees of word semantic changes. We summarize our contributions below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We propose an unsupervised model to learn condition-specific embeddings including timespecific and location-specific embeddings;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Using benchmark datasets we demonstrate the state-of-the-art performance of the proposed model in accurately capturing word semantics across time periods and geographical regions;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. We provide the first dataset 1 to evaluate word embeddings across locations to foster research in this direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Time-specific embeddings. The evolution of word meaning with time has been a widely studied problem in sociolinguistics (Ullmann, 1979; Tang, 2018) . Early computational approaches to uncovering these trends have relied on frequency-based models, which have used frequency changes to trace semantic shift over time (Lijffijt et al., 2012; Choi and Varian, 2012; Michel et al., 2011) . More recent works have sought to study these phenomena using distributional models (Kutuzov et al., 2018; Huang and Paul, 2019; Schlechtweg et al., 2020) . Recent approaches on time-specific embeddings can be divided into three broad categories: aligning independently trained embeddings across time, joint training of time-dependent embeddings and using contextualized vectors from pre-trained models. Approaches of the first kind include the works by Kulkarni et al., Hamilton et al. and Zhang et al. . 
They rely on pre-training multiple sets of embeddings for different times independently, and then aligning one set of embeddings with another set so that two sets of embeddings are comparable.", "cite_spans": [ { "start": 120, "end": 135, "text": "(Ullmann, 1979;", "ref_id": "BIBREF32" }, { "start": 136, "end": 147, "text": "Tang, 2018)", "ref_id": "BIBREF31" }, { "start": 315, "end": 338, "text": "(Lijffijt et al., 2012;", "ref_id": "BIBREF19" }, { "start": 339, "end": 361, "text": "Choi and Varian, 2012;", "ref_id": "BIBREF5" }, { "start": 362, "end": 382, "text": "Michel et al., 2011)", "ref_id": "BIBREF21" }, { "start": 468, "end": 490, "text": "(Kutuzov et al., 2018;", "ref_id": "BIBREF18" }, { "start": 491, "end": 512, "text": "Huang and Paul, 2019;", "ref_id": "BIBREF14" }, { "start": 513, "end": 538, "text": "Schlechtweg et al., 2020)", "ref_id": "BIBREF29" }, { "start": 838, "end": 887, "text": "Kulkarni et al., Hamilton et al. and Zhang et al.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The second approach-joint training-aims to guarantee the alignment of embeddings in the same vectors space so that they are directly comparable. Compared with the previous category of ap-proaches, the joint learning of time-stamped embeddings has shown improved abilities to capture semantic changes across time. Bamler and Mandt used a probabilistic model to learn time-specific embeddings (Bamler and Mandt, 2017) . They make a parametric assumption (Gaussian) on the evolution of embeddings to guarantee the embedding alignment. Yao et al. learned embeddings by the factorization of a positive pointwise mutual information (PPMI) matrix. They imposed L2 constraints on embeddings from neighboring time periods for embedding alignment (Yao et al., 2018) . Rosenfeld and Erk proposed a neural model to first encode time and word information respectively and then to learn time-specific embeddings (Rosenfeld and Erk, 2018) . Dubossarsky et al. aligned word embeddings by sharing their context embeddings at different times (Dubossarsky et al., 2019) .", "cite_spans": [ { "start": 391, "end": 415, "text": "(Bamler and Mandt, 2017)", "ref_id": "BIBREF1" }, { "start": 737, "end": 755, "text": "(Yao et al., 2018)", "ref_id": "BIBREF33" }, { "start": 758, "end": 771, "text": "Rosenfeld and", "ref_id": null }, { "start": 898, "end": 923, "text": "(Rosenfeld and Erk, 2018)", "ref_id": "BIBREF27" }, { "start": 1024, "end": 1050, "text": "(Dubossarsky et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Some recent works fall in the third category, retrieving contextualized representations from pretrained models such as BERT (Devlin et al., 2018) as time-specific sense embeddings of words (Hu et al., 2019; Giulianelli et al., 2020) . These pretrained embeddings are limited to the scope of local contexts, while we learn the global representation of words in a given time or location.", "cite_spans": [ { "start": 124, "end": 145, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 189, "end": 206, "text": "(Hu et al., 2019;", "ref_id": "BIBREF13" }, { "start": 207, "end": 232, "text": "Giulianelli et al., 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The underlying mathematical models of these previous works on temporal embeddings are discussed in the supplementary material. 
Our model belongs to the second category of joint embedding training. Different from previous works, our embedding is based on a model that explicitly takes into account the important semantic properties of time-specific embeddings. Embedding with spatial information. Lexical semantics is also sensitive to spatial factors. For example, the word denoting the head of government of a nation may be used differently depending on the region. For instance, the words can range from president to prime minister or king depending on the region. Language variation across regional contexts has been analyzed in sociolinguistics and dialectology studies (e.g., (Silva-Corval\u00e1n, 2006; Kulkarni et al., 2016) ). It is also understood that a deeper understanding of semantics enhanced with location information is critical to location-sensitive applications such as content localization of global search engines (Brandon Jr, 2001 ).", "cite_spans": [ { "start": 781, "end": 803, "text": "(Silva-Corval\u00e1n, 2006;", "ref_id": "BIBREF30" }, { "start": 804, "end": 826, "text": "Kulkarni et al., 2016)", "ref_id": "BIBREF17" }, { "start": 1029, "end": 1046, "text": "(Brandon Jr, 2001", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Some approaches towards this have included, a latent variable model proposed for geographical linguistic variation (Eisenstein et al., 2010) and a skip-gram model for geographically situated language (Bamman et al., 2014) . The current study is most similar to (Bamman et al., 2014) with the overlap in our intents to learn location-specific embeddings for measuring semantic drift. Most studies on location-dependent language resort to a qualitative evaluation, whereas (Bamman et al., 2014) resorts to a quantitative analysis for entity similarity. However, it is limited to a given region without exploring semantic equivalence of words across different geographic regions. To the extent we are aware, this is the first study to present a quantitative evaluation of word representations across geographical regions with the use of a dataset constructed for the purpose.", "cite_spans": [ { "start": 115, "end": 140, "text": "(Eisenstein et al., 2010)", "ref_id": "BIBREF8" }, { "start": 200, "end": 221, "text": "(Bamman et al., 2014)", "ref_id": "BIBREF2" }, { "start": 261, "end": 282, "text": "(Bamman et al., 2014)", "ref_id": "BIBREF2" }, { "start": 471, "end": 492, "text": "(Bamman et al., 2014)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We now introduce the model on which the condition-specific embedding training is based in this section. We assume access to a corpus divided into sub-corpora based on their conditions (time or location), and texts in the same condition (e.g., same time period) are gathered in each sub-corpus. For each condition, the co-occurrence counts of word pairs gathered from its sub-corpus are the corpus statistics we use for the embedding training. We note that because these sub-corpora vary in size, we scale the word co-occurrences of every condition so that all sub-corpora have the same total number of word pairs. We term the scaled value of word co-occurrences of word w i and w j in condition c as X i,j,c .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "A static model (without regard to the temporal or spatial conditions) proposed by Arora et al. 
provides the unifying theme for the seemingly different embedding approaches of word2vec and GloVe. In particular, it reveals that corpus statistics such as word co-occurrences can be estimated from embeddings. Inspired by this, we propose a model for conditioned embeddings, and characterize such a model by its ability to capture the lexical semantic properties across different conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Before exploring the details of our model for condition-specific embeddings, we discuss some desired semantic properties of these embeddings. We expect the embeddings to capture time- and location-sensitive lexical semantics. We denote by c the condition we use to refine word embeddings, which can be a specific time period or a location. We then have temporal embeddings if the condition is a time period, and spatial embeddings if the condition is a location. For a word w, the condition-specific word embedding for condition c is denoted as v_{w,c}. The key semantic properties of the condition-specific word embedding that we consider in our model are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties of Conditioned Embeddings", "sec_num": "3.1" }, { "text": "(1) Preservation of geometry. One geometric property of static embeddings is that the difference vector encodes word relations, i.e., v_{bigger} - v_{big} ≈ v_{greater} - v_{great} (Mikolov et al., 2013). Analogously, for the condition-specific embeddings of semantically stable words across conditions, given word pairs (w_1, w_2) and (w_3, w_4) with the same underlying lexical relation, we expect the following equation to hold in any condition c.", "cite_spans": [ { "start": 136, "end": 158, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Properties of Conditioned Embeddings", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v_{w_1,c} - v_{w_2,c} \\approx v_{w_3,c} - v_{w_4,c}.", "eq_num": "(1)" } ], "section": "Properties of Conditioned Embeddings", "sec_num": "3.1" }, { "text": "This property is implicitly preserved in approaches aligning independently trained embeddings with linear transformations (Kulkarni et al., 2015).", "cite_spans": [ { "start": 126, "end": 149, "text": "(Kulkarni et al., 2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Properties of Conditioned Embeddings", "sec_num": "3.1" }, { "text": "(2) Consistency over conditions. Most word meanings change slowly over a given condition, i.e., their condition-specific word embeddings should be highly correlated (Hamilton et al., 2016). When the condition is a time period, for example, with c_1 the year 2000 and c_2 the year 2001, we expect that for a given word, v_{w,c_1} and v_{w,c_2} have high similarity given their temporal proximity. The consistency property is preserved in models which jointly train embeddings across conditions (e.g., (Yao et al., 2018)).", "cite_spans": [ { "start": 165, "end": 188, "text": "(Hamilton et al., 2016)", "ref_id": "BIBREF12" }, { "start": 497, "end": 515, "text": "(Yao et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Properties of Conditioned Embeddings", "sec_num": "3.1" }, { "text": "(3) Different degrees of word change. 
Although word meanings change over time, not all words undergo this change to the same degree; some words change dramatically while others stay relatively stable across conditions (Blank, 1999). In our formulation, we require the representation to capture the different degrees of word meaning change. This property is unexplored in prior studies. We incorporate these semantic properties as explicit constraints into our model for condition-specific embeddings, which we formulate as an optimization problem.", "cite_spans": [ { "start": 218, "end": 231, "text": "(Blank, 1999)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Properties of Conditioned Embeddings", "sec_num": "3.1" }, { "text": "We propose a model that generates embeddings satisfying the semantic properties discussed above. Writing the embedding v_{w,c} of word w in condition c as a function of its condition-independent representation v_w, condition representation vector q_c and deviation embedding d_{w,c}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v_{w,c} = v_w \\odot q_c + d_{w,c},", "eq_num": "(2)" } ], "section": "Model", "sec_num": "3.2" }, { "text": "where ⊙ is the Hadamard product (i.e., elementwise multiplication). We decompose the conditioned representation into three component embeddings. This novel representation is motivated by the intuition that a word w usually carries its basic meaning v_w and its meaning is influenced by different conditions represented by q_c. Moreover, words have different degrees of meaning variation, which is captured by the deviation embedding d_{w,c}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "We begin with a model proposed by Arora et al. for static word embeddings, regardless of the temporal or spatial conditions (Arora et al., 2016). Let v_w be the static representation of word w. For a pair of words w_1 and w_2, the static model assumes that", "cite_spans": [ { "start": 123, "end": 143, "text": "(Arora et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\log P(w_1, w_2) \\approx \\frac{1}{2} \\|v_{w_1} + v_{w_2}\\|^2,", "eq_num": "(3)" } ], "section": "Model", "sec_num": "3.2" }, { "text": "where P(w_1, w_2) is the co-occurrence probability of these two words in the training corpus. Let P_c(w_1, w_2) be the co-occurrence probability of the word pair (w_1, w_2) in condition c. Based on the static model in Eq. (3), for a condition c we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\log P_c(w_1, w_2) \\approx \\frac{1}{2} \\|v_{w_1,c} + u_{w_2,c}\\|^2.", "eq_num": "(4)" } ], "section": "Model", "sec_num": "3.2" }, { "text": "Here, borrowing ideas from previous embedding algorithms including word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014), we use two sets of word embeddings {v_{w,c}} and {u_{w,c}} for a word w_1 and its context word w_2, respectively, in condition c. Accordingly, we have two sets of condition-independent embeddings {v_w} and {u_w}, and two sets of deviation vectors {d_{w,c}} and {d'_{w,c}}. 
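As a concrete illustration of the decomposition in Eq. (2), the following minimal NumPy sketch (our own illustration with made-up variable names, not the authors' released code) composes a condition-specific vector from its three components:

```python
import numpy as np

def condition_specific_vector(base_vec, cond_vec, deviation):
    """Eq. (2): v_{w,c} = v_w (Hadamard) q_c + d_{w,c}."""
    return base_vec * cond_vec + deviation  # '*' is elementwise in NumPy

# Toy 4-dimensional example: a semantically stable word gets a near-zero deviation.
rng = np.random.default_rng(0)
v_w = rng.normal(size=4)            # condition-independent vector of word w
q_c = rng.normal(size=4)            # vector of condition c (e.g., the year 2000)
d_wc = 0.01 * rng.normal(size=4)    # small word-and-condition deviation
v_wc = condition_specific_vector(v_w, q_c, d_wc)
```

The same composition is applied to the context-side vectors, as written out in Eq. (5) below.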
The condition-specific embeddings in Eq. (2) can be written as:", "cite_spans": [ { "start": 76, "end": 98, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF22" }, { "start": 109, "end": 134, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\begin{cases} v_{w_1,c} = v_{w_1} \\odot q_c + d_{w_1,c} \\\\ u_{w_2,c} = u_{w_2} \\odot q_c + d'_{w_2,c} \\end{cases}", "eq_num": "(5)" } ], "section": "Model", "sec_num": "3.2" }, { "text": "By combining Eq. (4) and (5), we derive the model for condition-specific embeddings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\log P_c(w_1, w_2) \\approx \\frac{1}{2} \\|(v_{w_1} \\odot q_c + d_{w_1,c}) + (u_{w_2} \\odot q_c + d'_{w_2,c})\\|^2.", "eq_num": "(6)" } ], "section": "Model", "sec_num": "3.2" }, { "text": "This model can be simplified as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\log P_c(w_1, w_2) \\approx b_{w_1,c} + b'_{w_2,c} + (v_{w_1} \\odot q_c + d_{w_1,c})^T (u_{w_2} \\odot q_c + d'_{w_2,c}),", "eq_num": "(7)" } ], "section": "Model", "sec_num": "3.2" }, { "text": "where b_{w_1,c} and b'_{w_2,c} are bias terms introduced to replace the terms ‖v_{w_1,c}‖^2 and ‖u_{w_2,c}‖^2, respectively. We document the derivation details of Eq. (7) in the supplementary material. Optimization problem. This model enables us to use the conditioned embeddings to estimate the word co-occurrence probabilities in a specific condition. Conversely, we can formulate an optimization problem to train the conditioned embeddings from the word co-occurrences based on our model. We count the co-occurrences of all word pairs (w_1, w_2) in different conditions based on the respective sub-corpora. For example, we count word co-occurrences over different time periods to incorporate temporal information into word embeddings, and we count word pairs in different locations to learn spatially sensitive word representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "Recall that X_{i,j,c} is the scaled co-occurrence count of w_i and w_j in condition c. Denote by W the total vocabulary and by C the number of conditions, where C is the number of time bins for the temporal condition or the number of locations for the location condition. Suppose that V is an (m × |W|) condition-independent word embedding matrix, where each column corresponds to an m-dimensional word vector v_w. Matrix U is an (m × |W|) basic context embedding matrix with each column a context word vector u_w. Matrix Q is an (m × C) matrix, where each column is a condition vector q_c. As for the deviation tensors, D and D', both of size (m × |W| × C), consist of the m-dimensional deviation vectors d_{w,c} and d'_{w,c}, respectively, for word w in condition c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "Our goal is to learn the embeddings V, U, Q, D and D' so as to approximate the word co-occurrence counts based on the model in Eq. (7). 
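Before turning to the training objective, here is a short, hedged sketch (ours, not the paper's implementation; function and variable names are illustrative) of how Eq. (7) maps the component embeddings to a predicted log co-occurrence; the loss described next penalizes the squared gap between this prediction and the logarithm of the observed count X_{i,j,c}:

```python
import numpy as np

def predicted_log_cooccurrence(v_w1, u_w2, q_c, d_w1c, dp_w2c, b_w1c, bp_w2c):
    """Eq. (7): b_{w1,c} + b'_{w2,c} + (v_{w1} . q_c + d_{w1,c})^T (u_{w2} . q_c + d'_{w2,c})."""
    target = v_w1 * q_c + d_w1c      # condition-specific target embedding, Eq. (5)
    context = u_w2 * q_c + dp_w2c    # condition-specific context embedding, Eq. (5)
    return b_w1c + bp_w2c + float(target @ context)

def reconstruction_error(prediction, x_ijc):
    """One squared-error term of the objective: (estimate - log X_{i,j,c})^2."""
    return (prediction - np.log(x_ijc)) ** 2
```

In the full loss below (Eq. (8)), this error is summed over all word pairs and conditions and combined with L2 penalties on the condition and deviation vectors.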
Here, we design a loss function to be the approximation error of the embeddings, which is the mean square error between the condition-specific co-occurrences counted from the respective sub-corpora and their estimates from the embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "To satisfy the property 2 of condition-specific embeddings, we impose L 2 constraints kq a q b k 2 on the embeddings of condition a and b to guarantee the consistency over conditions. For timespecific embeddings, the constraints are for adjacent time bins. As for location-sensitive embeddings, the constraints are for all pairs of location embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "Furthermore, to account for the slow change in meaning of most words across conditions (as in time periods or locations) listed as property 3 of conditioned embeddings, we also include L 2 constraints kDk 2 and kD 0 k 2 on the deviation terms to penalize big changes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "Putting together the approximation error, constraints on condition embeddings and deviations, we have the following loss function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = C X c=1 |W | X i=1 |W | X j=1 \u21e3 (Vi Qc + Di,c) T (Uj Qc + D 0 j,c ) +bi,c + b 0 j,c log(Xi,j,c) 2 + \u21b5 2 X a,b kQa Q b k 2 + 2 (kDk 2 + kD 0 k 2 ).", "eq_num": "(8)" } ], "section": "Model", "sec_num": "3.2" }, { "text": "In addition to ensuring a smooth trajectory of the embeddings, the penalization on the deviations D and D 0 is necessary to avoid the degenerate case that Q c = 0, 8c. We note that, for the constraint on condition embeddings in the loss function L, for time-specific embeddings we use", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "C 1 P c=1 kQ c+1 Q c k 2 , whereas", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "for location-specific embeddings, the constraint", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "becomes C 1 P a=1 C P b=a+1 kQ a Q b k 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "Model Properties. We have presented our approach to learning conditioned embeddings. Now we will show that the proposed model satisfies the aforementioned key properties in Section 3.1. We start with the property of geometry preservation. For a set of semantically stable words S = {w 1 , w 2 , w 3 , w 4 }, it is known that d w,c \u21e1 0 for w 2 S. Suppose that the relation between w 1 and w 2 is the same as the relation between w 3 and w 4 , i.e., v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w 1 v w 2 = v w 3 v w 4 . Given Eq. 
(2) for any condition c, it holds that v w 1 ,c v w 2 ,c \u21e1 (v w 1 v w 2 ) q c \u21e1 (v w 3 v w 4 ) q c \u21e1 v w 3 ,c v w 4 ,c .", "eq_num": "(9)" } ], "section": "Model", "sec_num": "3.2" }, { "text": "As for the second property of consistency over conditions, we again consider a stable word w. Its conditioned embedding v w,c in condition c can be written as v w,c = v w q c . As is shown in Eq. (8), the L 2 constraint kq a q b k 2 is put on different condition embeddings. The difference between word embeddings of w under two conditions a and b are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "kv w,a v w,b k 2 = kv w (q a q b )k 2 \uf8ff 1 2 kv w k 2 \u2022 kq a q b k 2 .", "eq_num": "(10)" } ], "section": "Model", "sec_num": "3.2" }, { "text": "According to Cauchy-Schwartz inequality, the L 2 constraint on condition vectors q a q b also acts as a constraint on word embeddings. With a large coefficient \u21b5, it prevents the embedding from differing too much across conditions, and guarantees the smooth trajectory of words. Lastly we show that our model captures the degree of word changes. The deviation vector d w,c we introduce in the model captures such changes. The L 2 constraint on kd w,c k shown in Eq. 8forces small deviation on most words which are smoothly changing across conditions. We assign a small coefficient to this constraint to allow sudden meaning changes in some words. The hyperparameter setting is discussed below. Embedding training. We have hyperparameters \u21b5 and as weights on the word consistency and the deviation constraints. We set \u21b5 = 1.5 and = 0.2 in time-specific embeddings, and \u21b5 = 1.0 and = 0.2 in location-specific embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "At each training step, we randomly select a nonzero element x i,j,c from the co-occurrence tensor X. Stochastic gradient descent with adaptive learning rate is applied to update V, U, Q, D, D 0 , d and d 0 , which are relevant to x i,j,c to minimize the loss L. The complexity of each step is O(m), where m is the embedding dimension. In each epoch, we traverse all nonzero elements of X. Thus we have nnz(X) steps where nnz(\u2022) is the number of nonzero elements. Although X contains O(|W | 2 ) elements, X is very sparse since many words do not co-occur, so nnz(X) \u2327 |W | 2 . The time complexity of our model is O(E \u2022 m \u2022 nnz(X)) for E epoch training. We set E = 40 in training both temporal and spatial word embeddings. Postprocessing. We note that embeddings under the same condition are not centered, i.e., the word vectors are distributed around some non-zero point. We center these vectors by removing the mean vector of all embeddings in the same condition. 
The centered embedding\u1e7d w,c of word w under condition c is:\u1e7d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w,c = v w,c 1 |W | X w2W vw ,c .", "eq_num": "(11)" } ], "section": "Model", "sec_num": "3.2" }, { "text": "The similarity between words across conditions is measured by the cosine similarity of their centered embeddings {\u1e7d w,c }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.2" }, { "text": "In this section, we compare our condition-specific word embedding models with corresponding state-Across time the, in, to, a, of, it, by, with, at, was, are, and, on, who, for, not, they, but, he, is, from, have, as, has, their, about, her, been, there, or, will, this, said, would Across regions in, from, at, could, its, which, out, but, on, all, has, so, is, are, had, he, been, by, an, it, as, for, was, this, his, be, they, we, her, that, and, with, a, of, the of-the-art models combined with temporal or spatial information. The dimension of all vectors is set as 50. We have the following baselines:", "cite_spans": [ { "start": 110, "end": 465, "text": "the, in, to, a, of, it, by, with, at, was, are, and, on, who, for, not, they, but, he, is, from, have, as, has, their, about, her, been, there, or, will, this, said, would Across regions in, from, at, could, its, which, out, but, on, all, has, so, is, are, had, he, been, by, an, it, as, for, was, this, his, be, they, we, her, that, and, with, a, of, the", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "(1) Basic word2vec (BW2V). It is word2vec CBOW model, which is trained on the entire corpus without considering any temporal or spatial partition (Mikolov et al., 2013) ;", "cite_spans": [ { "start": 146, "end": 168, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "(2) Transformed word2vec (TW2V). Multiple sets of embeddings are trained separately for each condition. Two sets of embeddings are then aligned via a linear transformation (Kulkarni et al., 2015) .", "cite_spans": [ { "start": 172, "end": 195, "text": "(Kulkarni et al., 2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "(3) Aligned word2vec (AW2V): Similar to TW2V, sets of embeddings are first trained independently and then aligned via orthonormal transformations (Hamilton et al., 2016) . (4) Dynamic word embedding (DW2V): This approach proposes a joint training of word embeddings at different times with alignment constraints on temporally adjacent sets of embeddings (Yao et al., 2018) . 
We modify this baseline for location based embeddings by putting its alignment constraints on every two sets of embeddings.", "cite_spans": [ { "start": 146, "end": 169, "text": "(Hamilton et al., 2016)", "ref_id": "BIBREF12" }, { "start": 354, "end": 372, "text": "(Yao et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We used two corpora as training data-the timestamped news corpus of the New York Times collected by (Yao et al., 2018) to train time-specific embeddings and a collection of location-specific texts in English, provided by the International Corpus of English project (ICE, 2019) for locationspecific embeddings. New York Times corpus. The news dataset from New York Times consists of 99, 872 articles from 1990 to 2016. We use time bins of size oneyear, and divide the corpus into 27 time bins.", "cite_spans": [ { "start": 100, "end": 118, "text": "(Yao et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "International Corpus of English (ICE). The ICE project collected written and spoken material in English (one million words each) from different regions of the world after 1989. We used the written portions collected from Canada, East Africa, Hong Kong, India, Ireland, Jamaica, the Philippines, Singapore and the United States of America.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "Deviating from previous works, which remove both stop words and infrequent words from the vo-cabulary (Yao et al., 2018) , we only remove words with observed frequency count less than a threshold. We keep the stop words to show that the trained embedding is able to identify them as being semantically stable. The frequency threshold is set to 200 (the same as (Yao et al., 2018) ) for the New York Times corpus, and to 5 for the ICE corpus given that the smaller size of ICE corpus results in lower word frequency than the news corpus.", "cite_spans": [ { "start": 102, "end": 120, "text": "(Yao et al., 2018)", "ref_id": "BIBREF33" }, { "start": 361, "end": 379, "text": "(Yao et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "We evaluate the enriched word embeddings for the following aspects:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "1. Degree of semantic change. As mentioned in the list of desired properties of conditioned embeddings, words undergo semantic change to different degrees. We check whether our embeddings can identify words whose meanings are relatively stable across conditions. These stable words will be discussed as part of the qualitative evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "2. Discovery of semantic change. Besides stable words, we also study words whose meaning changes drastically over conditions. Since a word's neighbors in the embedding space can reflect its meaning, we find the neighbors in different conditions to demonstrate how the word meaning changes. The discovery of semantic changes will be discussed as part of our qualitative evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "3. Semantic equivalence across conditions. 
All condition-specific embeddings are expected to be in the same vector space, i.e., the cosine similarity between a pair of embeddings reflects their lexical similarity even though they are from different condition values. Finding semantic equivalents with the derived embeddings will be discussed in the quantitative evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "We first identify words that are semantically stable across time and locations respectively. Cosine similarity of embeddings reflects the semantic similarity of words. The embeddings of stable words should have high similarity across conditions since their semantics do not change much with conditions. Therefore, we average the cosine similarity of words between different time durations or locations as the measure of word stability, and rank the words in terms of their stability. The most stable words are listed in Table 1 . We notice that a vast majority of these stable words are frequent words such as function words. It may be interpreted based on the fact that these are words that encode structure (Gong et al., 2017 (Gong et al., , 2018 , and that the structure of well-edited English text has not changed much across time or locations (Poirier, 2014) . It is also in line with our general linguistic knowledge; function words are those with high frequency in corpora, and are semantically relatively stable (Hamilton et al., 2016).", "cite_spans": [ { "start": 709, "end": 727, "text": "(Gong et al., 2017", "ref_id": null }, { "start": 728, "end": 748, "text": "(Gong et al., , 2018", "ref_id": "BIBREF10" }, { "start": 848, "end": 863, "text": "(Poirier, 2014)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 520, "end": 527, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Qualitative Evaluation", "sec_num": "4.2" }, { "text": "Next we focus on the words whose meaning varies with time or location. We first evaluate the semantic changes of embeddings trained on timestamped news corpus, and choose the word apple as an example (more examples are included in the supplementary material). We plot the trajectory of the embeddings of apple and its semantic neighbors over time in Fig. 1(a) . These word vectors are projected to a two-dimensional space using the locally linear embedding approach (Roweis and Saul, 2000) . We notice that the word apple usually referred to a fruit in 1990 given that its neighbors are food items such as pie and pudding. In recent years, the word has taken on the sense of the technology company Apple, which can be seen from the fact that apple is close to words denoting technology companies such as google and microsoft after 1998.", "cite_spans": [ { "start": 466, "end": 489, "text": "(Roweis and Saul, 2000)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 350, "end": 359, "text": "Fig. 1(a)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Qualitative Evaluation", "sec_num": "4.2" }, { "text": "We also evaluate the location-specific word embeddings trained on the ICE corpus on the task of semantic change discovery. Take the word president as an example. We list its neighbors in different locations in Fig. 1(b) . It is close to names of the regional leaders. The neighbors are president names such as bush and clinton in USA, and prime minister names such as harper in Canada and gandhi in India. 
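For concreteness, the centering step of Eq. (11) and the two qualitative probes used above (ranking words by cross-condition stability, and listing a word's nearest neighbors in a given condition) can be sketched as follows; this is our own illustration with hypothetical variable names, not the released code:

```python
import numpy as np

def center(emb):
    """Eq. (11): subtract the mean vector of all words within one condition."""
    return emb - emb.mean(axis=0, keepdims=True)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def stability(word_idx, centered_by_condition):
    """Average pairwise cosine similarity of a word's centered vectors across
    conditions; high values indicate semantically stable words (cf. Table 1)."""
    keys = list(centered_by_condition)
    sims = [cosine(centered_by_condition[a][word_idx], centered_by_condition[b][word_idx])
            for i, a in enumerate(keys) for b in keys[i + 1:]]
    return float(np.mean(sims))

def neighbors(query_vec, centered_target, vocab, k=5):
    """Nearest neighbors of a query vector within one target condition,
    as used for the 'apple' and 'president' examples."""
    scored = [(w, cosine(query_vec, centered_target[i])) for i, w in enumerate(vocab)]
    return sorted(scored, key=lambda t: -t[1])[:k]
```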
This suggests that the embeddings are qualitatively shown to capture semantic changes across different conditions.", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 219, "text": "Fig. 1(b)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Qualitative Evaluation", "sec_num": "4.2" }, { "text": "We also perform a quantitative evaluation of the condition-specific embeddings on the task of semantic equivalence across condition values. The joint embedding training is to bring the time-or location-specific embeddings to the same vector space so that they are comparable. Therefore, one key aspect of embeddings that we can evaluate is their semantic equivalence over time and locations. Two datasets with temporally-and spatially-equivalent word pairs were used for this part.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative Evaluation", "sec_num": "4.3" }, { "text": "Temporal dataset. Yao et al. created two temporal testsets to examine the ability of the derived word embeddings to identify lexical equivalents over time (Yao et al., 2018) . For example, the word Clinton-1998 is semantically equivalent to the word Obama-2012, since Clinton was the US president in 1998 and Obama took office in 2012.", "cite_spans": [ { "start": 155, "end": 173, "text": "(Yao et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.3.1" }, { "text": "The first temporal testset was built on the basis of public knowledge about famous roles at different times such as the U.S. presidents in history. It consists of 11, 028 word pairs which are semantically equivalent across time. For a given word in specific time, we find the closest neighbors of the time-dependent embedding in a target year. The neighbors are taken as its equivalents at the target time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.3.1" }, { "text": "The second testset is about technologies and historical events. Annotators generated 445 conceptually equivalent word-time pairs such as twitter-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.3.1" }, { "text": "Temporal testset 1 Temporal testset 2 Metric MRR MP@1 MP@3 MP@5 MP@10 MRR MP@1 MP@3 MP@5 MP@10 BW2V 0. Spatial dataset. To evaluate the quality of location-specific embeddings, we created a dataset of 714 semantically equivalent word pairs in different locations based on public knowledge. For example, the capitals of different countries have a semantic correspondence, resulting in the word Ottawa-Canada that refers to the word Ottawa for Canada to be equivalent to the word Dublin-Ireland that refers to the word Dublin used for Ireland. Two annotators chose a set of categories such as capitals and governors and independently came up with equivalent word pairs in different regions. Later they went through the word pairs together and decided the one to include. We will release this dataset upon acceptance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "In line with prior work (Yao et al., 2018) , we use two evaluation metrics-mean reciprocal rank (MRR) and mean precision@k (MP@K)-to evaluate semantic equivalence on both temporal and spatial datasets. MRR. For each query word, we rank all neighboring words in terms of their cosine similarity to the query word in a given condition, and identify the rank of the correct equivalent word. 
We define r i as the rank of the correct word of the i-th query, and MRR for N queries is defined as", "cite_spans": [ { "start": 24, "end": 42, "text": "(Yao et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation metric", "sec_num": "4.3.2" }, { "text": "MRR = 1 N N X i=1 1 r i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation metric", "sec_num": "4.3.2" }, { "text": "Note that we only consider the top 10 words, and the inverse rank 1/r i of the correct word is set as 0 if it does not appear among the top 10 neighbors. MP@K. For each query, we consider the top-K words closest to the word in terms of cosine similarity in a given condition. If the correct word is included, we define the precision of the i-th query P@K i as 1, otherwise, P@K i = 0. MP@K for N queries is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation metric", "sec_num": "4.3.2" }, { "text": "MP@K = 1 N N X i=1 P @K i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation metric", "sec_num": "4.3.2" }, { "text": "Temporal testset. We report the ranking results on the two temporal testsets in Table 2 , and report results on the spatial testset in Table 3 . Our condition-specific word embedding is denoted as CW2V in the tables. In the temporal testset 1, our model is consistently better than the three baselines BW2V, TW2V and AW2V, and is comparable to DW2V in all metrics. In the temporal tesetset 2, CW2V outperforms BW2V, TW2V and AW2V in all metrics and is comparable to DW2V with respect to precision in the top 1 and top 3 words, but falls behind DW2V in MP@5 and MP@10. This lower performance may actually be a misrepresentation of its actual performance, since the word pairs in testset 2 are generated based on human knowledge and is potentially more subjective than testset 1.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 87, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 135, "end": 142, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3.3" }, { "text": "As an illustration, consider the case of website-2014 in testset 2. Our embeddings show abc, nbc, cbs and magazine as semantically similar words in 1990. These words are reasonable results since a website acts as a news platform just like TV broadcasting companies and magazines. The ground truth neighbor of website-2014 is the word address. Another example is bitcoin-2015. The semantic neighbors of our embeddings are currency, monetary and stocks in 1992. These words are semantically similar to bitcoin in the sense that bitcoin is cryptocurrency and a form of electronic cash. However, the ground truth is investment in the testset. Spatial testset. Considering the evaluation on the spatial testset in Table 3 , our condition-specific embedding achieves the best performance in finding semantic equivalents across regions. We note that the approaches which align independently trained embeddings such as TW2V and AW2V have poor performance. Due to the disparity in word distributions across regions in the ICE corpus, words with high frequency in one region may seldom be seen in another region. These infrequent words tend to have low-quality embeddings. 
It hurts the accurate alignment between locations and further degrades the performance of location-specific embeddings.", "cite_spans": [], "ref_spans": [ { "start": 709, "end": 716, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3.3" }, { "text": "DW2V, the jointly trained embedding, does not perform well on the spatial testset. It puts alignment constraints on word embeddings between two regions to prevent major changes of word embeddings across regions. This may lead to an interference between regional embeddings especially in cases where there is a frequency disparity of the same word in different regional corpora. In such cases, the embedding of the frequent word in one region will be affected by the weak embedding of the same word occurring infrequently in another region. Our model decomposes a word embedding into three components: a condition-independent component, a condition vector, and a deviation vector. The condition vector for each region takes care of the regional disparity, while the conditionindependent vectors are not affected. Therefore, our model is more robust to such disparity in learning conditioned embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3.3" }, { "text": "We studied a model to enrich word embeddings with temporal and spatial information and showed how it explicitly encodes lexical semantic properties into the geometry of the embedding. We then empirically demonstrated how the model captures language evolution across time and location. We leave it to future work to explore concrete downstream applications, where these time-and locationsensitive embeddings can be fruitfully used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "All data and code pertaining to this study are available at https://github.com/HongyuGong/ EnrichedWordRepresentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR)-a research collaboration as part of the IBM AI Horizons Network. We would like to thank the anonymous reviewers for their constructive comments and suggestions. We also thank Daniel Polyakov and Yuchen Li for the data annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A latent variable model approach to pmi-based word embeddings", "authors": [ { "first": "Sanjeev", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Yuanzhi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yingyu", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Tengyu", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Andrej", "middle": [], "last": "Risteski", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "385--399", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to pmi-based word embeddings. 
Transac- tions of the Association for Computational Linguis- tics, 4:385-399.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dynamic word embeddings", "authors": [ { "first": "Robert", "middle": [], "last": "Bamler", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Mandt", "suffix": "" } ], "year": 2017, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "380--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In International Conference on Machine Learning, pages 380-389.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Distributed representations of geographically situated language", "authors": [ { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "828--834", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Bamman, Chris Dyer, and Noah A Smith. 2014. Distributed representations of geographically situ- ated language. In Proceedings of the 52nd Annual Meeting of the Association for Computational Lin- guistics, volume 2, pages 828-834.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Why do new meanings occur? a cognitive typology of the motivations for lexical semantic change. Historical semantics and cognition", "authors": [ { "first": "Andreas", "middle": [], "last": "Blank", "suffix": "" } ], "year": 1999, "venue": "", "volume": "13", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Blank. 1999. Why do new meanings occur? a cognitive typology of the motivations for lexical se- mantic change. Historical semantics and cognition, 13:6.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Localization of web content", "authors": [ { "first": "Daniel", "middle": [], "last": "Brandon", "suffix": "" } ], "year": 2001, "venue": "Journal of Computing Sciences in Colleges", "volume": "17", "issue": "2", "pages": "345--358", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Brandon Jr. 2001. Localization of web con- tent. Journal of Computing Sciences in Colleges, 17(2):345-358.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Predicting the present with google trends", "authors": [ { "first": "Hyunyoung", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Varian", "suffix": "" } ], "year": 2012, "venue": "Economic Record", "volume": "88", "issue": "", "pages": "2--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hyunyoung Choi and Hal Varian. 2012. Predicting the present with google trends. 
Economic Record, 88:2- 9.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Time-out: Temporal referencing for robust modeling of lexical semantic change", "authors": [ { "first": "Haim", "middle": [], "last": "Dubossarsky", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Hengchen", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Tahmasebi", "suffix": "" }, { "first": "Dominik", "middle": [], "last": "Schlechtweg", "suffix": "" } ], "year": 2019, "venue": "The 57th Annual Meeting of the Association for Computational Linguistics (ACL2019) Proceedings of the Conference. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haim Dubossarsky, Simon Hengchen, Nina Tahmasebi, Dominik Schlechtweg, et al. 2019. Time-out: Tem- poral referencing for robust modeling of lexical semantic change. In The 57th Annual Meeting of the Association for Computational Linguistics (ACL2019) Proceedings of the Conference. ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A latent variable model for geographic lexical variation", "authors": [ { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "", "middle": [], "last": "Connor", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1277--1287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Eisenstein, Brendan O'Connor, Noah A Smith, and Eric P Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the 2010 conference on empirical methods in natural language processing, pages 1277-1287.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Analysing lexical semantic change with contextualised word representations", "authors": [ { "first": "Mario", "middle": [], "last": "Giulianelli", "suffix": "" }, { "first": "Marco", "middle": [ "Del" ], "last": "Tredici", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "2020", "issue": "", "pages": "3960--3973", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mario Giulianelli, Marco Del Tredici, and Raquel Fern\u00e1ndez. 2020. Analysing lexical semantic change with contextualised word representations. 
In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 3960-3973. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Embedding syntax and semantics of prepositions via tensor decomposition", "authors": [ { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Pramod", "middle": [], "last": "Viswanath", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "896--906", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyu Gong, Suma Bhat, and Pramod Viswanath. 2018. Embedding syntax and semantics of prepo- sitions via tensor decomposition. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 896-906.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Diachronic word embeddings reveal statistical laws of semantic change", "authors": [ { "first": "Jure", "middle": [], "last": "William L Hamilton", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Leskovec", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1489--1501", "other_ids": {}, "num": null, "urls": [], "raw_text": "William L Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statisti- cal laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics, pages 1489-1501.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Diachronic sense modeling with deep contextualized word embeddings: An ecological view", "authors": [ { "first": "Renfen", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Shen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shichen", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3899--3908", "other_ids": {}, "num": null, "urls": [], "raw_text": "Renfen Hu, Shen Li, and Shichen Liang. 2019. Di- achronic sense modeling with deep contextualized word embeddings: An ecological view. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3899-3908.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural temporality adaptation for document classification: Diachronic word embeddings and domain adaptation models", "authors": [ { "first": "Xiaolei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Paul", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4113--4123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaolei Huang and Michael Paul. 2019. Neural tem- porality adaptation for document classification: Di- achronic word embeddings and domain adaptation models. 
In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4113-4123.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The handbook of world Englishes", "authors": [ { "first": "B", "middle": [], "last": "Braj", "suffix": "" }, { "first": "Yamuna", "middle": [], "last": "Kachru", "suffix": "" }, { "first": "", "middle": [], "last": "Kachru", "suffix": "" }, { "first": "L", "middle": [], "last": "Cecil", "suffix": "" }, { "first": "", "middle": [], "last": "Nelson", "suffix": "" }, { "first": "R", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Zoya G", "middle": [], "last": "Davis", "suffix": "" }, { "first": "", "middle": [], "last": "Proshina", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Braj B Kachru, Yamuna Kachru, Cecil L Nelson, Daniel R Davis, and Zoya G Proshina. 2006. The handbook of world Englishes. Wiley Online Library.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Statistically significant detection of linguistic change", "authors": [ { "first": "Vivek", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "625--635", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detec- tion of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, pages 625-635.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Freshman or fresher? quantifying the geographic variation of language in online social media", "authors": [ { "first": "Vivek", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2016, "venue": "Tenth International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vivek Kulkarni, Bryan Perozzi, and Steven Skiena. 2016. Freshman or fresher? quantifying the geo- graphic variation of language in online social media. In Tenth International AAAI Conference on Web and Social Media.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Diachronic word embeddings and semantic shifts: a survey", "authors": [ { "first": "Andrey", "middle": [], "last": "Kutuzov", "suffix": "" }, { "first": "Lilja", "middle": [], "last": "\u00d8vrelid", "suffix": "" }, { "first": "Terrence", "middle": [], "last": "Szymanski", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Velldal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1384--1397", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrey Kutuzov, Lilja \u00d8vrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embed- dings and semantic shifts: a survey. 
In Proceedings of the 27th International Conference on Computa- tional Linguistics, pages 1384-1397.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Ceecing the baseline: Lexical stability and significant change in a historical corpus", "authors": [ { "first": "Jefrey", "middle": [], "last": "Lijffijt", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "S\u00e4ily", "suffix": "" }, { "first": "Terttu", "middle": [], "last": "Nevalainen", "suffix": "" } ], "year": 2012, "venue": "Studies in Variation", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jefrey Lijffijt, Tanja S\u00e4ily, and Terttu Nevalainen. 2012. Ceecing the baseline: Lexical stability and signifi- cant change in a historical corpus. In Studies in Vari- ation, Contacts and Change in English, volume 10.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "World Englishes: The study of new linguistic varieties", "authors": [ { "first": "Rajend", "middle": [], "last": "Mesthrie", "suffix": "" }, { "first": "M", "middle": [], "last": "Rakesh", "suffix": "" }, { "first": "", "middle": [], "last": "Bhatt", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajend Mesthrie and Rakesh M Bhatt. 2008. World En- glishes: The study of new linguistic varieties. Cam- bridge University Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Quantitative analysis of culture using millions of digitized books", "authors": [ { "first": "Jean-Baptiste", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Yuan Kui", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Aviva", "middle": [], "last": "Presser Aiden", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Veres", "suffix": "" }, { "first": "K", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "", "middle": [], "last": "Gray", "suffix": "" }, { "first": "P", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "Dale", "middle": [], "last": "Pickett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Hoiberg", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clancy", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Norvig", "suffix": "" }, { "first": "", "middle": [], "last": "Orwant", "suffix": "" } ], "year": 2011, "venue": "science", "volume": "331", "issue": "6014", "pages": "176--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K Gray, Joseph P Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, et al. 2011. Quantitative analysis of culture using millions of digitized books. science, 331(6014):176-182.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. 
Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "NAACL 2018", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In NAACL 2018, volume 1, pages 2227- 2237.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A method for automatic detection and manual localization of content-based translation errors and shifts", "authors": [ { "first": "Eric Andr\u00e9", "middle": [], "last": "Poirier", "suffix": "" } ], "year": 2014, "venue": "Journal of Innovation in Digital Ecosystems", "volume": "1", "issue": "1-2", "pages": "38--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Andr\u00e9 Poirier. 2014. A method for automatic detection and manual localization of content-based translation errors and shifts. Journal of Innovation in Digital Ecosystems, 1(1-2):38-46.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "When and why are pre-trained word embeddings useful for neural machine translation", "authors": [ { "first": "Ye", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Devendra", "middle": [], "last": "Sachan", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Felix", "suffix": "" } ], "year": 2018, "venue": "NAACL 2018", "volume": "2", "issue": "", "pages": "529--535", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Pad- manabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neu- ral machine translation? 
In NAACL 2018, volume 2, pages 529-535.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Deep neural models of semantic shift", "authors": [ { "first": "Alex", "middle": [], "last": "Rosenfeld", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "474--484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Rosenfeld and Katrin Erk. 2018. Deep neural models of semantic shift. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 474-484.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Nonlinear dimensionality reduction by locally linear embedding. science", "authors": [ { "first": "T", "middle": [], "last": "Sam", "suffix": "" }, { "first": "Lawrence K", "middle": [], "last": "Roweis", "suffix": "" }, { "first": "", "middle": [], "last": "Saul", "suffix": "" } ], "year": 2000, "venue": "", "volume": "290", "issue": "", "pages": "2323--2326", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sam T Roweis and Lawrence K Saul. 2000. Nonlin- ear dimensionality reduction by locally linear em- bedding. science, 290(5500):2323-2326.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Semeval-2020 task 1: Unsupervised lexical semantic change detection", "authors": [ { "first": "Dominik", "middle": [], "last": "Schlechtweg", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Mcgillivray", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Hengchen", "suffix": "" }, { "first": "Haim", "middle": [], "last": "Dubossarsky", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Tahmasebi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.11464" ] }, "num": null, "urls": [], "raw_text": "Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. Semeval-2020 task 1: Unsupervised lex- ical semantic change detection. arXiv preprint arXiv:2007.11464.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Analyzing linguistic variation: Statistical models and methods", "authors": [ { "first": "Carmen", "middle": [], "last": "Silva-Corval\u00e1n", "suffix": "" } ], "year": 2006, "venue": "Journal of Linguistic Anthropology", "volume": "16", "issue": "2", "pages": "295--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carmen Silva-Corval\u00e1n. 2006. Analyzing linguistic variation: Statistical models and methods. Journal of Linguistic Anthropology, 16(2):295-296.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A state-of-the-art of semantic change computation", "authors": [ { "first": "Xuri", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2018, "venue": "Natural Language Engineering", "volume": "24", "issue": "5", "pages": "649--676", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuri Tang. 2018. A state-of-the-art of semantic change computation. 
Natural Language Engineer- ing, 24(5):649-676.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Semantics: an introduction to the science of meaning", "authors": [ { "first": "", "middle": [], "last": "Stephen Ullmann", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Ullmann. 1979. Semantics: an introduction to the science of meaning.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Dynamic word embeddings for evolving semantic discovery", "authors": [ { "first": "Zijun", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Yifan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Weicong", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "673--681", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zijun Yao, Yifan Sun, Weicong Ding, Nikhil Rao, and Hui Xiong. 2018. Dynamic word embeddings for evolving semantic discovery. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 673-681. ACM.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The past is not a foreign country: Detecting semantically similar terms across time", "authors": [ { "first": "Yating", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Jatowt", "suffix": "" }, { "first": "S", "middle": [], "last": "Sourav", "suffix": "" }, { "first": "Katsumi", "middle": [], "last": "Bhowmick", "suffix": "" }, { "first": "", "middle": [], "last": "Tanaka", "suffix": "" } ], "year": 2016, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "28", "issue": "10", "pages": "2793--2807", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yating Zhang, Adam Jatowt, Sourav S Bhowmick, and Katsumi Tanaka. 2016. The past is not a for- eign country: Detecting semantically similar terms across time. IEEE Transactions on Knowledge and Data Engineering, 28(10):2793-2807.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Word \"president\" and its neighbors across locations.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "The trajectory of word embeddings over time and locations.", "uris": null }, "TABREF0": { "type_str": "table", "content": "", "text": "Stable Words across Time and Locations", "num": null, "html": null }, "TABREF2": { "type_str": "table", "content": "
Metric  MRR   MP@1  MP@3  MP@5  MP@10
BW2V    0.25  0.20  0.27  0.29  0.35
TW2V    0.00  0.00  0.00  0.00  0.00
AW2V    0.17  0.11  0.18  0.24  0.33
DW2V    0.12  0.11  0.11  0.13  0.14
CW2V    0.31  0.24  0.35  0.39  0.46
", "text": "Ranking Results on Temporal Testsets", "num": null, "html": null }, "TABREF3": { "type_str": "table", "content": "
2012 and newspaper-1990. Here the equivalence is functional, considering that Twitter played the role of an information dissemination platform in 2012 just as the newspaper did in 1990.
", "text": "", "num": null, "html": null } } } }