{ "paper_id": "P16-1048", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:57:26.891983Z" }, "title": "CSE: Conceptual Sentence Embeddings based on Attention Model", "authors": [ { "first": "Yashen", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "yswang@bit.edu.cn" }, { "first": "Heyan", "middle": [], "last": "Huang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Chong", "middle": [], "last": "Feng", "suffix": "", "affiliation": {}, "email": "fengchong@bit.edu.cn" }, { "first": "Qiang", "middle": [], "last": "Zhou", "suffix": "", "affiliation": {}, "email": "qzhou@bit.edu.cn" }, { "first": "Jiahui", "middle": [], "last": "Gu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Xiong", "middle": [], "last": "Gao", "suffix": "", "affiliation": {}, "email": "gaoxiong@bit.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Most sentence embedding models typically represent each sentence only using word surface, which makes these models indiscriminative for ubiquitous homonymy and polysemy. In order to enhance representation capability of sentence, we employ conceptualization model to assign associated concepts for each sentence in the text corpus, and then learn conceptual sentence embedding (CSE). Hence, this semantic representation is more expressive than some widely-used text representation models such as latent topic model, especially for short-text. Moreover, we further extend CSE models by utilizing a local attention-based model that select relevant words within the context to make more efficient prediction. In the experiments, we evaluate the CSE models on two tasks, text classification and information retrieval. The experimental results show that the proposed models outperform typical sentence embedding models.", "pdf_parse": { "paper_id": "P16-1048", "_pdf_hash": "", "abstract": [ { "text": "Most sentence embedding models typically represent each sentence only using word surface, which makes these models indiscriminative for ubiquitous homonymy and polysemy. In order to enhance representation capability of sentence, we employ conceptualization model to assign associated concepts for each sentence in the text corpus, and then learn conceptual sentence embedding (CSE). Hence, this semantic representation is more expressive than some widely-used text representation models such as latent topic model, especially for short-text. Moreover, we further extend CSE models by utilizing a local attention-based model that select relevant words within the context to make more efficient prediction. In the experiments, we evaluate the CSE models on two tasks, text classification and information retrieval. The experimental results show that the proposed models outperform typical sentence embedding models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many natural language processing applications require the input text to be represented as a fixedlength feature, of which sentence representation is very important. Perhaps the most common fixedlength vector representation for texts is the bag-ofwords or bag-of-n-grams (Harris, 1970) . However, they suffer severely from data sparsity and high dimensionality, and have very little sense about the semantics of words or the distances between the words. 
Recently, in sentence representation and classification, deep neural network (DNN) approaches have achieved state-of-the-art results (Le * The contact author. and Mikolov, 2014; Liu et al., 2015; Palangi et al., 2015; Wieting et al., 2015) . Despite of their usefulness, recent sentence embeddings face several challenges: (i) Most sentence embedding models represent each sentence only using word surface, which makes these models indiscriminative for ubiquitous polysemy; (ii) For short-text, however, neither parsing nor topic modeling works well because there are simply not enough signals in the input; (iii) Setting window size of context words is very difficult. To solve these problems, we must derive more semantic signals from the input sentence, e.g., concepts. Besides, we should assigned different attention for different contextual word, to enhance the influence of words that are relevant for each prediction.", "cite_spans": [ { "start": 270, "end": 284, "text": "(Harris, 1970)", "ref_id": "BIBREF5" }, { "start": 612, "end": 630, "text": "and Mikolov, 2014;", "ref_id": "BIBREF7" }, { "start": 631, "end": 648, "text": "Liu et al., 2015;", "ref_id": "BIBREF9" }, { "start": 649, "end": 670, "text": "Palangi et al., 2015;", "ref_id": "BIBREF16" }, { "start": 671, "end": 692, "text": "Wieting et al., 2015)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper proposed Conceptual Sentence Embedding (CSE), an unsupervised framework that learns continuous distributed vector representations for sentence. Specially, by innovatively introducing concept information, this concept-level vector representations of sentence are learned to predict the surrounding words or target word in contexts. Our research is inspired by the recent work in learning vector representations of words using deep learning strategy (Mikolov et al., 2013a; Le and Mikolov, 2014) . More precisely, we first obtain concept distribution of the sentence, and generate corresponding concept vector. Then we concatenate or average the sentence vector, contextual word vectors with concept vector of the sentence, and predict the target word in the given context. All of the sentence vectors and word vectors are trained by the stochastic gradient descent and backpropagation (Rumelhart et al., 1986) . At prediction time, sentence vectors are inferred by fixing the word vectors and observed sentence vectors, and training the new sentence vector until convergence.", "cite_spans": [ { "start": 459, "end": 482, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF12" }, { "start": 483, "end": 504, "text": "Le and Mikolov, 2014)", "ref_id": "BIBREF7" }, { "start": 895, "end": 919, "text": "(Rumelhart et al., 1986)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In parallel, the concept of attention has gained popularity recently in neural natural language processing researches, which allowing models to learn alignments between different modalities (Bahdanau et al., 2014; Bansal et al., 2014; Rush et al., 2015) . In this work, we further propose the extensions to CSE, which adds an attention model that considers contextual words differently depending on the word type and its relative position to the predicted word. The main intuition behind the extended model is that prediction of a word is mainly dependent on certain words surrounding it. 
In summary, the basic idea of CSE is that, we allow each word to have different embeddings under different concepts. Taking word apple into consideration, it may indicate a fruit under the concept food, and indicate an IT company under the concept information technology. Hence, concept information significantly contributes to the discriminative of sentence vector. Moreover, an important advantage of the proposed conceptual sentence embeddings is that they could be learned from unlabeled data. Another advantage is that we take the word order into account, in the same way of ngram model, while bag-of-n-grams model would create a very high-dimensional representation that tends to generalize poorly.", "cite_spans": [ { "start": 190, "end": 213, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF1" }, { "start": 214, "end": 234, "text": "Bansal et al., 2014;", "ref_id": "BIBREF2" }, { "start": 235, "end": 253, "text": "Rush et al., 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To summarize, this work contributes on the following aspects: We integrate concepts and attention-based strategy into basic sentence embedding representation, and allow the resulting conceptual sentence embedding to model different meanings of a word under different concept. The experimental results on text classification task and information retrieval task demonstrate that this concept-level sentence representation is robust. The outline of the paper is as follows. Section 2 surveys related researches. Section 3 formally de-scribes the proposed model of conceptual sentence embedding. Corresponding experimental results are shown in Section 4. Finally, we conclude the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Conventionally, one-hot sentence representation has been widely used as the basis of bag-of-words (BOW) text model. However, it can-not take the semantic information into consideration. Recently, in sentence representation and classification, deep neural network approaches have achieved state-of-the-art results (Le and Mikolov, 2014; Liu et al., 2015; Ma et al., 2015; Palangi et al., 2015; Wieting et al., 2015) , most of which are inspired by word embedding (Mikolov et al., 2013a) . (Le and Mikolov, 2014) proposed the paragraph vector (PV) that learns fixed-length representations from variable-length pieces of texts. Their model represents each document by a dense vector which is trained to predict words in the document. However, their model depends only on word surface, ignoring semantic information such as topics or concepts. 
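For reference, the PV baseline can be sketched in a few lines; the snippet below is our own minimal rendering using the gensim library's Doc2Vec implementation (a toolkit choice made purely for illustration, not the code released with PV), and it highlights that the learned vectors are driven by word surface alone, so the company sense and the fruit sense of apple share one word vector.

# Minimal PV-style baseline sketched with gensim's Doc2Vec; the toolkit and
# hyperparameters are illustrative assumptions, not the original implementation.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

sentences = [
    'microsoft unveils office for apple ipad'.split(),
    'apple juice is a popular drink'.split(),
]
corpus = [TaggedDocument(words=s, tags=[i]) for i, s in enumerate(sentences)]

model = Doc2Vec(vector_size=50, window=5, min_count=1, dm=1, epochs=40)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Only the surface form is modeled: the company sense and the fruit sense of
# 'apple' above share a single word vector, with no concept to separate them.
new_vector = model.infer_vector('apple releases a new ipad'.split())
print(new_vector.shape)  # (50,)
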
In this paper, we extent PV by introducing concept information.", "cite_spans": [ { "start": 321, "end": 335, "text": "Mikolov, 2014;", "ref_id": "BIBREF7" }, { "start": 336, "end": 353, "text": "Liu et al., 2015;", "ref_id": "BIBREF9" }, { "start": 354, "end": 370, "text": "Ma et al., 2015;", "ref_id": "BIBREF11" }, { "start": 371, "end": 392, "text": "Palangi et al., 2015;", "ref_id": "BIBREF16" }, { "start": 393, "end": 414, "text": "Wieting et al., 2015)", "ref_id": "BIBREF28" }, { "start": 462, "end": 485, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF12" }, { "start": 488, "end": 510, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Aiming at enhancing discriminativeness for ubiquitous polysemy, (Liu et al., 2015) employed latent topic models to assign topics for each word in the text corpus, and learn topical word embeddings (TWE) and sentence embeddings based on both words and their topics. Besides, to combine deep learning with linguistic structures, many syntax-based embedding algorithms have been proposed (Severyn et al., 2014; Wang et al., 2015b) to utilize long-distance dependencies. However, short-texts usually do not observe the syntax of a written language, nor do they contain enough signals for statistical inference (e.g., topic model). Therefore, neither parsing nor topic modeling works well because there are simply not enough signals in the input, and we must derive more semantic signals from the input, e.g., concepts, which have been demonstrated effective in knowledge representation (Wang et al., 2015c; . Shot-text conceptualization, is an interesting task to infer the most likely concepts for terms in the short-text, which could help better make sense of text data, and extend the texts with categorical or topical information (Song et al., 2011) . Therefore, our models utilize shorttext conceptualization algorithm to discriminate concept-level sentence senses and provide a good performance on short-texts. Recently, attention model has been used to improve many neural natural language pro-cessing researches by selectively focusing on parts of the source data (Bahdanau et al., 2014; Bansal et al., 2014; Wang et al., 2015a) . To the best of our knowledge, there has not been any other work exploring the use of attentional mechanism for sentence embeddings.", "cite_spans": [ { "start": 64, "end": 82, "text": "(Liu et al., 2015)", "ref_id": "BIBREF9" }, { "start": 385, "end": 407, "text": "(Severyn et al., 2014;", "ref_id": "BIBREF20" }, { "start": 408, "end": 427, "text": "Wang et al., 2015b)", "ref_id": "BIBREF26" }, { "start": 882, "end": 902, "text": "(Wang et al., 2015c;", "ref_id": "BIBREF27" }, { "start": 1130, "end": 1149, "text": "(Song et al., 2011)", "ref_id": "BIBREF23" }, { "start": 1468, "end": 1491, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF1" }, { "start": 1492, "end": 1512, "text": "Bansal et al., 2014;", "ref_id": "BIBREF2" }, { "start": 1513, "end": 1532, "text": "Wang et al., 2015a)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "This paper proposes four conceptual sentence embedding models. The first one is based on continu-ous bag-of-word model (denoted as CSE-1) which have not taken word order into consideration. To overcome this drawback, its extension model (denoted as CSE-2), which is based on Skip-Gram model, is proposed. 
Based on the basic conceptual sentence embedding models above, we obtain their variants (aCSE-1 and aCSE-2) by introducing attention model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conceptual Sentence Embedding", "sec_num": "3" }, { "text": "As inspiration of the proposed conceptual sentence embedding models, we start by discussing previous models for learning word vectors (Mikolov et al., 2013a; Mikolov et al., 2013b) firstly.", "cite_spans": [ { "start": 134, "end": 157, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF12" }, { "start": 158, "end": 180, "text": "Mikolov et al., 2013b)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "Let us overview the framework of Continuous Bag-of-Words (CBOW) firstly, which is shown in Figure 1 (a). Each word is typically mapped to an unique vector, represented by a column in a word matrix W \u2208 d * |V | . Wherein, V denotes the word vocabulary and d is embedding dimension of word. The column is indexed by position of the word in V . The concatenation or average of the vectors, the context vector w t , is then used as features for predicting the target word in the current context. Formally, Given a sentence S = {w 1 , w 2 , . . . , w l }, the objective of CBOW is to maximize the average log probability:", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 99, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "L(S)= 1 (l\u22122k\u22122) l\u2212k t=k+1 log P r(wt|w t\u2212k ,\u2022\u2022\u2022,w t+k ) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "Wherein, k is the context windows size of target word w t . The prediction task is typically done via a softmax function, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P r(w t |w t\u2212k , \u2022 \u2022 \u2022 , w t+k ) = e yw t w i \u2208V e yw i", "eq_num": "(2)" } ], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "Each of y ( w t ) is an un-normalized logprobability for each target word w t , as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y wt = Uh(w t\u2212k , . . . , w t+k ); W) + b", "eq_num": "(3)" } ], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "Wherein, U and b are softmax parameters. And h(\u2022) is constructed by a concatenation or average of word vectors {w t\u2212k , . . . , w t+k } extracted from word matrix W according to {w t\u2212k , . . . , w t+k }. For illustration purposes, we utilize average here. 
On the condition of average, the context vector c t is obtained by averaging the embeddings of each word, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "c t = 1 2k \u2212k\u2264c\u2264k,c =0 w t+c (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "The framework of Skip-Gram (Figure 1 (b)) aims to predict context words given a target word w t in a sliding window, instead of predicting the current word based on its context. Formally, given a sentence S = {w 1 , w 2 , . . . , w l }, the objective of Skip-Gram is to maximize the following average log probability:", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 36, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "L(S)= 1 (l\u22122k) l\u2212k t=k+1 \u2212k\u2264c\u2264k,c =0 log P r(w t+c |wt) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "Wherein, w t and w c are respectively the vector representations of the target word w t and the context word w c . Usually, during the training stage of CBOW and Skip-Gram: (i) in order to make the models efficient for learning, the techniques of hierarchical softmax and negative sampling are used to ensure the models efficient for learning (Morin and Bengio, 2005; Mikolov et al., 2013a) ; (ii) the word vectors are trained by using stochastic gradient descent where the gradient is obtained via backpropagation (Rumelhart et al., 1986) . After the training stage converges, words with similar meaning are mapped to a similar position in the semantic vector space. e.g., 'powerful' and 'strong' are close to each other. ", "cite_spans": [ { "start": 343, "end": 367, "text": "(Morin and Bengio, 2005;", "ref_id": "BIBREF14" }, { "start": 368, "end": 390, "text": "Mikolov et al., 2013a)", "ref_id": "BIBREF12" }, { "start": 515, "end": 539, "text": "(Rumelhart et al., 1986)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "W w t-k w t-k+1 w t+k-1 w t+k \u2026 W W W w t w t W w t-k w t-k+1 w t+k-1 w t+k \u2026 (a) (b)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CBOW Model & Skip-Gram Model", "sec_num": "3.1" }, { "text": "Intuitively, the proposed (attention-based) conceptual sentence embedding model for learning sentence vectors, is inspired by the methods for learning the word vectors. The inspiration is that, in researches of word embeddings: (i) The word vectors are asked to contribute to a prediction task about the target word or the surrounding words in the context; (ii) The word representation vectors are initialized randomly, however they could finally capture precise semantics as an indirect result. Therefore, we will utilize this idea in our sentence vectors in a similar manner: The conceptassociated sentence vectors are also asked to contribute to the prediction task of the target word or surrounding words in given contextual text windows. Furthermore, attention model will attribute different influence value to different contextual words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CSE based on CBOW Model", "sec_num": "3.2" }, { "text": "We describe the first conceptual sentence embedding model, denoted as CSE-1, which is based on CBOW. 
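As a point of reference before the conceptual extension, the sketch below (a toy NumPy rendering under our own simplifying assumptions: random parameters and a full softmax in place of hierarchical softmax) spells out the plain CBOW prediction step of Eqs. (2)-(4); CSE-1 changes only how the hidden representation h(.) is constructed.

# Toy CBOW prediction step (Eqs. 2-4): average the context word vectors,
# apply an affine map and a softmax over the vocabulary.
# The full softmax and random parameters are simplifications for illustration.
import numpy as np

rng = np.random.default_rng(0)
V, d, k = 1000, 50, 2                   # vocabulary size, embedding dimension, half window
W = rng.normal(scale=0.1, size=(d, V))  # word matrix, one column per word
U = rng.normal(scale=0.1, size=(V, d))  # softmax weights
b = np.zeros(V)                         # softmax bias

def predict_target(context_ids):
    # h(.) built by averaging the 2k context columns of W (Eq. 4)
    h = W[:, context_ids].mean(axis=1)
    y = U @ h + b                       # un-normalized log-probabilities (Eq. 3)
    e = np.exp(y - y.max())
    return e / e.sum()                  # Pr(w_t | w_{t-k}, ..., w_{t+k}) (Eq. 2)

context = [3, 17, 42, 7]                # ids of w_{t-2}, w_{t-1}, w_{t+1}, w_{t+2}
probs = predict_target(context)
print(probs.shape, probs.sum())         # (1000,) 1.0
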
In the framework of CSE-1 ( Figure 2 (a)), each sentence, denoted by sentence ID, is mapped to a unique vector s, represented by a column in matrix S. And its concept distribution \u03b8 C are generated from a knowledge-based text conceptualization algorithm (Wang et al., 2015c) . Moreover, similar to word embedding methods, each word w i is also mapped to a unique vector w i , represented by a column in matrix W. The surrounding words in contextual text window {w t\u2212k , . . . , w t+k }, sentence ID and concept distribution \u03b8 C corresponding to this sentence are the inputs. Besides, C is a fixed linear operator similar to the one used in (Huang et al., 2013) that converts the concept distribution \u03b8 C to a concept vector, denoted as c. Note that, this makes our model very different from (Le and Mikolov, 2014) where no concept information is used, and experimental results demonstrate the efficiency of introducing concept information. It is clear that CSE-1 also does not take word order into consideration just like CBOW.", "cite_spans": [ { "start": 356, "end": 376, "text": "(Wang et al., 2015c)", "ref_id": "BIBREF27" }, { "start": 742, "end": 762, "text": "(Huang et al., 2013)", "ref_id": "BIBREF6" }, { "start": 901, "end": 915, "text": "Mikolov, 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 129, "end": 138, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "CSE based on CBOW Model", "sec_num": "3.2" }, { "text": "Afterward, the sentence vector s, surrounding word vectors {w t\u2212k , . . . , w t+k } and the concept vector c are concatenated or averaged to predict the target word w t in current context. In reality, the only change in this model compared to the word embedding method is in Eq. 3, where h(\u2022) is constructed from not only W but also C and S. Note that, the sentence vector is shared across all contexts generated from the same sentence but not across sentences. Wherein, the contexts are fixedlength (length is 2k) and sampled from a sliding window over the current sentence. However, the word matrix W is shared across sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CSE based on CBOW Model", "sec_num": "3.2" }, { "text": "In summary, the procedure of CSE-1 itself is described as follows. A probabilistic conceptualization algorithm (Wang et al., 2015c ) is employed here to obtain the corresponding concepts about given sentence: Firstly, we preprosess and segment the given sentence into a set of words;", "cite_spans": [ { "start": 111, "end": 130, "text": "(Wang et al., 2015c", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "CSE based on CBOW Model", "sec_num": "3.2" }, { "text": "Then, based on a probabilistic lexical knowledgebase Probase (Wu et al., 2012) , the heterogeneous semantic graph for these words and their corresponding concepts are constructed (Figure 3 shows an example); Finally, we utilize a simple iterative process to identify the most likely mapping from words to concepts. After efforts above, we could conceptualize words in given sentence, and access the concepts and corresponding probabilities, which is the concept distribution \u03b8 C mentioned before. 
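The projection step of CSE-1 can then be sketched as follows (again a simplified NumPy rendering under our own assumptions; the concept distribution theta_C is hard-coded here, whereas in the model it comes from the Probase-based conceptualization just described): the fixed operator C maps theta_C to a concept vector c, which is combined with the sentence vector s and the 2k context word vectors before the softmax.

# Toy projection layer of CSE-1 (simplified): the fixed operator C maps the
# concept distribution theta_C to a concept vector, which is averaged with the
# sentence vector and the 2k context word vectors before the softmax.
import numpy as np

rng = np.random.default_rng(1)
V, d, n_concepts, n_sents = 1000, 50, 20, 10
W = rng.normal(scale=0.1, size=(d, V))           # word matrix, shared across sentences
S = rng.normal(scale=0.1, size=(d, n_sents))     # sentence matrix, one column per sentence ID
C = rng.normal(scale=0.1, size=(d, n_concepts))  # fixed linear operator: distribution -> concept vector
U = rng.normal(scale=0.1, size=(V, d))           # softmax weights
b = np.zeros(V)                                  # softmax bias

def cse1_predict(sent_id, context_ids, theta_C):
    s = S[:, sent_id]                # sentence vector, shared by all windows of this sentence
    c = C @ theta_C                  # concept vector derived from the concept distribution
    ctx = W[:, context_ids]          # the 2k context word vectors
    # averaging variant of h(.); concatenation is the alternative, with a wider U
    h = np.column_stack([ctx, s[:, None], c[:, None]]).mean(axis=1)
    y = U @ h + b
    e = np.exp(y - y.max())
    return e / e.sum()               # distribution over the target word

theta_C = np.zeros(n_concepts)
theta_C[[2, 5]] = [0.7, 0.3]         # e.g. mostly 'company', partly 'device'
probs = cse1_predict(sent_id=0, context_ids=[3, 17, 42, 7], theta_C=theta_C)
print(probs.shape, probs.sum())      # (1000,) 1.0
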
Note that, the concept distribution yields an important influence on the entire framework of conceptual sentence embedding, by contributing greatly to the semantic representation.", "cite_spans": [ { "start": 61, "end": 78, "text": "(Wu et al., 2012)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 179, "end": 188, "text": "(Figure 3", "ref_id": null } ], "eq_spans": [], "section": "CSE based on CBOW Model", "sec_num": "3.2" }, { "text": "During the training stage, we aim at obtaining word matrix W, sentence matrix S, and softmax weights {U, b} on already observed sentences. The techniques of hierarchical softmax and negative sampling are used to make the model efficient for learning. W and S are trained using stochastic gradient descent: At each step of stochastic gradient descent, we sample a fixed-length context from the given sentence, compute the error gradient which is obtained via backpropagation, and then use the gradient to update the parameters. During the inferring stage, we get sentence vectors for new sentences (unobserved before) by adding more columns in S and gradient descending on S while holding W, U and b fixed. Finally, we use S to make a prediction about multi-labels by using a standard classifier in output layer. Figure 3: Semantic graph of example sentence microsoft unveils office for apples ipad. Rectangles indicate terms occurred in given sentence, and ellipses indicate concept defined in knowledge-base (e.g., Probase). Bule solid links indicate isA relationship between terms and concepts, and red dashed lines indicate correlation relationship between two concepts. Numerical values on the line is corresponding probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CSE based on CBOW Model", "sec_num": "3.2" }, { "text": "The above method considers the combination of the sentence vector with the surrounding word vectors and concept vector to predict the target word in given text window. However, it loss information about word order somehow, just like CBOW. In fact, there exists another for modeling the prediction procedure: we could ignore the context words in the input, but force the model to predict words randomly sampled from the fix-length contexts in the output. As is shown in Figure 2 (b), only sentence vector s and concept vector c are used to predict the next word in a text window. That means, contextual words are no longer used as inputs, whereas they become what the output layer predict. Hence, this model is similar to the Skip-Gram model in word embedding (Mikolov et al., 2013b) . In reality, what this means is that at each iteration of stochastic gradient descent, we sample a text window {w t\u2212k , . . . , w t+k }, then sample a random word from this text window and form a classification task given the sentence vector s and corresponding concept vector c.", "cite_spans": [ { "start": 759, "end": 782, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 469, "end": 477, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "CSE based on Skip-Gram Model", "sec_num": "3.3" }, { "text": "We denote this sort of conceptual sentence embedding model as CSE-2. The scheme of CSE-2 is similar to that of CSE-1 as described above. In addition to being conceptually simple, CSE-2 requires to store less data. 
We only need to store {U,b,S} as opposed to {U,b,S,W} in CSE-1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CSE based on Skip-Gram Model", "sec_num": "3.3" }, { "text": "As mentioned above, setting a good value for contextual window size k is difficult. Because a larger value of k may introduce a degenerative behavior in the model, and more effort is spent predict-ing words that are conditioned on unrelated words, while a smaller value of k may lead to cases where the window size is not large enough include words that are semantically related (Bansal et al., 2014; Wang et al., 2015a) . To solve these problems , we extend the proposed models by introducing attention model (Bahdanau et al., 2014; Rush et al., 2015) , by allowing it to consider contextual words within the window in a non-uniform way. For illustration purposes, we extend CSE-1 here with attention model. Following (Wang et al., 2015a) , we rewrite Eq.(4) as follows:", "cite_spans": [ { "start": 379, "end": 400, "text": "(Bansal et al., 2014;", "ref_id": "BIBREF2" }, { "start": 401, "end": 420, "text": "Wang et al., 2015a)", "ref_id": "BIBREF25" }, { "start": 510, "end": 533, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF1" }, { "start": 534, "end": 552, "text": "Rush et al., 2015)", "ref_id": "BIBREF18" }, { "start": 719, "end": 739, "text": "(Wang et al., 2015a)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "CSE based on Attention Model", "sec_num": "3.4" }, { "text": "c t = 1 2k \u2212k\u2264c\u2264k,c =0 a t+c (w t+c )w t+c (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CSE based on Attention Model", "sec_num": "3.4" }, { "text": "Wherein we replace the average of the surrounding word vectors in Eq.(4) with a weighted sum of the these vectors. That means, each contextual word w t+c is attributed a different attention level, representing how much the attention model believes whether it is important to look at in order to predict the target word w t . The attention factor a i (w i ) for word w i in position i is formulated as a softmax function over contextual words (Bahdanau et al., 2014) , as follows:", "cite_spans": [ { "start": 442, "end": 465, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "CSE based on Attention Model", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a i (w) = e d w,i + r i \u2212k\u2264c\u2264k,c =0 e dw,c + r c", "eq_num": "(7)" } ], "section": "CSE based on Attention Model", "sec_num": "3.4" }, { "text": "Wherein, d w,i is an element of matrix D \u2208 |V | * 2k , which is a set of parameters determining the importance of each word type in each relative position i (distance to the left/right of target word w t ). Moreover, r i , an element of R \u2208 2k , is a bias, which is conditioned only on the relative position i. Note that, attention models have been reported expensive for large tables in terms of storage and performance (Bahdanau et al., 2014; Wang et al., 2015a) . Nevertheless the computation consumption here is simple, and compute the attention of all words in the input requires 2k operations, as it simply requires retrieving on value from the lookup-matrix D for each word and one value from the bias vector R for each word in the context. 
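A minimal rendering of this attention computation (Eqs. (6)-(7)) is given below; the parameters D and R are random here purely for illustration, whereas in aCSE-1 they are learned jointly with the embeddings.

# Toy attention over a 2k-word context window (Eqs. 6-7): each context word gets
# a score from the lookup matrix D (word type x relative position) plus the
# position bias R; a softmax turns the scores into attention factors.
import numpy as np

rng = np.random.default_rng(2)
V, d, k = 1000, 50, 2
W = rng.normal(scale=0.1, size=(d, V))       # word matrix
D = rng.normal(scale=0.1, size=(V, 2 * k))   # per-word, per-position importance
R = rng.normal(scale=0.1, size=2 * k)        # per-position bias

def attended_context(context_ids):
    # positions 0..2k-1 correspond to offsets -k..-1, +1..+k around the target word
    scores = np.array([D[w, i] + R[i] for i, w in enumerate(context_ids)])
    a = np.exp(scores - scores.max())
    a /= a.sum()                             # attention factors a_i(w), Eq. (7)
    ctx = W[:, context_ids]                  # context word vectors
    c_t = (ctx @ a) / (2 * k)                # weighted context vector, Eq. (6)
    return c_t, a

c_t, weights = attended_context([3, 17, 42, 7])
print(weights, c_t.shape)                    # weights sum to 1; c_t has shape (d,)
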
Although this strategy may not be the best approach and there exist more elaborate attention models (Bahdanau et al., 2014; Luong et al., 2015) , the proposed attention model is a proper balance of computational efficiency and complexity.", "cite_spans": [ { "start": 421, "end": 444, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF1" }, { "start": 445, "end": 464, "text": "Wang et al., 2015a)", "ref_id": "BIBREF25" }, { "start": 848, "end": 871, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF1" }, { "start": 872, "end": 891, "text": "Luong et al., 2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "CSE based on Attention Model", "sec_num": "3.4" }, { "text": "Thus, besides {W,C,S} in CSE models, D and R are added into parameter set which relates to gradients of the loss function Eq.(1). All parameters are computed with backpropagation and updated after each training instance using a fixed learning rate. We denote the attention-based CSE-1 model above as aCSE-1. With limitation of space, attention variant of CSE-2, denoted as aCSE-2, is not described here, however the principle is similar to aCSE-1. ... \u03b8c c s Figure 4 : aCSE-1 model. The illustration of example sentence 'mcrosoft unveils office for apple's ipad' for predicting word 'apple'.", "cite_spans": [], "ref_spans": [ { "start": 459, "end": 467, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "CSE based on Attention Model", "sec_num": "3.4" }, { "text": "Taking example 'microsoft unveils office for apple's ipad' into consideration. The prediction of the polysemy word 'apple' by CSE-1 is shown in Figure 4 , and darker cycle cell indicate higher attention value. We could observe that preposition word 'for' tend to be attributed very low attention, while context words, especially noun-words which contribute much to conceptualization (such as 'ipad', 'office', and 'microsoft') are attributed higher weights as these word own more predictive power. Wherein, 'ipad' is assigned the highest attention value as it close to the predicted word and co-occurs with it more frequently.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 152, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "CSE based on Attention Model", "sec_num": "3.4" }, { "text": "As described before, concept distribution \u03b8 C yields a considerable influence on conceptual sentence embedding. This is because, each dimensionality of this distribution denotes the probability of the concept (topic or category) this sentence is respect to. In other words, the concept distribution is a solid semantic representation of the sentence. Nevertheless, the information in each dimensionality of sentence (or word) vector makes no sense. Hence, there exist a linear operator in CSE-1, CSE-2, aCSE-1, and aCSE-2, which transmit the concept distribution into word vector and sentence vector, as shown in Figure 2 ", "cite_spans": [], "ref_spans": [ { "start": 613, "end": 621, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "CSE based on Attention Model", "sec_num": "3.4" }, { "text": "In this section, we show experiments on two text understanding problems, text classification and information retrieval, to evaluate related models in several aspects. These tasks are always used to evaluate the performance of sentence embedding methods (Liu et al., 2015; Le and Mikolov, 2014) . 
The source codes and datasets of this paper are publicly available 1 .", "cite_spans": [ { "start": 253, "end": 271, "text": "(Liu et al., 2015;", "ref_id": "BIBREF9" }, { "start": 272, "end": 293, "text": "Le and Mikolov, 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "We utilize four datasets for training and evaluating. For text classification task, we use three datasets: NewsTile, TREC and Twitter. Dataset Tweet11 is used for evaluation in information retrieval task. Moreover, we construct dataset Wiki to fully train topic model-based models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "NewsTitle: The news articles are extracted from a large news corpus, which contains about one million articles searched from Web pages. We organize volunteers to classify these news articles manually into topics according its article content , and we select six topics: company, health, entertainment, food, politician, and sports. We randomly select 3,000 news articles in each topic, and only keep its title and its first one line of article. The average word count of titles is 9.41.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "TREC: It is the corpus for question classification on TREC (Li and Roth, 2002) , which is widely used as benchmark in text classification task. There are 5,952 sentences in the entire dataset, classified into the 6 categories as follows: person, abbreviation, entity, description, location and numeric.", "cite_spans": [ { "start": 59, "end": 78, "text": "(Li and Roth, 2002)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Tweet11: This is the official tweet collections used in TREC Microblog Task 2011 and 2012 (Ounis et al., 2011; Soboroff et al., 2012) . Using the official API, we crawled a set of local copies of the corpus. Our local Tweets11 collection has a sample of about 16 million tweets, and a set of 49 (TMB2011) and 60 (TMB2012) timestamped topics.", "cite_spans": [ { "start": 90, "end": 110, "text": "(Ounis et al., 2011;", "ref_id": "BIBREF15" }, { "start": 111, "end": 133, "text": "Soboroff et al., 2012)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Twitter: This dataset is constructed by manually labeling the previous dataset Tweet11. Similar to dataset NewsTitle, we ask our volunteers to label these tweets. After manually labeling, the dataset contains 12,456 tweets which are in four categories: company, country, entertainment, and device. The average length of the tweets is 13.16 words. Because of its noise and sparsity, this social media dataset is very challenging for the comparative models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Moreover, we also construct a Wikipedia dataset (denoted as Wiki) for training. We preprocess the Wikipedia articles 2 with the following rules. First, we remove the articles less than 100 words, as well as the articles less than 10 links. Then we remove all the category pages and disambiguation pages. Moreover, we move the content to the right redirection pages. 
Finally we obtain about 3.74 million Wikipedia articles for indexing and training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "We compare the proposed models with the following comparative algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternative Algorithms", "sec_num": "4.2" }, { "text": "BOW: It is a simple baseline which represents each sentence as bag-of-words, and uses TF-IDF scores (Salton and Mcgill, 1986) as features to generate sentence vector.", "cite_spans": [ { "start": 100, "end": 125, "text": "(Salton and Mcgill, 1986)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Algorithms", "sec_num": "4.2" }, { "text": "LDA: It represents each sentence as its topic distribution inferred by latent dirichlet allocation (Blei et al., 2003) . We train this model in two ways: (i) on both Wikipedia articles and the evaluation datasets above, and (ii) only on the evaluation datasets. We report the better of the two.", "cite_spans": [ { "start": 99, "end": 118, "text": "(Blei et al., 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Algorithms", "sec_num": "4.2" }, { "text": "PV: Paragraph Vector models are variablelength text embedding models, including the distributed memory model (PV-DM) and the distributed bag-of-words model (PV-DBOW). It has been reported to achieve the state-of-the-art performance on task of sentiment classification (Le and Mikolov, 2014), however it only utilizes word surface.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternative Algorithms", "sec_num": "4.2" }, { "text": "TWE: By taking advantage of topic model, it overcome ambiguity to some extent (Liu et al., 2015) . Typically, TWE learn topic models on training set. It further learn topical word embeddings using the training set, then generate sentence embeddings for both training set and testing set. (Liu et al., 2015) proposed three models for topical word embedding, and we present the best results here. Besides, We also train TWE in two ways like LDA.", "cite_spans": [ { "start": 78, "end": 96, "text": "(Liu et al., 2015)", "ref_id": "BIBREF9" }, { "start": 288, "end": 306, "text": "(Liu et al., 2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Algorithms", "sec_num": "4.2" }, { "text": "The details about parameter settings of the comparative algorithms are described in this section, respectively. For TWE, CSE-1, CSE-2 and their attention variants aCSE-1, and aCSE-2, the structure of the hierarchical softmax is a binary Huffman tree (Mikolov et al., 2013a; Mikolov et al., 2013b) , where short codes are assigned to frequent words. This is a good speedup trick because common words are accessed quickly (Le and Mikolov, 2014) .We set the dimensions of sentence, word, topic and concept embeddings as 5,000, which is like the number of concept clusters in Probase (Wu et al., 2012; Wang et al., 2015c) . Meanwhile, we have done many experiments on choosing the context window size (k). We perform experiments on increasing windows size from 3 to 11, and different size works differently on different dataset with different average length of short-texts. And we choose the result of windows size of 5 present here, because it performs best in almost datasets. 
Usually, in project layer, the sentence vector, the context vector and the concept vectors could be averaged or concatenated for combination to predict the next word in a context. We perform experiments following these two strategies respectively, and report the better of the two. In fact, the concatenation performs better since averaging different types of vectors may cause loss of information somehow.", "cite_spans": [ { "start": 250, "end": 273, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF12" }, { "start": 274, "end": 296, "text": "Mikolov et al., 2013b)", "ref_id": "BIBREF13" }, { "start": 428, "end": 442, "text": "Mikolov, 2014)", "ref_id": "BIBREF7" }, { "start": 580, "end": 597, "text": "(Wu et al., 2012;", "ref_id": "BIBREF29" }, { "start": 598, "end": 617, "text": "Wang et al., 2015c)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4.3" }, { "text": "For BOW and LDA, we remove stop words by using InQuery stop-word list. For BOW, we select top 50,000 words according to TF-IDF scores as features. For both LDA and TWE, in the text classification task, we set the topic number to be the cluster number or twice, and report the better of the two; while in the information retrieval task, we experimented with a varying number of topics from 100 to 500, which gives similar performance, and we report the final results of using 500 topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4.3" }, { "text": "In summary, we use the sentence vectors generated by each algorithm as features and run a linear classifier using Liblinear (Fan et al., 2010) for evaluation.", "cite_spans": [ { "start": 124, "end": 142, "text": "(Fan et al., 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4.3" }, { "text": "In this section, we run the multi-class text classification experiments on the dataset NewsTitle, Twitter, and TREC. We report precision, recall and F-measure for comparison (as shown in Ta cide whether the improvement by method A over method B is significant, the t-test calculates a value p based on the performance of A and B. The smaller p is, the more significant is the improvement. If the p is small enough (p < 0.05), we conclude that the improvement is statistically significant. In Table 1 , the superscript \u03b1 and \u03b2 respectively denote statistically significant improvements over TWE and PV-DM.", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 189, "text": "Ta", "ref_id": null }, { "start": 492, "end": 499, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Text Classification", "sec_num": "4.4" }, { "text": "Without regard to attention-based model firstly, we could conclude that CSE-2 outperforms all the baselines significantly (expect for recall in Twitter). This fully indicates that the proposed model could capture more precise semantic information of sentence as compared to topic model-based models and other embedding models. Because the concepts we obtained contribute significantly to the semantic representation of sentence, meanwhile suffer slightly from texts noisy and sparsity. Moreover, as compared to BOW, CSE-1 and CSE-2 manage to reduce the feature space by 90 percent, while among them, CSE-2 needs to store less data comparing with CSE-1. 
By introducing attention model, performances of CSE models are entirely promoted, as compared aCSE-2 with original CSE-2, which demonstrates the advantage of attention model. PV-DM and PV-DBOW are reported as the state-of-the-art model for sentence embedding. From the results we can also see that, the proposed model CSE-2 and aCSE-2 significantly outperforms PV-DBOW. As expected, LDA performs worst, even worse than BOW, because it is trained on very sparse short-texts (i.e., question and social media text), where there is no enough statistical information to infer word co-occurrence and word topics, and latent topic model suffer extremely from the sparsity of the short-text. Besides, the number of topics slightly impacts the performance of LDA. In future, we may conduct more experiments to explore genuine reasons. As described in section 3, aCSE-2 (CSE-2) performs better than aCSE-1 (CSE-1), because the former one take word order into consideration. Based on Skip-Gram similarly, CSE-2 outperforms TWE. Although TWE aims at enhancing sentence representation by using topic model, neither parsing nor topic modeling would work well because shorttexts lack enough signals for inference. Whats more, sentence embeedings are generated by simple aggregating over all topical word embeddings of each word in this sentence in TWE, which limits its capability of semantic representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Classification", "sec_num": "4.4" }, { "text": "Overall, nearly all the alternative algorithms perform worse on Twitter, especially LDA and TWE. This is mainly because that data in Twitter are more challenging for topic model as short-texts are noisy, sparse, and ambiguous. Although the training on larger corpus, i.e., way (i), contributes greatly to improving the performance of these topic-model based algorithms, they only have similar performance to CSE-1 and could not transcend the attention-based variants. Certainly, we could also train TWE (even LDA) on a very larger corpus, and could expect a letter better results. However, training latent topic model on very large dataset is very slow, although many fast algorithms of topic models are available (Smola and Narayanamurthy, 2010; Ahmed et al., 2012) . Whats more, from the complexity analysis, we could conclude that, compared with PV, CSE only need a little more space to store look-ups matrix D and R; while compared with CSE and PV, TWE require more parameters to store more discriminative information for word embedding.", "cite_spans": [ { "start": 714, "end": 746, "text": "(Smola and Narayanamurthy, 2010;", "ref_id": "BIBREF21" }, { "start": 747, "end": 766, "text": "Ahmed et al., 2012)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Text Classification", "sec_num": "4.4" }, { "text": "The information retrieval task is also utilized to evaluate the proposed models, and we want to examine whether a sentence should be retrieved given a query. Specially, we mainly focus on shorttext retrieval by utilizing official tweet collection Tweet11, which is the benchmark dataset for microblog retrieval. We index all tweets in this collection by using Indri toolkit, and then perform a general relevance-pseudo feedback procedure, as follows: (i) Given a query, we firstly obtain associated tweets, which are before query issue time, via preliminary retrieval as feedback tweets. 
(ii) We generate the sentence representation vector of both original query and these feedback tweets by the alternative algorithms above. (iii) With efforts above, we compute cosine scores between query vector and each tweet vector to measure the semantic similarity between the query and candidate tweets, and then re-rank the feedback tweets with descending cosine scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval", "sec_num": "4.5" }, { "text": "We utilize the official metric for the TREC Microblog track, i.e., Precision at 30 (P@30), and Mean Average Precision (MAP), for evaluating the ranking performance of different algorithms. Experimental results for this task are shown in Table 2. Besides, we also operate a query-by-query analysis and conduct t-test to demonstrate the improvements on both metrics are statistically significant. In Table 2 , the superscript \u03b1 and \u03b2 respectively denote statistically significant improvements over TWE and PV-DM (p < 0.05).", "cite_spans": [], "ref_spans": [ { "start": 398, "end": 405, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Information Retrieval", "sec_num": "4.5" }, { "text": "As shown in Table 2 , the CSE-2 significantly outperforms all these models, and exceeds the best baseline model (TWE) by 11.9% in MAP and 4.5% in P@30, which is a statistically significant improvement. Without regard to attention-based model firstly, such an improvement comes from the CSE-2's ability to embed the contextual and semantic information of the sentences into a finite dimension vector. Topic model based algorithms (e.g., LDA and TWE) suffer extremely from the sparsity and noise of tweet collection. For the twitter data, since we are not able to find appropriate long texts, latent topic models are not performed.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Information Retrieval", "sec_num": "4.5" }, { "text": "We could observe that attention-based CSE model (aCSE-1 and aCSE-2) improves over o- riginal CSE model (CSE-1 and CSE-2). However, attention model promotes CSE-1 significantly, while aCSE-2 obtain similar results compared to CSE-2, indicating that attention model leads to small improvement for Skip-Gram based CSE model. We argue that it is because Skip-Gram itself gives less weight to the distant words by sampling less from those words, which is essentially similar to attention model somehow.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Retrieval", "sec_num": "4.5" }, { "text": "By inducing concept information, the proposed conceptual sentence embedding maintains and enhances the semantic information of sentence embedding. Furthermore, we extend the proposed models by introducing attention model, which allows it to consider contextual words within the window in a non-uniform way while maintaining the efficiency. We compare them with different algorithms, including bag-of-word models, topic model-based model and other state-of-the-art sentence embedding models. The experimental results demonstrate that the proposed method performs the best and shows improvement over the compared methods, especially for short-texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "http://hlipca.org/index. 
php/2014-12-09-02-55-58/ 2014-12-09-02-56-24/58-acse", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://en.wikipedia.org/wiki/ Wikipedia:Databasedown-load", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Scalable inference in latent variable models", "authors": [ { "first": "Amr", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Moahmed", "middle": [], "last": "Aly", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Gonzalez", "suffix": "" }, { "first": "Shravan", "middle": [], "last": "Narayanamurthy", "suffix": "" }, { "first": "Alexander", "middle": [ "J" ], "last": "Smola", "suffix": "" } ], "year": 2012, "venue": "International Conference on Web Search and Web Data Mining, WSDM 2012", "volume": "", "issue": "", "pages": "123--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amr Ahmed, Moahmed Aly, Joseph Gonzalez, Shra- van Narayanamurthy, and Alexander J. Smola. 2012. Scalable inference in latent variable model- s. In International Conference on Web Search and Web Data Mining, WSDM 2012, Seattle, Wa, Usa, February, pages 123-132.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. Eprint Arxiv.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Tailoring continuous word representations for dependency parsing", "authors": [ { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Livescu", "suffix": "" } ], "year": 2014, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "809--815", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In Meeting of the Association for Computational Linguistics, pages 809-815.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Latent dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. 
Journal of Ma- chine Learning Research, 3:993-1022.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Liblinear: A library for large linear classification", "authors": [ { "first": "Rongen", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Kaiwei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Jui", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Xiangrui", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "Chih Jen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2010, "venue": "Journal of Machine Learning Research", "volume": "9", "issue": "12", "pages": "1871--1874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rongen Fan, Kaiwei Chang, Cho Jui Hsieh, Xiangrui Wang, and Chih Jen Lin. 2010. Liblinear: A library for large linear classification. Journal of Machine Learning Research, 9(12):1871-1874.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Distributional Structure", "authors": [ { "first": "S", "middle": [], "last": "Zellig", "suffix": "" }, { "first": "", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1970, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig S. Harris. 1970. Distributional Structure. Springer Netherlands.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning deep structured semantic models for web search using clickthrough data", "authors": [ { "first": "Posen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Acero", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Heck", "suffix": "" } ], "year": 2013, "venue": "ACM International Conference on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "2333--2338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Posen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In ACM International Confer- ence on Conference on Information and Knowledge Management, pages 2333-2338.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "", "volume": "4", "issue": "", "pages": "1188--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc V. Le and Tomas. Mikolov. 2014. Distributed representations of sentences and documents. Eprint Arxiv, 4:1188-1196.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning question classifiers", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2002, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Li and Dan Roth. 2002. Learning question classi- fiers. 
In COLING.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Tat-Seng Chua, and Maosong Sun", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Twenty-Ninth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embeddings. In Twenty- Ninth AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Dependency-based convolutional neural networks for sentence embedding", "authors": [ { "first": "Mingbo", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2015, "venue": "Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingbo Ma, Liang Huang, Bing Xiang, and Bowen Zhou. 2015. Dependency-based convolutional neu- ral networks for sentence embedding. In Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Lan- guage Processing.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Computer Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. 
Computer Science.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "26", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corra- do, and Jeffrey Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. Advances in Neural Information Processing Systems, 26:3111-3119.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Hierarchical probabilistic neural network language model", "authors": [ { "first": "Frederic", "middle": [], "last": "Morin", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frederic Morin and Yoshua Bengio. 2005. Hierar- chical probabilistic neural network language model. Aistats.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Overview of the trec-2011 microblog track", "authors": [ { "first": "Iadh", "middle": [], "last": "Ounis", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Macdonald", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Soboroff", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iadh Ounis, Craig MacDonald, Jimmy Lin, and Ian Soboroff. 2011. Overview of the trec-2011 mi- croblog track.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Deep sentence embedding using the long short term memory network: Analysis and application to information retrieval", "authors": [ { "first": "H", "middle": [], "last": "Palangi", "suffix": "" }, { "first": "Y", "middle": [], "last": "Deng", "suffix": "" }, { "first": "J", "middle": [], "last": "Shen", "suffix": "" }, { "first": "", "middle": [], "last": "Gao", "suffix": "" }, { "first": "", "middle": [], "last": "He", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" }, { "first": "R", "middle": [], "last": "Song", "suffix": "" }, { "first": "", "middle": [], "last": "Ward", "suffix": "" } ], "year": 2015, "venue": "Arxiv", "volume": "24", "issue": "4", "pages": "694--707", "other_ids": {}, "num": null, "urls": [], "raw_text": "H Palangi, L Deng, Y Shen, J Gao, X He, J Chen, X Song, and R Ward. 2015. Deep sentence em- bedding using the long short term memory network: Analysis and application to information retrieval. 
Arxiv, 24(4):694-707.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning representations by backpropagating errors", "authors": [ { "first": "David", "middle": [ "E" ], "last": "Rumelhart", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Ronald", "middle": [ "J" ], "last": "Williams", "suffix": "" } ], "year": 1986, "venue": "Nature", "volume": "323", "issue": "6088", "pages": "533--536", "other_ids": {}, "num": null, "urls": [], "raw_text": "David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by back- propagating errors. Nature, 323(6088):533-536.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A neural attention model for abstractive sentence summarization", "authors": [ { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Introduction to modern information retrieval", "authors": [ { "first": "Gerard", "middle": [], "last": "Salton", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Mcgill", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerard Salton and Michael J. Mcgill. 1986. Introduc- tion to modern information retrieval. McGraw-Hill,.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A syntax-aware re-ranker for microblog retrieval", "authors": [ { "first": "Aliaksei", "middle": [], "last": "Severyn", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Manos", "middle": [], "last": "Tsagkias", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Berendsen", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2014, "venue": "SIGIR", "volume": "", "issue": "", "pages": "1067--1070", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aliaksei Severyn, Alessandro Moschitti, Manos T- sagkias, Richard Berendsen, and Maarten De Rijke. 2014. A syntax-aware re-ranker for microblog re- trieval. In SIGIR, pages 1067-1070.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "An architecture for parallel topic models", "authors": [ { "first": "Alexander", "middle": [], "last": "Smola", "suffix": "" }, { "first": "Shravan", "middle": [], "last": "Narayanamurthy", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Vldb Endowment", "volume": "3", "issue": "", "pages": "703--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Smola and Shravan Narayanamurthy. 2010. An architecture for parallel topic models. 
Proceed- ings of the Vldb Endowment, 3(1):703-710.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Overview of the trec-2012 microblog track", "authors": [ { "first": "Ian", "middle": [], "last": "Soboroff", "suffix": "" }, { "first": "Iadh", "middle": [], "last": "Ounis", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Macdonald", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2012, "venue": "TREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Soboroff, Iadh Ounis, Craig MacDonald, and Jim- my Lin. 2012. Overview of the trec-2012 microblog track. In TREC.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Short text conceptualization using a probabilistic knowledgebase", "authors": [ { "first": "Yangqiu", "middle": [], "last": "Song", "suffix": "" }, { "first": "Haixun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhongyuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hongsong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Twenty-Second international joint conference on Artificial Intelligence -Volume Volume Three", "volume": "", "issue": "", "pages": "2330--2336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yangqiu Song, Haixun Wang, Zhongyuan Wang, Hongsong Li, and Weizhu Chen. 2011. Short text conceptualization using a probabilistic knowledge- base. In Proceedings of the Twenty-Second inter- national joint conference on Artificial Intelligence - Volume Volume Three, pages 2330-2336.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Open domain short text conceptualization: a generative + descriptive modeling approach", "authors": [ { "first": "Yangqiu", "middle": [], "last": "Song", "suffix": "" }, { "first": "Shusen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Haixun", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yangqiu Song, Shusen Wang, and Haixun Wang. 2015. Open domain short text conceptualization: a gener- ative + descriptive modeling approach. In Proceed- ings of the 24th International Conference on Artifi- cial Intelligence.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Not all contexts are created equal: Better word representations with variable attention", "authors": [ { "first": "Ling", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tsvetkov", "middle": [], "last": "Yulia", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Silvio", "suffix": "" }, { "first": "Fermandez", "middle": [], "last": "Ramon", "suffix": "" }, { "first": "Dyer", "middle": [], "last": "Chris", "suffix": "" }, { "first": "W", "middle": [], "last": "Black Alan", "suffix": "" }, { "first": "Trancoso", "middle": [], "last": "Isabel", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Chu-Cheng", "suffix": "" } ], "year": 2015, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1367--1372", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ling Wang, Tsvetkov Yulia, Amir Silvio, Fermandez Ramon, Dyer Chris, Black Alan W, Trancoso Isabel, and Lin Chu-Cheng. 2015a. 
Not all contexts are created equal: Better word representations with vari- able attention. In Conference on Empirical Methods in Natural Language Processing, pages 1367-1372.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Syntax-based deep matching of short texts", "authors": [ { "first": "Mingxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Computer Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2015b. Syntax-based deep matching of short texts. Computer Science.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Query understanding through knowledge-based conceptualization", "authors": [ { "first": "Zhongyuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kejun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Haixun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaofeng", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Ji-Rong", "middle": [], "last": "Wen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhongyuan Wang, Kejun Zhao, Haixun Wang, Xi- aofeng Meng, and Ji-Rong Wen. 2015c. Query understanding through knowledge-based conceptu- alization. In Proceedings of the 24th International Conference on Artificial Intelligence.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Towards universal paraphrastic sentence embeddings", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Livescu", "suffix": "" } ], "year": 2015, "venue": "Computer Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Towards universal paraphrastic sen- tence embeddings. Computer Science.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Probase: a probabilistic taxonomy for text understanding", "authors": [ { "first": "Wentao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hongsong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haixun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kenny", "middle": [ "Q" ], "last": "Zhu", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data", "volume": "", "issue": "", "pages": "481--492", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wentao Wu, Hongsong Li, Haixun Wang, and Ken- ny Q. Zhu. 2012. Probase: a probabilistic taxonomy for text understanding. In Proceedings of the 2012 ACM SIGMOD International Conference on Man- agement of Data, pages 481-492.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "(a) CBOW model and (b) Skip-Gram model.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "CSE-1 model (a) and CSE-2 model (b). 
Green circles indicate word embeddings, blue circles indicate concept embeddings, and purple circles indicate sentence embeddings. In addition, orange circles indicate the concept distribution \u03b8_C generated by the knowledge-based text conceptualization algorithm.", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "and Figure 3.", "uris": null, "num": null }, "TABREF0": { "type_str": "table", "text": "\u03b1\u03b2 0.820 \u03b1\u03b2 0.825 \u03b1\u03b2 0.477 \u03b1\u03b2 0.450 \u03b1\u03b2 0.463 \u03b1\u03b2 0.909 \u03b1\u03b2 0.904 \u03b1\u03b2 0.906 \u03b1\u03b2", "num": null, "content": "
Model     NewsTitle                    Twitter                      TREC
          P        R        F          P        R        F          P        R        F
BOW       0.782    0.791    0.786      0.437    0.429    0.433      0.892    0.891    0.891
LDA       0.717    0.705    0.711      0.342    0.308    0.324      0.813    0.809    0.811
PV-DBOW   0.725    0.719    0.722      0.413    0.408    0.410      0.824    0.819    0.821
PV-DM     0.748    0.740    0.744      0.426    0.424    0.425      0.836    0.825    0.830
TWE       0.811β   0.803β   0.807β     0.459β   0.438    0.448β     0.898β   0.886β   0.892β
CSE-1     0.815    0.809    0.812      0.461    0.449    0.454      0.896    0.890    0.893
CSE-2     0.827    0.817    0.822      0.475    0.447    0.462      0.901    0.895    0.898
aCSE-1    0.824    0.818    0.821      0.471    0.454    0.462      0.901    0.897    0.899
aCSE-2    0.831αβ  0.820αβ  0.825αβ    0.477αβ  0.450αβ  0.463αβ    0.909αβ  0.904αβ  0.906αβ
Statistical t-tests are employed here.
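Note (added for clarity, not part of the original table): the P/R/F columns above are precision, recall, and F1 per dataset. The short Python sketch below shows one common way such multi-class scores are computed, assuming macro-averaging over classes; the paper does not state its exact averaging scheme here, and all function and variable names are illustrative.

```python
from collections import defaultdict

def macro_prf(gold, pred):
    """Macro-averaged precision, recall and F1 for multi-class labels.

    gold, pred: parallel lists of class labels.
    Generic illustration only, not the authors' evaluation script.
    """
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1   # predicted class p, but gold was g
            fn[g] += 1   # missed an instance of class g
    labels = set(gold) | set(pred)
    prec = [tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0 for c in labels]
    rec = [tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0 for c in labels]
    p, r = sum(prec) / len(labels), sum(rec) / len(labels)
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f

# toy usage with generic labels
print(macro_prf(["a", "a", "b", "c"], ["a", "b", "b", "c"]))
```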
", "html": null }, "TABREF1": { "type_str": "table", "text": "Evaluation results of multi-class text classification task.", "num": null, "content": "", "html": null }, "TABREF2": { "type_str": "table", "text": ".412 0.321 0.494 LDA 0.281 0.409 0.311 0.486 PV-DBOW 0.285 0.412 0.324 0.491 \u03b1\u03b2 0.464 \u03b1\u03b2 0.364 \u03b1\u03b2 0.522 \u03b1\u03b2", "num": null, "content": "
Model     TMB2011              TMB2012
          MAP      P@30        MAP      P@30
BOW       0.304    0.412       0.321    0.494
LDA       0.281    0.409       0.311    0.486
PV-DBOW   0.285    0.412       0.324    0.491
PV-DM     0.327    0.431       0.340    0.524
TWE       0.331    0.446β      0.347β   0.511
CSE-1     0.337    0.451       0.344    0.512
CSE-2     0.367    0.461       0.360    0.517
aCSE-1    0.342    0.459       0.351    0.516
aCSE-2    0.370αβ  0.464αβ     0.364αβ  0.522αβ
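Note (added for clarity, not part of the original table): MAP is mean average precision and P@30 is precision at rank 30, the standard TREC Microblog measures reported above. The minimal sketch below computes both under the assumption of binary relevance judgments; all names are illustrative.

```python
def average_precision(ranking, relevant):
    """AP for one query: mean of precision at each rank where a relevant doc appears."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(run, qrels):
    """MAP over all queries. run: query -> ranked doc ids; qrels: query -> set of relevant ids."""
    return sum(average_precision(run[q], qrels.get(q, set())) for q in run) / len(run)

def precision_at_k(ranking, relevant, k=30):
    """P@k: fraction of the top-k retrieved documents that are relevant (e.g. P@30 above)."""
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

# toy usage with a single query
run = {"q1": ["d3", "d1", "d7", "d2"]}
qrels = {"q1": {"d1", "d2"}}
print(mean_average_precision(run, qrels), precision_at_k(run["q1"], qrels["q1"], k=4))
```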
", "html": null }, "TABREF3": { "type_str": "table", "text": "Results of information retrieval.", "num": null, "content": "", "html": null } } } }