{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:20.388147Z" }, "title": "User Factor Adaptation for User Embedding via Multitask Learning", "authors": [ { "first": "Xiaolei", "middle": [], "last": "Huang", "suffix": "", "affiliation": {}, "email": "xiaolei.huang@memphis.edu" }, { "first": "Michael", "middle": [ "J" ], "last": "Paul", "suffix": "", "affiliation": {}, "email": "mpaul@colorado.edu" }, { "first": "Robin", "middle": [], "last": "Burke", "suffix": "", "affiliation": {}, "email": "robin.burke@colorado.edu" }, { "first": "Franck", "middle": [], "last": "Dernoncourt", "suffix": "", "affiliation": {}, "email": "dernonco@adobe.com" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "", "affiliation": {}, "email": "mdredze@cs.jhu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Language varies across users and their interested fields in social media data: words authored by a user across his/her interests may have different meanings (e.g., cool) or sentiments (e.g., fast). However, most of the existing methods to train user embeddings ignore the variations across user interests, such as product and movie categories (e.g., drama vs. action). In this study, we treat the user interest as domains and empirically examine how the user language can vary across the user factor in three English social media datasets. We then propose a user embedding model to account for the language variability of user interests via a multitask learning framework. The model learns user language and its variations without human supervision. While existing work mainly evaluated the user embedding by extrinsic tasks, we propose an intrinsic evaluation via clustering and evaluate user embeddings by an extrinsic task, text classification. 
The experiments on the three English-language social media datasets show that our proposed approach can generally outperform baselines via adapting the user factor.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Language varies across users and their interested fields in social media data: words authored by a user across his/her interests may have different meanings (e.g., cool) or sentiments (e.g., fast). However, most of the existing methods to train user embeddings ignore the variations across user interests, such as product and movie categories (e.g., drama vs. action). In this study, we treat the user interest as domains and empirically examine how the user language can vary across the user factor in three English social media datasets. We then propose a user embedding model to account for the language variability of user interests via a multitask learning framework. The model learns user language and its variations without human supervision. While existing work mainly evaluated the user embedding by extrinsic tasks, we propose an intrinsic evaluation via clustering and evaluate user embeddings by an extrinsic task, text classification. The experiments on the three English-language social media datasets show that our proposed approach can generally outperform baselines via adapting the user factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Language varies across user factors including user interests, demographic attributes, personalities, and latent factors from user history. Research shows that language usage diversifies according to online user groups (Volkova et al., 2013) ; for example, women were more likely to use the word weakness in a positive way while men showed the opposite tendency. In social media, the user interests can include topics of user reviews (e.g., home vs. 
health services in Yelp) and categories of reviewed items (electronic vs kitchen products in Amazon). The ways that users express themselves depend on current contexts of user interests (Oba et al., 2019) that users may use the same words for opposite meanings and different words for the same meaning. For example, online users can use the word \"fast\" to criticize battery quality of the electronic domain or praise medicine effectiveness of the medical products; users can also use the words \"cool\" to describe a property of AC products or express sentiments.", "cite_spans": [ { "start": 218, "end": 240, "text": "(Volkova et al., 2013)", "ref_id": "BIBREF43" }, { "start": 450, "end": 455, "text": "Yelp)", "ref_id": null }, { "start": 617, "end": 635, "text": "(Oba et al., 2019)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "User embedding, which is to learn a fixed-length representation based on multiple user reviews of each user, can infer the user latent information into a unified vector space (Benton, 2018; Pan and Ding, 2019) . The inferred latent representations from online content can predict user profile (Volkova et al., 2015; Wang et al., 2018; Farnadi et al., 2018; Lynn et al., 2020) and behaviors (Zhang et al., 2015; Amir et al., 2017; Benton et al., 2017; Ding et al., 2017) . User embeddings can personalize classification models, and further improve model performance (Tang et al., 2015; Chen et al., 2016a; Yang and Eisenstein, 2017; Zeng et al., 2019; . 
The representations of user language can help models better understand documents as global contexts.", "cite_spans": [ { "start": 175, "end": 189, "text": "(Benton, 2018;", "ref_id": "BIBREF2" }, { "start": 190, "end": 209, "text": "Pan and Ding, 2019)", "ref_id": "BIBREF34" }, { "start": 293, "end": 315, "text": "(Volkova et al., 2015;", "ref_id": "BIBREF42" }, { "start": 316, "end": 334, "text": "Wang et al., 2018;", "ref_id": "BIBREF44" }, { "start": 335, "end": 356, "text": "Farnadi et al., 2018;", "ref_id": "BIBREF13" }, { "start": 357, "end": 375, "text": "Lynn et al., 2020)", "ref_id": "BIBREF28" }, { "start": 390, "end": 410, "text": "(Zhang et al., 2015;", "ref_id": "BIBREF55" }, { "start": 411, "end": 429, "text": "Amir et al., 2017;", "ref_id": "BIBREF0" }, { "start": 430, "end": 450, "text": "Benton et al., 2017;", "ref_id": "BIBREF4" }, { "start": 451, "end": 469, "text": "Ding et al., 2017)", "ref_id": "BIBREF12" }, { "start": 565, "end": 584, "text": "(Tang et al., 2015;", "ref_id": "BIBREF41" }, { "start": 585, "end": 604, "text": "Chen et al., 2016a;", "ref_id": "BIBREF7" }, { "start": 605, "end": 631, "text": "Yang and Eisenstein, 2017;", "ref_id": "BIBREF51" }, { "start": 632, "end": 650, "text": "Zeng et al., 2019;", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, existing user embedding methods (Amir et al., 2016; Benton et al., 2016; Xing and Paul, 2017; Pan and Ding, 2019 ) mainly focus on extracting features from language itself while ignoring user interests. Recent research has demonstrated that adapting the user factors can further improve user geolocation prediction (Miura et al., 2017) , demographic attribute prediction (Farnadi et al., 2018) , and sentiment analysis (Yang and Eisenstein, 2017) . Lynn et al. 
(2017) ; Huang and Paul (2019) treated the language variations as a domain adaptation problem and referred to this idea as user factor adaptation.", "cite_spans": [ { "start": 41, "end": 60, "text": "(Amir et al., 2016;", "ref_id": "BIBREF1" }, { "start": 61, "end": 81, "text": "Benton et al., 2016;", "ref_id": "BIBREF3" }, { "start": 82, "end": 102, "text": "Xing and Paul, 2017;", "ref_id": "BIBREF50" }, { "start": 103, "end": 121, "text": "Pan and Ding, 2019", "ref_id": "BIBREF34" }, { "start": 324, "end": 344, "text": "(Miura et al., 2017)", "ref_id": "BIBREF32" }, { "start": 380, "end": 402, "text": "(Farnadi et al., 2018)", "ref_id": "BIBREF13" }, { "start": 428, "end": 455, "text": "(Yang and Eisenstein, 2017)", "ref_id": "BIBREF51" }, { "start": 458, "end": 476, "text": "Lynn et al. (2017)", "ref_id": "BIBREF30" }, { "start": 479, "end": 500, "text": "Huang and Paul (2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study, we treat the user interest as domains (e.g., restaurants vs. home services domains) and propose a multitask framework to model language variations and incorporate the user factor into user embeddings. We focus on three online review datasets from Amazon, IMDb, and Yelp containing diverse behaviors conditioned on user interests, which refer to genres of reviewed items. For example, if any Yelp users have reviews on items of the home services, then their user interests will include the home services.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We start with exploring how the user factor, user interest, can cause language and classification variations in Section 3. We then propose our user embedding model that adapts the user interests using a multitask learning framework in Section 4. 
Prior work (Pan and Ding, 2019) generally evaluates user embeddings via downstream tasks, but user annotations are sometimes hard to obtain, and those evaluations are extrinsic rather than intrinsic. For example, the MyPersonality dataset (Kosinski et al., 2015) used in previous work (Ding et al., 2017; Farnadi et al., 2018; Pan and Ding, 2019) is no longer available, and an extrinsic task only evaluates whether user embeddings help text classifiers. Research (Schnabel et al., 2015) suggests that intrinsic evaluations, including clustering, are preferable to extrinsic evaluations because they involve fewer hyperparameters. We propose an intrinsic evaluation for user embeddings, which can provide a new perspective for future evaluations. We show that our user-factor-adapted user embedding can generally outperform the existing methods on both intrinsic and extrinsic tasks.", "cite_spans": [ { "start": 255, "end": 275, "text": "(Pan and Ding, 2019)", "ref_id": "BIBREF34" }, { "start": 482, "end": 505, "text": "(Kosinski et al., 2015)", "ref_id": "BIBREF24" }, { "start": 537, "end": 556, "text": "(Ding et al., 2017;", "ref_id": "BIBREF12" }, { "start": 557, "end": 578, "text": "Farnadi et al., 2018;", "ref_id": "BIBREF13" }, { "start": 579, "end": 598, "text": "Pan and Ding, 2019)", "ref_id": "BIBREF34" }, { "start": 715, "end": 738, "text": "(Schnabel et al., 2015)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We collected English reviews of Amazon (health product), IMDb and Yelp from publicly available sources (He and McAuley, 2016; Yelp, 2018; IMDb, 2020) . For the IMDb dataset, we included English movies produced in the US from 1960 to 2019. Each review is associated with its author and the rated item, which refers to a movie in the IMDb data, a business unit in the Yelp data and a product in the Amazon data. 
For consistency, we retain the four most frequent genres of rated items in each dataset and the review documents with at least 10 tokens. 1 We dropped non-English review documents using a language detector (Lui and Baldwin, 2012) , lowercased all tokens and tokenized the corpora using NLTK (Bird and Loper, 2004) . The review datasets use different rating scales. We normalize the scales and encode each review score into three discrete categories: positive (> 3 for the Yelp and Amazon, > 6 for the IMDb), negative (< 3 for the Yelp and Amazon, < 5 for the IMDb) and neutral. Table 1 shows a summary of the datasets.", "cite_spans": [ { "start": 107, "end": 129, "text": "(He and McAuley, 2016;", "ref_id": "BIBREF18" }, { "start": 130, "end": 141, "text": "Yelp, 2018;", "ref_id": "BIBREF52" }, { "start": 142, "end": 153, "text": "IMDb, 2020)", "ref_id": null }, { "start": 549, "end": 550, "text": "1", "ref_id": null }, { "start": 616, "end": 639, "text": "(Lui and Baldwin, 2012)", "ref_id": null }, { "start": 701, "end": 723, "text": "(Bird and Loper, 2004)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 988, "end": 995, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "To protect user privacy, we anonymize all user-related information via hashing, and our experiments only use publicly available datasets for research demonstration. Any URLs, hashtags and capitalized English names were removed. Due to the potential sensitivity of user reviews, we only use information necessary for this study. We do not use any user profile in our experiments, except that our evaluations use the anonymized author ID of each review entry for training user embeddings. We will not release any private user reviews associated with user identities. 
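A minimal sketch of the score-bucketing rule described above; the thresholds come from the text, while the dataset keys and function name are illustrative:

```python
# Map raw review scores to discrete sentiment labels as described in the
# Data section: Yelp/Amazon use a 1-5 star scale, IMDb a 1-10 rating scale.
def encode_score(score: float, dataset: str) -> str:
    """Encode a review score as positive/negative/neutral."""
    if dataset in ("yelp", "amazon"):
        pos_min, neg_max = 3, 3   # > 3 positive, < 3 negative
    elif dataset == "imdb":
        pos_min, neg_max = 6, 5   # > 6 positive, < 5 negative
    else:
        raise ValueError(f"unknown dataset: {dataset}")
    if score > pos_min:
        return "positive"
    if score < neg_max:
        return "negative"
    return "neutral"
```

Scores between the two thresholds (e.g., an IMDb rating of 5 or 6) fall into the neutral bucket.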
Instead, we will open source our source code and provide instructions of how to access the public datasets in enough detail so that our proposed method can be replicated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Privacy Considerations", "sec_num": "2.1" }, { "text": "Language varies across user factors such as user interests (Oba et al., 2019) , demographic attributes (Huang and Paul, 2019) , social relations (Yang and Eisenstein, 2017; Gong et al., 2020) . In this section, our goal is to quantitatively analyze whether the user interests cause user language variations, which can reduce effectiveness and robustness of user embeddings. We approach this by two analysis tasks, first by measuring word feature similarity based on user interests, and second by examining how classifier performance depends on the grouped user interests in which the model is trained and applied.", "cite_spans": [ { "start": 59, "end": 77, "text": "(Oba et al., 2019)", "ref_id": "BIBREF33" }, { "start": 103, "end": 125, "text": "(Huang and Paul, 2019)", "ref_id": "BIBREF22" }, { "start": 145, "end": 172, "text": "(Yang and Eisenstein, 2017;", "ref_id": "BIBREF51" }, { "start": 173, "end": 191, "text": "Gong et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Exploratory Analysis of User Variations", "sec_num": "3" }, { "text": "Existing methods mainly infer user embeddings from features of text contents (Pan and Ding, 2019) . Therefore, word usage variations across user interests will change word distributions and further impact the stability of user embeddings. We aim to test whether there are language variations across the user interests in our datasets and how strong they are. We consider the word usage as it relates to user embeddings by estimating the overlap of top word features across the genres of rated items, the categories of reviewed products in Amazon, business units in Yelp and movies in IMDb. 
To mitigate data sparsity caused by individual user preferences, we grouped users, and therefore their generated documents, according to the genres of the items they reviewed. We refer to these groups as genre domains. We build a unified feature vectorizer (Pedregosa et al., 2011) with TF-IDF weighted n-gram features (n \u2208 {1, 2, 3}), removing features that appeared in fewer than 2 documents. We rank and select the top 1000 word features for each genre domain by mutual information. We then compute the intersection percentage between every two genre domains: let F 0 be the set of top features for one genre domain and F 1 be the set of top features for the other domain; the overlap is then |F 0 \u2229 F 1 |/1000. We show the results in Figure 1 . The overlap varies significantly across genre domains. This indicates that users' word usage and its contexts change across user interests and preferences. Since the training of user embeddings relies heavily on the language features of users, it is important to account for language variations in user interests when building user embeddings.", "cite_spans": [ { "start": 77, "end": 97, "text": "(Pan and Ding, 2019)", "ref_id": "BIBREF34" }, { "start": 821, "end": 845, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 1301, "end": 1309, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Word Usage Variations", "sec_num": "3.1" }, { "text": "User embeddings are effective for understanding user behaviors in classification settings (Amir et al., 2016; Ding et al., 2018) . Research has found that combining user and document representations can benefit classification performance (Chen et al., 2016b; Yuan et al., 2019) . 
We explore how the language variations in user interests can affect classification models.", "cite_spans": [ { "start": 89, "end": 108, "text": "(Amir et al., 2016;", "ref_id": "BIBREF1" }, { "start": 109, "end": 127, "text": "Ding et al., 2018)", "ref_id": "BIBREF11" }, { "start": 237, "end": 257, "text": "(Chen et al., 2016b;", "ref_id": "BIBREF8" }, { "start": 258, "end": 276, "text": "Yuan et al., 2019)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Classification Performance Variations", "sec_num": "3.2" }, { "text": "We conduct an analysis by training and testing classifiers on users grouped by the categories of reviewed items. We first group items and users according to item genres, which can be treated as different domains of user interests. For each domain, we downsampled documents, users and items within each group to match their numbers in the smallest group, so that classification performance differences are not due to differences in the amounts of documents, users and items. For each group of documents, we shuffle and split the data into training (80%) and test (20%) sets. We train logistic regression classifiers with default hyperparameters from scikit-learn (Pedregosa et al., 2011) using TF-IDF weighted uni-, bi- and tri-gram features. We report weighted F1 scores across grouped users and show the results in Figure 2 .", "cite_spans": [ { "start": 642, "end": 666, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 795, "end": 803, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Classification Performance Variations", "sec_num": "3.2" }, { "text": "We can observe that classification performance varies across the grouped users. Higher performance variations between in- and out-of-group users suggest higher user variations and vice versa. If no variations of user language existed, the performance of classifiers would be similar across the domains. 
The performance variations suggest that user behaviors vary across the categories of user interests. We can also observe that classification models generally perform better when tested within the same user groups and worse on other user groups. This suggests a connection between the variability of user interests and the language usage from which user embeddings are derived.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Performance Variations", "sec_num": "3.2" }, { "text": "We present the architecture of our proposed model in Figure 3 on the left. Existing methods (Pan and Ding, 2019) for training text-based user embeddings mainly focus on user-generated documents while ignoring user factors such as user interests. The work closest to ours trained user embeddings only by predicting whether users co-occurred with sampled words (Amir et al., 2017) . We extend this line of work by adapting user interests into the modeling steps. The proposed unsupervised model trains four joint tasks based on the Skip-Gram (Mikolov et al., 2013) : word and word, user and word, item and word, and user and item. Note that we do not use the categories of rated items and user interests in our training steps. We can then optimize the model by minimizing the following loss function:", "cite_spans": [ { "start": 83, "end": 103, "text": "(Pan and Ding, 2019)", "ref_id": "BIBREF34" }, { "start": 337, "end": 356, "text": "(Amir et al., 2017)", "ref_id": "BIBREF0" }, { "start": 514, "end": 536, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 53, "end": 61, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "L = L(w, w) + L(u, w) + L(p, w) + L(u, p)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "where w, u and p denote words, users and rated items respectively. 
Considering the large sizes of the vocabulary and the sets of users and rated items, we approximate our optimization objectives via negative sampling. We can then treat each task as a binary classification problem and compute loss values with binary cross-entropy. We present the details of each optimization task as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "Word and word is the standard way to train Word2vec (Mikolov et al., 2013) models. The task is to predict whether the sampled words co-occurred within the context window. The training process uses negative sampling to approximate the objective function. We choose 5 as the number of negative samples. We keep the 20,000 most frequent words and replace the rest with a special token, <unk>.", "cite_spans": [ { "start": 50, "end": 72, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "User and word predicts whether a user authored the sampled words given the contexts of the user's posts. The goal is to learn patterns of user language usage from the user's historical posts. Given a document i, its author u_i and the user's vocabulary", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "V_{u_i} = {w_{1,i}, ..., w_{j,i}, ..., w_{n,i}},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "where n is the number of frequent words authored by the user. 
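A toy sketch of the negative-sampling step shared by all four tasks, shown for the user-and-word case: an observed pair gets label 1, a negative sample gets label 0, and the sigmoid of the dot product feeds binary cross-entropy. The vectors here are random placeholders, not trained embeddings:

```python
# One negative-sampling step: score a (target, context) pair with a sigmoid
# over the dot product, then apply binary cross-entropy.
import numpy as np

rng = np.random.default_rng(0)
dim = 300
e_user = rng.normal(scale=0.1, size=dim)      # e(u), the user vector
e_word_pos = rng.normal(scale=0.1, size=dim)  # e(w_j), a word the user authored
e_word_neg = rng.normal(scale=0.1, size=dim)  # e(w_k), a negative sample

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, y):
    """Binary cross-entropy for predicted probability p and label y."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Observed pair pushed toward 1, negative sample pushed toward 0.
loss = bce(sigmoid(e_user @ e_word_pos), 1.0) + bce(sigmoid(e_user @ e_word_neg), 0.0)
```

In training, the gradient of this loss with respect to `e_user`, `e_word_pos` and `e_word_neg` updates the embedding tables.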
Our objective is to minimize the following function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "L(u, w) = \u2212 \u03a3_{w_j \u2208 V_{u_i}} \u03a3_{w_k \u2208 V, w_k \u2209 V_{u_i}} [log(\u03b8(e(u_i) \u2022 e(w_j))) + log(1 \u2212 \u03b8(e(u_i) \u2022 e(w_k)))]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "where w_k is a negative sample from the whole vocabulary V , e(u) and e(w) are fixed-length user and word vectors respectively, and \u03b8 is the sigmoid function applied to the dot product. We extend the previous work (Amir et al., 2017) to integrate both local and global user language usage by sampling w_j from a combined token list of both the input document and the user's vocabulary. This can help the model learn contextual information of each user.", "cite_spans": [ { "start": 223, "end": 242, "text": "(Amir et al., 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "Item and word follows the prediction task of user and word to classify whether sampled words describe the selected item. This task uses review documents to train representations of rated items. Then we can have Figure 3 : Illustrations of User Embedding via multitask learning framework on the left and personalized document classifiers using trained embedding models on the right. The arrows and their colors refer to the input directions and input sources respectively. We use the logos of people, shopping cart and ABC to represent users, reviewed items and word inputs. 
The \u2295 symbol denotes the concatenation operation.", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 219, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "L(p, w) = \u2212 \u03a3_{w_j \u2208 V_{p_i}} \u03a3_{w_k \u2208 V, w_k \u2209 V_{p_i}} [log(\u03b8(e(p_i) \u2022 e(w_j))) + log(1 \u2212 \u03b8(e(p_i) \u2022 e(w_k)))]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "where V_{p_i} is the vocabulary of the rated item p_i and w_k is a negative word sample. Language can be viewed as a bridge in the interactive relation between users and items: these tasks predict language usage for both rated items and users.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "User and item predicts whether a user commented on the sampled items. The prediction task aims to adapt latent user factors into the user embeddings. Given a document i, its author u_i and the reviewed item p_i, we can optimize the task by minimizing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "L(u, p) = \u2212 \u03a3_{p_k \u2208 P, p_k \u2209 P_{u_i}} [log(\u03b8(e(p_i) \u2022 e(u_i))) + log(1 \u2212 \u03b8(e(p_k) \u2022 e(u_i)))]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "where P is the collection of all items, P_{u_i} is the set of items reviewed by the user u_i, and p_k is a negative sample that the user did not review. The constraints between reviewed items and users can help user embeddings identify language variations across domains of item genres. 
And in turn, the relation of user and item can help infer item vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "For model settings, we used Adam (Kingma and Ba, 2014) for model optimization with a learning rate of 1e-5. We train for 5 epochs. The model initializes embedding vectors randomly and learns 300-dimensional representations for words, users and reviewed items. We empirically use 5 as the number of negative samples. For the other parameters, we keep the defaults in Keras (Chollet and Others, 2015).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multitask User Embedding", "sec_num": "4" }, { "text": "We evaluate the effectiveness of the user-factor-adapted embedding model by an intrinsic evaluation, a user clustering task, and an extrinsic evaluation, a personalized classification task. The first task measures the purity of clusters with respect to categories of user interests, and the second task uses document classification as a proxy for quantifying the quality of user embeddings. We also conduct a qualitative analysis of the user embeddings compared with the closest prior work (Amir et al., 2017) .", "cite_spans": [ { "start": 478, "end": 497, "text": "(Amir et al., 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The unsupervised evaluation of embedding models focuses on four main categories: relatedness, analogy, categorization and selectional preference (Schnabel et al., 2015) . We approach the user embedding evaluation by categorizing users into different clusters. User communities or groups gather users by their interests and behaviors, such as engaging with the same field of topics (Benton et al., 2016; Yang and Eisenstein, 2017) . 
In our datasets, the user-purchased Amazon products, the user-visited Yelp business units and the user-watched IMDb movies each have item categories. The categories can imply user preferences and interests, and therefore can help evaluate user clusters. In this study, our proposed multitask model learns interactive relations across language, user and item instead of using the item categories. We compare our proposed model with five other baseline models:", "cite_spans": [ { "start": 145, "end": 168, "text": "(Schnabel et al., 2015)", "ref_id": "BIBREF39" }, { "start": 379, "end": 400, "text": "(Benton et al., 2016;", "ref_id": "BIBREF3" }, { "start": 401, "end": 427, "text": "Yang and Eisenstein, 2017)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "User Clustering Evaluation", "sec_num": "5.1" }, { "text": "word2user represents users by aggregating word representations (Benton et al., 2016) . We compute a user representation by averaging embeddings of all tokens that were authored by the user. To obtain the word embeddings, for each dataset, we trained a word2vec model for 5 epochs using Gensim (Rehurek and Sojka, 2010) with 300-dimensional vectors.", "cite_spans": [ { "start": 63, "end": 84, "text": "(Benton et al., 2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "User Clustering Evaluation", "sec_num": "5.1" }, { "text": "lda2user generates user representations by applying Latent Dirichlet Allocation (LDA) (Blei et al., 2003) on user documents (Pennacchiotti and Popescu, 2011) . We set the number of topics as 300 and leave the rest of the parameters as their defaults in Gensim (Rehurek and Sojka, 2010) . 
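The word2user baseline can be sketched as below; `word_vecs` is a hypothetical stand-in for a trained word2vec keyed-vector lookup:

```python
# word2user sketch: a user's embedding is the mean of the embeddings of all
# tokens the user authored, skipping out-of-vocabulary tokens.
import numpy as np

def word2user(user_docs, word_vecs, dim=300):
    tokens = [t for doc in user_docs for t in doc.split() if t in word_vecs]
    if not tokens:
        return np.zeros(dim)  # user with no in-vocabulary tokens
    return np.mean([word_vecs[t] for t in tokens], axis=0)

# Toy "trained" vectors; a real run would use a Gensim word2vec model.
word_vecs = {"cool": np.ones(300), "fast": -np.ones(300)}
u = word2user(["cool fast", "cool"], word_vecs)
```

Note that this baseline is order- and context-insensitive: every occurrence of a token contributes equally to the user vector.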
We apply the LDA model on each user document to obtain a document vector, and then get a user vector by averaging the vectors of all the user's documents.", "cite_spans": [ { "start": 86, "end": 105, "text": "(Blei et al., 2003)", "ref_id": "BIBREF6" }, { "start": 124, "end": 157, "text": "(Pennacchiotti and Popescu, 2011)", "ref_id": "BIBREF36" }, { "start": 260, "end": 285, "text": "(Rehurek and Sojka, 2010)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "User Clustering Evaluation", "sec_num": "5.1" }, { "text": "doc2user applies paragraph2vec (Le and Mikolov, 2014) to obtain user vectors. We implemented the User-D-DBOW model which achieved the best performance in the previous work (Ding et al., 2017) . The implementation keeps parameters with default values in the Gensim (Rehurek and Sojka, 2010) . We aggregate each user's documents as a single document. Then the User-D-DBOW model can derive a single user vector from the aggregated document.", "cite_spans": [ { "start": 31, "end": 53, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF25" }, { "start": 172, "end": 191, "text": "(Ding et al., 2017)", "ref_id": "BIBREF12" }, { "start": 264, "end": 289, "text": "(Rehurek and Sojka, 2010)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "User Clustering Evaluation", "sec_num": "5.1" }, { "text": "bert2user follows a similar process of the lda2user. We use the \"bert-base-uncased\" pretrained BERT model for English from the transformers toolkit (Wolf et al., 2019) with default parameter and model settings. After inserting \"[CLS]\" and \"[SEP]\" to the beginning and end of each document, the BERT model encodes a document into a fixed-length (768) document vector. 
We can then generate user embeddings by averaging all of each user's document vectors.", "cite_spans": [ { "start": 148, "end": 167, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "User Clustering Evaluation", "sec_num": "5.1" }, { "text": "user2vec trains user embeddings by predicting word usage by users. We follow the existing work (Amir et al., 2017) but set the user vector dimension as 300.", "cite_spans": [ { "start": 95, "end": 114, "text": "(Amir et al., 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "User Clustering Evaluation", "sec_num": "5.1" }, { "text": "We use the SpectralClustering algorithm from the scikit-learn (Pedregosa et al., 2011) toolkit to cluster users with three cluster sizes: 4, 8 and 12. We set the affinity as cosine and leave the other parameters as their defaults. To measure cluster quality, we select every pair of users without repetition. We count a user pair as correct if the two users overlap in an item genre and are in the same cluster, or if they do not overlap and are in different clusters. Otherwise, we count the pair as wrong. This yields a list of predicted labels and ground truths using the item genres as a proxy. Finally, we measure the clustering purity by the F1 score.", "cite_spans": [ { "start": 58, "end": 82, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "User Clustering Evaluation", "sec_num": "5.1" }, { "text": "We present results in Table 2 . The results show that our multitask user embedding model outperforms the other baselines by a large margin on the IMDb and Yelp datasets. The improvements suggest the user-factor-adapted model can capture semantic variations in diverse user interests. Our model and user2vec achieve similar scores on the Amazon-Health dataset. 
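Our reading of the pairwise purity metric above can be sketched as follows; the clusters and genre sets are toy values:

```python
# Pairwise clustering F1: for every user pair, the prediction is
# "same cluster?" and the ground truth is "shared item genre?".
from itertools import combinations
from sklearn.metrics import f1_score

def pairwise_f1(cluster_ids, user_genres):
    """F1 over pairwise same-cluster predictions vs. shared-genre truths."""
    y_true, y_pred = [], []
    for i, j in combinations(range(len(cluster_ids)), 2):
        y_true.append(int(bool(user_genres[i] & user_genres[j])))
        y_pred.append(int(cluster_ids[i] == cluster_ids[j]))
    return f1_score(y_true, y_pred)

clusters = [0, 0, 1, 1]                  # predicted cluster per user
genres = [{"drama"}, {"drama"}, {"action"}, {"action", "drama"}]
score = pairwise_f1(clusters, genres)
```

Because it only compares pair labels, the metric is invariant to cluster-ID permutations, which makes it suitable for unsupervised evaluation.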
Compared with the other two datasets, the Amazon-Health data has more homogeneous topics across its review items.
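The pair-counting evaluation described earlier in this subsection can be sketched as follows. The helper names and toy data are ours; the paper scores the resulting predicted labels against the ground truths with F1.

```python
from itertools import combinations

def pairwise_labels(user_genres, user_clusters):
    """Derive, for every unordered user pair, a ground-truth label
    (the two users share an item genre) and a predicted label
    (the two users fall in the same cluster)."""
    y_true, y_pred = [], []
    for u, v in combinations(sorted(user_genres), 2):
        y_true.append(bool(user_genres[u] & user_genres[v]))  # genre overlap
        y_pred.append(user_clusters[u] == user_clusters[v])   # same cluster
    return y_true, y_pred

# Toy example: u1 and u2 share a genre and a cluster (a correct option);
# u3 shares neither genre nor cluster with the others (also correct).
genres = {"u1": {"drama"}, "u2": {"drama", "action"}, "u3": {"comedy"}}
clusters = {"u1": 0, "u2": 0, "u3": 1}
y_true, y_pred = pairwise_labels(genres, clusters)
```

A standard F1 score over `y_true` and `y_pred` (e.g., via scikit-learn) then gives the clustering purity measure reported in Table 2.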
The classifier extracts uni-, bi- and tri-gram features from the corpora, keeping the 15K most frequent features and leaving other parameters at their defaults.
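The n-gram vocabulary selection can be sketched as follows. The paper uses scikit-learn's vectorizer; this stdlib-only version mirrors the idea (count word uni-, bi- and tri-grams, keep the most frequent ones) rather than the exact implementation.

```python
from collections import Counter

def top_ngram_features(documents, max_features=15000):
    """Count uni-, bi- and tri-grams over a corpus and keep the
    `max_features` most frequent as the classifier's vocabulary."""
    counts = Counter()
    for doc in documents:
        tokens = doc.lower().split()
        for n in (1, 2, 3):
            for i in range(len(tokens) - n + 1):
                counts[" ".join(tokens[i:i + n])] += 1
    return [gram for gram, _ in counts.most_common(max_features)]

docs = ["the food was great", "the service was great"]
vocab = top_ngram_features(docs, max_features=5)
```

Documents are then represented by counts over this vocabulary and fed to the logistic regression classifier.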
The classifier loads the "bert-base-uncased" pre-trained BERT model for English, encodes each document into a fixed-length (768) vector, and feeds it to a linear layer for prediction. We fine-tune for 10 epochs with a batch size of 32 and optimize the model with AdamW at a learning rate of 9e-5. We show the performance results in Table 3. Compared with the baselines, the classifiers personalized by our proposed model generally achieve the best performance across the three datasets. This highlights that adapting user factors can help embedding models learn user variations and benefit classification performance. We can also observe that the personalized classifiers generally outperform the non-personalized classifiers, indicating that personalizing classifiers with user history boosts classification performance in our study.
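The BERT fine-tuning setup above can be summarized as a small config fragment. The values are taken from the text; the dictionary key names are illustrative, not from the released code.

```python
# Fine-tuning configuration for the BERT-based classifier (values from the
# paper; key names are ours).
bert_finetune_config = {
    "pretrained_model": "bert-base-uncased",
    "doc_vector_dim": 768,   # fixed-length document encoding
    "epochs": 10,
    "batch_size": 32,
    "optimizer": "AdamW",
    "learning_rate": 9e-5,
}
```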
In the right plot, we can also observe a cluster at the bottom right that mixes multiple colors.
However, in this work, the goal of modeling user factors is to train robust user embeddings via domain adaptation, rather than demographic attribute prediction or document classification as the end goal itself.
'-u' denotes classifiers personalized with user2vec (Amir et al., 2017), and '-up' denotes classifiers personalized via our proposed method. We use bold fonts to highlight the best performance of each classifier on each dataset. personalizes classifiers with two optimization tasks, sentiment classification and user social relation minimization, which allows the classifiers to minimize the impact of user community variations. This work personalizes classifiers in a different way: we train user embedding models under a multitask learning framework and use the personalized classifiers to evaluate the user embedding models.
We release our source code and instructions for data access at https://github.com/xiaoleihuang/UserEmbedding.
The first author thanks the JHU CLSP cluster for computational support.
Wallace, Hao Lyu, Paula Carvalho, and M\u00e1rio J. Silva. 2016. Modelling context with user embeddings for sarcasm detection in social media. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 167-177, Berlin, Germany. Association for Computational Linguistics.
Association for Computational Linguistics.
Association for Computational Linguistics.
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1650-1659.
In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, Doha, Qatar. Association for Computational Linguistics.
Bickel, and Shimei Pan. 2017. Multi-view unsupervised user feature embedding for social media-based substance use prediction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2275-2284, Copenhagen, Denmark. Association for Computational Linguistics.
In Proceedings of the ACL, pages 7828-7838.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Pandora talks: Personality and demographics on reddit", "authors": [ { "first": "Matej", "middle": [], "last": "Gjurkovi\u0107", "suffix": "" }, { "first": "Mladen", "middle": [], "last": "Karan", "suffix": "" }, { "first": "Iva", "middle": [], "last": "Vukojevi\u0107", "suffix": "" }, { "first": "Mihaela", "middle": [], "last": "Bo\u0161njak", "suffix": "" }, { "first": "Jan\u0161najder", "middle": [], "last": "", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.04460" ] }, "num": null, "urls": [], "raw_text": "Matej Gjurkovi\u0107, Mladen Karan, Iva Vukojevi\u0107, Mi- haela Bo\u0161njak, and Jan\u0160najder. 2020. Pandora talks: Personality and demographics on reddit. arXiv preprint arXiv:2004.04460.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Jnet: Learning user representations via joint network embedding and topic embedding", "authors": [ { "first": "Lin", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Weihao", "middle": [], "last": "Song", "suffix": "" }, { "first": "Hongning", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 13th International Conference on Web Search and Data Mining, WSDM '20", "volume": "", "issue": "", "pages": "205--213", "other_ids": { "DOI": [ "10.1145/3336191.3371770" ] }, "num": null, "urls": [], "raw_text": "Lin Gong, Lu Lin, Weihao Song, and Hongning Wang. 2020. Jnet: Learning user representations via joint network embedding and topic embedding. In Pro- ceedings of the 13th International Conference on Web Search and Data Mining, WSDM '20, page 205-213, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit", "authors": [ { "first": "H R", "middle": [], "last": "Richard", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Hahnloser", "suffix": "" }, { "first": "Misha", "middle": [ "A" ], "last": "Sarpeshkar", "suffix": "" }, { "first": "", "middle": [], "last": "Mahowald", "suffix": "" }, { "first": "J", "middle": [], "last": "Rodney", "suffix": "" }, { "first": "H Sebastian", "middle": [], "last": "Douglas", "suffix": "" }, { "first": "", "middle": [], "last": "Seung", "suffix": "" } ], "year": 2000, "venue": "Nature", "volume": "405", "issue": "6789", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard H R Hahnloser, Rahul Sarpeshkar, Misha A Mahowald, Rodney J Douglas, and H Sebastian Se- ung. 2000. Digital selection and analogue amplifica- tion coexist in a cortex-inspired silicon circuit. Na- ture, 405(6789):947.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering", "authors": [ { "first": "Ruining", "middle": [], "last": "He", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Mcauley", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 25th International Conference on World Wide Web (WWW)", "volume": "3", "issue": "", "pages": "507--517", "other_ids": { "DOI": [ "10.1145/2872427.2883037" ] }, "num": null, "urls": [], "raw_text": "Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web (WWW), volume 3, pages 507-517. 
International World Wide Web Conferences Steering Committee.
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Deep structure learning for rumor detection on twitter", "authors": [ { "first": "Qi", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Chuan", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mingwen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "2019 International Joint Conference on Neural Networks (IJCNN)", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Huang, Chuan Zhou, Jia Wu, Mingwen Wang, and Bin Wang. 2019. Deep structure learning for ru- mor detection on twitter. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural User Factor Adaptation for Text Classification: Learning to Generalize Across Author Demographics", "authors": [ { "first": "Xiaolei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Paul", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*{SEM} 2019)", "volume": "4", "issue": "", "pages": "136--146", "other_ids": { "DOI": [ "10.18653/v1/S19-1015" ] }, "num": null, "urls": [], "raw_text": "Xiaolei Huang and Michael J. Paul. 2019. Neural User Factor Adaptation for Text Classification: Learning to Generalize Across Author Demographics. In Pro- ceedings of the Eighth Joint Conference on Lexi- cal and Computational Semantics (*{SEM} 2019), volume 4, pages 136-146, Minneapolis, Minnesota. 
Association for Computational Linguistics.
American Psychologist, 70(6):543.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "Proceedings of Machine Learning Research", "volume": "32", "issue": "", "pages": "1188--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of Machine Learning Research, volume 32, pages 1188-1196, Beijing, China. PMLR.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Document-level multi-aspect sentiment classification by jointly modeling users, aspects, and overall ratings", "authors": [ { "first": "Junjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haitong", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "925--936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junjie Li, Haitong Yang, and Chengqing Zong. 2018. Document-level multi-aspect sentiment classification by jointly modeling users, aspects, and overall ratings. In Proceedings of the 27th International Conference on Computational Linguistics, pages 925-936, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "langid.py: An off-the-shelf language identification tool", "authors": [ { "first": "Marco", "middle": [], "last": "Lui", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the ACL 2012 System Demonstrations", "volume": "", "issue": "", "pages": "25--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Proceedings of the ACL 2012 System Demonstrations, pages 25-30, Jeju Island, Korea. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Hierarchical modeling for user personality prediction: The role of message-level attention", "authors": [ { "first": "Veronica", "middle": [], "last": "Lynn", "suffix": "" }, { "first": "Niranjan", "middle": [], "last": "Balasubramanian", "suffix": "" }, { "first": "H", "middle": [ "Andrew" ], "last": "Schwartz", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5306--5316", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.472" ] }, "num": null, "urls": [], "raw_text": "Veronica Lynn, Niranjan Balasubramanian, and H. Andrew Schwartz. 2020. Hierarchical modeling for user personality prediction: The role of message-level attention. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5306-5316, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Tweet classification without the tweet: An empirical examination of user versus document attributes", "authors": [ { "first": "Veronica", "middle": [], "last": "Lynn", "suffix": "" }, { "first": "Salvatore", "middle": [], "last": "Giorgi", "suffix": "" }, { "first": "Niranjan", "middle": [], "last": "Balasubramanian", "suffix": "" }, { "first": "H", "middle": [ "Andrew" ], "last": "Schwartz", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science", "volume": "", "issue": "", "pages": "18--28", "other_ids": { "DOI": [ "10.18653/v1/W19-2103" ] }, "num": null, "urls": [], "raw_text": "Veronica Lynn, Salvatore Giorgi, Niranjan Balasubramanian, and H. Andrew Schwartz. 2019. Tweet classification without the tweet: An empirical examination of user versus document attributes. In Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science, pages 18-28, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Human centered NLP with user-factor adaptation", "authors": [ { "first": "Veronica", "middle": [], "last": "Lynn", "suffix": "" }, { "first": "Youngseo", "middle": [], "last": "Son", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Niranjan", "middle": [], "last": "Balasubramanian", "suffix": "" }, { "first": "H", "middle": [ "Andrew" ], "last": "Schwartz", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1146--1155", "other_ids": { "DOI": [ "10.18653/v1/D17-1119" ] }, "num": null, "urls": [], "raw_text": "Veronica Lynn, Youngseo Son, Vivek Kulkarni, Niranjan Balasubramanian, and H. Andrew Schwartz.
2017. Human centered NLP with user-factor adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1146-1155.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119, USA. Curran Associates Inc.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Unifying text, metadata, and user network representations with a neural network for geolocation prediction", "authors": [ { "first": "Yasuhide", "middle": [], "last": "Miura", "suffix": "" }, { "first": "Motoki", "middle": [], "last": "Taniguchi", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Taniguchi", "suffix": "" }, { "first": "Tomoko", "middle": [], "last": "Ohkuma", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1260--1272", "other_ids": { "DOI": [ "10.18653/v1/P17-1116" ] }, "num": null, "urls": [], "raw_text": "Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, and Tomoko Ohkuma. 2017.
Unifying text, metadata, and user network representations with a neural network for geolocation prediction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1260-1272, Vancouver, Canada.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Modeling personal biases in language use by inducing personalized word embeddings", "authors": [ { "first": "Daisuke", "middle": [], "last": "Oba", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Yoshinaga", "suffix": "" }, { "first": "Shoetsu", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Akasaki", "suffix": "" }, { "first": "Masashi", "middle": [], "last": "Toyoda", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2102--2108", "other_ids": { "DOI": [ "10.18653/v1/N19-1215" ] }, "num": null, "urls": [], "raw_text": "Daisuke Oba, Naoki Yoshinaga, Shoetsu Sato, Satoshi Akasaki, and Masashi Toyoda. 2019. Modeling personal biases in language use by inducing personalized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2102-2108, Minneapolis, Minnesota.
Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Social media-based user embedding: A literature review", "authors": [ { "first": "Shimei", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Ding", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19", "volume": "", "issue": "", "pages": "6318--6324", "other_ids": { "DOI": [ "10.24963/ijcai.2019/881" ] }, "num": null, "urls": [], "raw_text": "Shimei Pan and Tao Ding. 2019. Social media-based user embedding: A literature review. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 6318-6324. International Joint Conferences on Artificial Intelligence Organization.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "Fabian", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "Gael", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Bertrand", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "Jake", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Passos", "suffix": "" }, { "first": "David", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "\u00c9douard", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and \u00c9douard Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825-2830.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A machine learning approach to twitter user classification", "authors": [ { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Ana-Maria", "middle": [], "last": "Popescu", "suffix": "" } ], "year": 2011, "venue": "International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Pennacchiotti and Ana-Maria Popescu. 2011. A machine learning approach to twitter user classification. In International AAAI Conference on Web and Social Media.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Software Framework for Topic Modelling with Large Corpora", "authors": [ { "first": "Radim", "middle": [], "last": "Rehurek", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim Rehurek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50.
ELRA.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Age prediction in blogs: A study of style, content, and online behavior in pre- and post-social media generations", "authors": [ { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "McKeown", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "763--772", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Rosenthal and Kathleen McKeown. 2011. Age prediction in blogs: A study of style, content, and online behavior in pre- and post-social media generations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 763-772.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Evaluation methods for unsupervised word embeddings", "authors": [ { "first": "Tobias", "middle": [], "last": "Schnabel", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Labutov", "suffix": "" }, { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "298--307", "other_ids": { "DOI": [ "10.18653/v1/D15-1036" ] }, "num": null, "urls": [], "raw_text": "Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298-307, Lisbon, Portugal.
Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Dropout: a simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "The Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Learning semantic representations of users and products for document level sentiment classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1014--1023", "other_ids": { "DOI": [ "10.3115/v1/P15-1098" ] }, "num": null, "urls": [], "raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2015. Learning semantic representations of users and products for document level sentiment classification.
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1014-1023.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Inferring Latent User Properties from Texts Published in Social Media", "authors": [ { "first": "Svitlana", "middle": [], "last": "Volkova", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Bachrach", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Armstrong", "suffix": "" }, { "first": "Vijay", "middle": [], "last": "Sharma", "suffix": "" } ], "year": 2015, "venue": "AAAI Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Svitlana Volkova, Yoram Bachrach, Michael Armstrong, and Vijay Sharma. 2015. Inferring Latent User Properties from Texts Published in Social Media. In AAAI Conference on Artificial Intelligence (AAAI), Austin, TX.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Exploring demographic language variations to improve multilingual sentiment analysis in social media", "authors": [ { "first": "Svitlana", "middle": [], "last": "Volkova", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2013, "venue": "EMNLP 2013 - 2013 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference", "volume": "", "issue": "", "pages": "1815--1827", "other_ids": {}, "num": null, "urls": [], "raw_text": "Svitlana Volkova, Theresa Wilson, and David Yarowsky. 2013. Exploring demographic language variations to improve multilingual sentiment analysis in social media. In EMNLP 2013 - 2013 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, pages 1815-1827, Seattle, Washington, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Cross-media user profiling with joint textual and social user embedding", "authors": [ { "first": "Jingjing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shoushan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Mingqi", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Hanqian", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1410--1420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingjing Wang, Shoushan Li, Mingqi Jiang, Hanqian Wu, and Guodong Zhou. 2018. Cross-media user profiling with joint textual and social user embedding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1410-1420, Santa Fe, New Mexico, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing.
ArXiv, abs/1910.03771.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "How does Twitter user behavior vary across demographic groups?", "authors": [ { "first": "Zach", "middle": [], "last": "Wood-Doughty", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Smith", "suffix": "" }, { "first": "David", "middle": [], "last": "Broniatowski", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Second Workshop on NLP and Computational Social Science", "volume": "", "issue": "", "pages": "83--89", "other_ids": { "DOI": [ "10.18653/v1/W17-2912" ] }, "num": null, "urls": [], "raw_text": "Zach Wood-Doughty, Michael Smith, David Broniatowski, and Mark Dredze. 2017. How does Twitter user behavior vary across demographic groups? In Proceedings of the Second Workshop on NLP and Computational Social Science, pages 83-89.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Using noisy self-reports to predict twitter user demographics", "authors": [ { "first": "Zach", "middle": [], "last": "Wood-Doughty", "suffix": "" }, { "first": "Paiheng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.00635" ] }, "num": null, "urls": [], "raw_text": "Zach Wood-Doughty, Paiheng Xu, Xiao Liu, and Mark Dredze. 2020. Using noisy self-reports to predict twitter user demographics.
arXiv preprint arXiv:2005.00635.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Personalized microblog sentiment classification via multitask learning", "authors": [ { "first": "Fangzhao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yongfeng", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16", "volume": "", "issue": "", "pages": "3059--3065", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fangzhao Wu and Yongfeng Huang. 2016. Personalized microblog sentiment classification via multitask learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 3059-3065. AAAI Press.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Improving review representations with user attention and product attention for sentiment classification", "authors": [ { "first": "Zhen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xin-Yu", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Cunyan", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1801.07861" ] }, "num": null, "urls": [], "raw_text": "Zhen Wu, Xin-Yu Dai, Cunyan Yin, Shujian Huang, and Jiajun Chen. 2018. Improving review representations with user attention and product attention for sentiment classification.
arXiv preprint arXiv:1801.07861.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Incorporating metadata into content-based user embeddings", "authors": [ { "first": "Linzi", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Paul", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text", "volume": "", "issue": "", "pages": "45--49", "other_ids": { "DOI": [ "10.18653/v1/W17-4406" ] }, "num": null, "urls": [], "raw_text": "Linzi Xing and Michael J. Paul. 2017. Incorporating metadata into content-based user embeddings. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 45-49.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Overcoming language variation in sentiment analysis with social attention", "authors": [ { "first": "Yi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "295--307", "other_ids": { "DOI": [ "10.1162/tacl_a_00062" ] }, "num": null, "urls": [], "raw_text": "Yi Yang and Jacob Eisenstein. 2017. Overcoming language variation in sentiment analysis with social attention. Transactions of the Association for Computational Linguistics, 5:295-307.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Yelp Dataset Challenge", "authors": [ { "first": "", "middle": [], "last": "Yelp", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yelp. 2018.
Yelp Dataset Challenge.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Neural review rating prediction with user and product memory", "authors": [ { "first": "Zhigang", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Fangzhao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Junxin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chuhan", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yongfeng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Xie", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19", "volume": "", "issue": "", "pages": "2341--2344", "other_ids": { "DOI": [ "10.1145/3357384.3358138" ] }, "num": null, "urls": [], "raw_text": "Zhigang Yuan, Fangzhao Wu, Junxin Liu, Chuhan Wu, Yongfeng Huang, and Xing Xie. 2019. Neural review rating prediction with user and product memory. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19, pages 2341-2344, New York, NY, USA.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Joint effects of context and user history for predicting online conversation re-entries", "authors": [ { "first": "Xingshan", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "2809--2818", "other_ids": { "DOI": [ "10.18653/v1/P19-1270" ] }, "num": null, "urls": [], "raw_text": "Xingshan Zeng, Jing Li, Lu Wang, and Kam-Fai Wong. 2019. Joint effects of context and user history for predicting online conversation re-entries.
In Proceedings of the ACL, pages 2809-2818.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Using linguistic features to estimate suicide probability of Chinese microblog users", "authors": [ { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaolei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Tianli", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhenxiang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Tingshao", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2015, "venue": "Human Centered Computing", "volume": "", "issue": "", "pages": "549--559", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Zhang, Xiaolei Huang, Tianli Liu, Ang Li, Zhenxiang Chen, and Tingshao Zhu. 2015. Using linguistic features to estimate suicide probability of Chinese microblog users. In Human Centered Computing, pages 549-559, Cham. Springer International Publishing.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Predicting author age from Weibo microblog posts", "authors": [ { "first": "Wanru", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Caines", "suffix": "" }, { "first": "Dimitrios", "middle": [], "last": "Alikaniotis", "suffix": "" }, { "first": "Paula", "middle": [], "last": "Buttery", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016", "volume": "", "issue": "", "pages": "2990--2997", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wanru Zhang, Andrew Caines, Dimitrios Alikaniotis, and Paula Buttery. 2016. Predicting author age from Weibo microblog posts.
In Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016, pages 2990-2997.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Visualizations of IMDb users colored according to their interests in 4 movie genres. We plot users using the embeddings from our proposed method (right) and user2vec (Amir et al., 2017) (left). The visualizations of Yelp and Amazon are omitted for reasons of space.", "uris": null, "type_str": "figure" }, "TABREF1": { "content": "
Pairwise scores between user groups 0-3, recovered from flattened heatmap panels (the third panel's header was lost in extraction); each row lists a group's scores against groups 0, 1, 2, 3:
Amazon_health -- 0: 1.00 0.70 0.62 0.61 | 1: 0.70 1.00 0.63 0.62 | 2: 0.62 0.63 1.00 0.81 | 3: 0.61 0.62 0.81 1.00
Imdb -- 0: 1.00 0.86 0.89 0.85 | 1: 0.86 1.00 0.85 0.84 | 2: 0.89 0.85 1.00 0.88 | 3: 0.85 0.84 0.88 1.00
Third panel -- 0: 1.00 0.78 0.77 0.70 | 1: 0.78 1.00 0.79 0.65 | 2: 0.77 0.79 1.00 0.65 | 3: 0.70 0.65 0.65 1.00
", "num": null, "type_str": "table", "text": "Statistical summary of the Amazon, Yelp and IMDb review datasets. Amazon-Health refers to health-related reviews. Tokens denotes the average number of tokens per document. We present the data split for the evaluation task of text classification on the right side.", "html": null }, "TABREF2": { "content": "
Document classification accuracy across user groups 0-3 (Figure 2), recovered from flattened heatmap panels; rows and columns index the two user groups involved in training and testing (axis orientation not fully recoverable), with each row listing scores against groups 0, 1, 2, 3:
Amazon_health -- 0: 72.47 74.73 71.31 74.96 | 1: 74.73 78.46 76.32 75.71 | 2: 71.31 76.32 79.88 78.61 | 3: 74.96 75.71 78.61 79.66
Imdb -- 0: 79.87 77.31 76.78 76.42 | 1: 77.31 77.63 77.63 74.39 | 2: 76.78 78.89 81.12 78.89 | 3: 76.42 74.39 76.40 76.91
Yelp -- 0: 86.54 86.26 82.60 83.01 | 1: 86.26 87.01 85.01 82.46 | 2: 82.60 85.01 88.79 80.41 | 3: 83.01 82.46 80.41 81.62
Figure 2:
", "num": null, "type_str": "table", "text": "Document classification performance when training and testing on different groups of users. The datasets come from Amazon health, IMDb and Yelp reviews. Darker red indicates better classification performance, while darker blue means worse performance.", "html": null } } } }