{ "paper_id": "P14-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:07:03.787243Z" }, "title": "Weakly Supervised User Profile Extraction from Twitter", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "bdlijiwei@gmail.com" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": { "postCode": "15213", "settlement": "Pittsburgh", "region": "PA", "country": "USA" } }, "email": "rittera@cs.cmu.edu" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "", "affiliation": {}, "email": "ehovy@andrew.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "While user attribute extraction on social media has received considerable attention, existing approaches, mostly supervised, encounter great difficulty in obtaining gold standard data and are therefore limited to predicting unary predicates (e.g., gender). In this paper, we present a weaklysupervised approach to user profile extraction from Twitter. Users' profiles from social media websites such as Facebook or Google Plus are used as a distant source of supervision for extraction of their attributes from user-generated text. In addition to traditional linguistic features used in distant supervision for information extraction, our approach also takes into account network information, a unique opportunity offered by social media. We test our algorithm on three attribute domains: spouse, education and job; experimental results demonstrate our approach is able to make accurate predictions for users' attributes based on their tweets. 1", "pdf_parse": { "paper_id": "P14-1016", "_pdf_hash": "", "abstract": [ { "text": "While user attribute extraction on social media has received considerable attention, existing approaches, mostly supervised, encounter great difficulty in obtaining gold standard data and are therefore limited to predicting unary predicates (e.g., gender). In this paper, we present a weaklysupervised approach to user profile extraction from Twitter. Users' profiles from social media websites such as Facebook or Google Plus are used as a distant source of supervision for extraction of their attributes from user-generated text. In addition to traditional linguistic features used in distant supervision for information extraction, our approach also takes into account network information, a unique opportunity offered by social media. We test our algorithm on three attribute domains: spouse, education and job; experimental results demonstrate our approach is able to make accurate predictions for users' attributes based on their tweets. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The overwhelming popularity of online social media creates an opportunity to display given aspects of oneself. Users' profile information in social networking websites such as Facebook 2 or Google Plus 3 provides a rich repository personal information in a structured data format, making it amenable to automatic processing. This includes, for example, users' jobs and education, and provides a useful source of information for applications such as search 4 , friend recommendation, on-@ [shanenicholson] has taken all the kids today so I can go shopping-CHILD FREE! 
#iloveyoushano #iloveyoucreditcard Tamworth line advertising, computational social science and more.", "cite_spans": [ { "start": 488, "end": 504, "text": "[shanenicholson]", "ref_id": null } ], "ref_spans": [ { "start": 602, "end": 610, "text": "Tamworth", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although profiles exist in an easy-to-use, structured data format, they are often sparsely populated; users rarely fully complete their online profiles. Additionally, some social networking services such as Twitter don't support this type of structured profile data. It is therefore difficult to obtain a reasonably comprehensive profile of a user, or a reasonably complete facet of information (say, education level) for a class of users. While many users do not explicitly list all their personal information in their online profile, their user generated content often contains strong evidence to suggest many types of user attributes, for example education, spouse, and employment (See Table 1 ). Can one use such information to infer more details? In particular, can one exploit indirect clues from an unstructured data source like Twitter to obtain rich, structured user profiles?", "cite_spans": [], "ref_spans": [ { "start": 689, "end": 696, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we demonstrate that it is feasible to automatically extract Facebook-style pro-files directly from users' tweets, thus making user profile data available in a structured format for upstream applications. We view user profile inference as a structured prediction task where both text and network information are incorporated. Concretely, we cast user profile prediction as binary relation extraction (Brin, 1999) , e.g., SPOUSE(User i , User j ), EDUCATION(User i , Entity j ) and EMPLOYER(User i , Entity j ). Inspired by the concept of distant supervision, we collect training tweets by matching attribute ground truth from an outside \"knowledge base\" such as Facebook or Google Plus.", "cite_spans": [ { "start": 413, "end": 425, "text": "(Brin, 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One contribution of the work presented here is the creation of the first large-scale dataset on three general Twitter user profile domains (i.e., EDUCA-TION, JOB, SPOUSE). Experiments demonstrate that by simultaneously harnessing both text features and network information, our approach is able to make accurate user profile predictions. We are optimistic that our approach can easily be applied to further user attributes such as HOBBIES and INTERESTS (MOVIES, BOOKS, SPORTS or STARS), RELIGION, HOMETOWN, LIVING LOCA-TION, FAMILY MEMBERS and so on, where training data can be obtained by matching ground truth retrieved from multiple types of online social media such as Facebook, Google Plus, or LinkedIn. 
Our contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We cast user profile prediction as an information extraction task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We present a large-scale dataset for this task gathered from various structured and unstructured social media sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We demonstrate the benefit of jointly reasoning about users' social network structure when extracting their profiles from text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We experimentally demonstrate the effectiveness of our approach on 3 relations: SPOUSE, JOB and EDUCATION.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows: We summarize related work in Section 2. The creation of our dataset is described in Section 3. The details of our model are presented in Section 4. We present experimental results in Section 5 and conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While user profile inference from social media has received considerable attention Rao et al., 2011) , most previous work has treated this as a classification task where the goal is to predict unary predicates describing attributes of the user. Examples include gender (Ciot et al., 2013; Liu and Ruths, 2013; , age , or political polarity (Pennacchiotti and Popescu, 2011; Conover et al., 2011) .", "cite_spans": [ { "start": 83, "end": 100, "text": "Rao et al., 2011)", "ref_id": "BIBREF21" }, { "start": 269, "end": 288, "text": "(Ciot et al., 2013;", "ref_id": "BIBREF3" }, { "start": 289, "end": 309, "text": "Liu and Ruths, 2013;", "ref_id": "BIBREF11" }, { "start": 340, "end": 373, "text": "(Pennacchiotti and Popescu, 2011;", "ref_id": "BIBREF17" }, { "start": 374, "end": 395, "text": "Conover et al., 2011)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A significant challenge that has limited previous efforts in this area is the lack of available training data. For example, researchers obtain training data by employing workers from Amazon Mechanical Turk to manually identify users' gender from profile pictures (Ciot et al., 2013) . This approach is appropriate for attributes such as gender with a small numbers of possible values (e.g., male or female), for which the values can be directly identified. However for attributes such as spouse or education there are many possible values, making it impossible to manually search for gold standard answers within a large number of tweets which may or may not contain sufficient evidence.", "cite_spans": [ { "start": 263, "end": 282, "text": "(Ciot et al., 2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Also related is the Twitter user timeline extraction algorithm of Li and Cardie (2013) . 
This work is not focused on user attribute extraction, however.", "cite_spans": [ { "start": 66, "end": 86, "text": "Li and Cardie (2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Distant Supervision Distant supervision, also known as weak supervision, is a method for learning to extract relations from text using ground truth from an existing database as a source of supervision. Rather than relying on mentionlevel annotations, which are expensive and time consuming to generate, distant supervision leverages readily available structured data sources as a weak source of supervision for relation extraction from related text corpora (Craven et al., 1999) . For example, suppose r(e 1 , e 2 ) = IsIn(P aris, F rance) is a ground tuple in the database and s =\"Paris is the capital of France\" contains synonyms for both \"Paris\" and \"France\", then we assume that s may express the fact r(e 1 , e 2 ) in some way and can be used as positive training examples. In addition to the wide use in text entity relation extraction (Mintz et al., 2009; Ritter et al., 2013; Hoffmann et al., 2011; Surdeanu et al., 2012; Takamatsu et al., 2012) , distant supervision has been applied to multiple fields such as protein relation extraction (Craven et al., 1999; Ravikumar et al., 2012) , event extraction from Twitter (Benson et al., 2011) , sentiment analysis (Go et al., 2009) and Wikipedia infobox generation (Wu and Weld, 2007) .", "cite_spans": [ { "start": 457, "end": 478, "text": "(Craven et al., 1999)", "ref_id": "BIBREF5" }, { "start": 842, "end": 862, "text": "(Mintz et al., 2009;", "ref_id": "BIBREF14" }, { "start": 863, "end": 883, "text": "Ritter et al., 2013;", "ref_id": "BIBREF24" }, { "start": 884, "end": 906, "text": "Hoffmann et al., 2011;", "ref_id": "BIBREF8" }, { "start": 907, "end": 929, "text": "Surdeanu et al., 2012;", "ref_id": "BIBREF25" }, { "start": 930, "end": 953, "text": "Takamatsu et al., 2012)", "ref_id": "BIBREF26" }, { "start": 1048, "end": 1069, "text": "(Craven et al., 1999;", "ref_id": "BIBREF5" }, { "start": 1070, "end": 1093, "text": "Ravikumar et al., 2012)", "ref_id": null }, { "start": 1126, "end": 1147, "text": "(Benson et al., 2011)", "ref_id": "BIBREF1" }, { "start": 1169, "end": 1186, "text": "(Go et al., 2009)", "ref_id": "BIBREF6" }, { "start": 1220, "end": 1239, "text": "(Wu and Weld, 2007)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Homophily Online social media offers a rich source of network information. McPherson et al. (2001) discovered that people sharing more attributes such as background or hobby have a higher chance of becoming friends in social media. This property, known as HOMOPHILY (summarized by the proverb \"birds of a feather flock together\") (Al has been widely applied to community detection (Yang and Leskovec, 2013) and friend recommendation (Guy et al., 2010) on social media. In the user attribute extraction literature, researchers have considered neighborhood context to boost inference accuracy (Pennacchiotti and Popescu, 2011; , where information about the degree of their connectivity to their pre-labeled users is included in the feature vectors. A related algorithm by Mislove et al. (2010) crawled Facebook profiles of 4,000 Rice University students and alumni and inferred attributes such as major and year of matriculation purely based on network information. 
Mislove's work does not consider the users' text stream, however. As we demonstrate below, relying solely on network information is not enough to enable inference about attributes.", "cite_spans": [ { "start": 75, "end": 98, "text": "McPherson et al. (2001)", "ref_id": "BIBREF13" }, { "start": 381, "end": 406, "text": "(Yang and Leskovec, 2013)", "ref_id": "BIBREF28" }, { "start": 433, "end": 451, "text": "(Guy et al., 2010)", "ref_id": "BIBREF7" }, { "start": 591, "end": 624, "text": "(Pennacchiotti and Popescu, 2011;", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We now describe the generation of our distantly supervised training dataset in detail. We make use of Google Plus and Freebase to obtain ground facts and extract positive/negative bags of postings from users' twitter streams according to the ground facts. Education/Job We first used the Google Plus API 5 (shown in Figure 1 ) to obtain a seed set of users whose profiles contain both their education/job status and a link to their twitter account. 6 Then, we fetched tweets containing the mention of the education/job entity from each correspondent user's twitter stream using Twitter's search API 7 (shown in Figure 2 ) and used them to construct positive bags of tweets expressing the associated attribute, namely EDUCATION(User i , Entity j ), or EMPLOYER(User i , Entity j ). The Freebase API 8 is employed for alias recognition, to match terms such as \"Harvard University\", \"Harvard\", \"Harvard U\" to a single The remainder of each corresponding user's entire Twitter feed is used as negative training data. 9 We expanded our dataset from the seed users according to network information provided by Google Plus and Twitter. Concretely, we crawled circle information of users in the seed set from both their Twitter and Google Plus accounts and performed a matching to pick out shared users between one's Twitter follower list and Google Plus Circle. This process assures friend identity and avoids the problem of name ambiguity when matching accounts across websites. Among candidate users, those who explicitly display Job or Education information on Google Plus are preserved. We then gathered positive and negative data as described above.", "cite_spans": [ { "start": 449, "end": 450, "text": "6", "ref_id": null }, { "start": 1013, "end": 1014, "text": "9", "ref_id": null } ], "ref_spans": [ { "start": 316, "end": 324, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 611, "end": 619, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Dataset Creation", "sec_num": "3" }, { "text": "Dataset statistics are presented in SPOUSE is an exception to the \"homophily\" effect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Creation", "sec_num": "3" }, { "text": "But it exhibits another unique property, known as, REFLEXIVITY: fact IsSpouseOf (e 1 , e 2 ) and IsSpouseOf (e 2 , e 1 ) will hold or not hold at the same time. Given training data expressing the tuple IsSpouseOf (e 1 , e 2 ) from user e 1 's twitter stream, we also gather user e 2 's tweet collection, and fetch tweets with the mention of e 1 . We augment negative training data from e 2 as in the case of Education and Job. Our Spouse dataset contains 1,636 users, where there are 554 couples (1108 users). 
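To make the bag construction concrete, the following Python sketch illustrates the distant-supervision matching step described above. The helper name and the simple substring matching are illustrative only (our pipeline relies on the Freebase API for alias recognition and the Twitter search API for retrieval); this is not the released code.

def label_tweets(tweets, entity, aliases):
    # Tweets that mention the ground-truth entity (or any of its aliases) form
    # the positive bag for the corresponding attribute; the remainder of the
    # user's timeline is used as negative data.
    surface_forms = {entity.lower()} | {a.lower() for a in aliases}
    positive = [t for t in tweets if any(s in t.lower() for s in surface_forms)]
    negative = [t for t in tweets if t not in positive]
    return positive, negative

# Example: EDUCATION(user, "Harvard University"), aliases taken from Freebase.
pos, neg = label_tweets(["Midterms at Harvard U this week", "Lovely weather today"],
                        "Harvard University", ["Harvard", "Harvard U"])
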
Note that the number of positive entities (3,121) is greater than the number of users as (1) one user can have multiple spouses at different periods of time (2) multiple entities may point to the same individual, e.g., BarackObama, Barack Obama and Barack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Creation", "sec_num": "3" }, { "text": "We now describe our approach to predicting user profile attributes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Message X: Each user i \u2208 [1, I] is associated with his Twitter ID and his tweet corpus Tweet Collection L e i : L e i denotes the collection of postings containing the mention of entity e from user i. L e i \u2282 X i . Entity attribute indicator z k i,e and z k i,x : For each entity e \u2208 X i , there is a boolean variable z k i,e , indicating whether entity e expresses attribute k of user i. Each posting x \u2208 L e i is associated with attribute indicator z k i,x indicating whether posting x expresses attribute k of user i. z k i,e and z k i,x are observed during training and latent during testing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation", "sec_num": "4.1" }, { "text": "X i . X i is comprised of a collection of tweets X i = {x i,j } j=N i j=1 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation", "sec_num": "4.1" }, { "text": "Neighbor set F k i : F k i denotes the neighbor set of user i. For Education (k = 0) and Job (k = 1), F k i denotes the group of users within the network that are in friend relation with user i. For Spouse attribute, F k i denote current user's spouse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation", "sec_num": "4.1" }, { "text": "The distant supervision assumes that if entity e corresponds to an attribute for user i, at least one posting from user i's Twitter stream containing a mention of e might express that attribute. For userlevel attribute prediction, we adopt the following two strategies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "(1) GLOBAL directly makes aggregate (entity) level prediction for z k i,e , where features for all tweets from L e i are aggregated to one vector for training and testing, following Mintz et al. (2009) .", "cite_spans": [ { "start": 182, "end": 201, "text": "Mintz et al. (2009)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "(2) LOCAL makes local tweet-level predictions for each tweet z e i,x , x \u2208 L k i in the first place, making the stronger assumption that all mentions of an entity in the users' profile are expressing the associated attribute. An aggregate-level decision z k i,e is then made from the deterministic OR operators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "z e i,x = 1 \u2203x \u2208 L e i , s.t.z k i,x = 1 0 Otherwise (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "The rest of this paper describes GLOBAL in detail. The model and parameters with LOCAL are identical to those in GLOBAL except that LOCAL encode a tweet-level feature vector rather than an aggregate one. They are therefore excluded for brevity. 
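As a concrete illustration of the two strategies, the sketch below (illustrative names only) shows the deterministic-OR aggregation of Equation 1 used by LOCAL, and a simple feature-pooling stand-in for the aggregate vector used by GLOBAL; the actual feature templates are described in the following paragraphs.

def aggregate_local(tweet_level_labels):
    # Eq. (1): z^k_{i,e} = 1 if at least one tweet in L^e_i is predicted to
    # express attribute k for user i (deterministic OR), and 0 otherwise.
    return 1 if any(tweet_level_labels) else 0

def aggregate_global(tweet_feature_vectors):
    # GLOBAL instead pools the features of all tweets mentioning the entity
    # into a single vector and makes one entity-level prediction.
    pooled = {}
    for features in tweet_feature_vectors:      # each: dict feature -> value
        for name, value in features.items():
            pooled[name] = max(pooled.get(name, 0), value)
    return pooled                               # fed to one entity-level classifier
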
For each attribute k, we use a model that factorizes the joint distribution as product of two distributions that separately characterize text features and network information as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03a8(z k i,e , X i , F k i : \u0398) \u221d \u03a8 text (z k i,e , X i )\u03a8 N eigh (z k i,e , F k i )", "eq_num": "(2)" } ], "section": "Model", "sec_num": "4.2" }, { "text": "Text Factor We use \u03a8 text (z k e , X i ) to capture the text related features which offer attribute clues:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "\u03a8 text (z k e , , X i ) = exp[(\u0398 k text ) T \u2022 \u03c8 text (z k i,e , X i )]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "(3) The feature vector \u03c8 text (z k i,e , X i ) encodes the following standard general features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "\u2022 Entity-level: whether begins with capital letter, length of entity. \u2022 Token-level: for each token t \u2208 e, word identity, word shape, part of speech tags, name entity tags. \u2022 Conjunctive features for a window of k (k=1,2) words and part of speech tags. \u2022 Tweet-level: All tokens in the correspondent tweet. In addition to general features, we employ attribute-specific features, such as whether the entity matches a bag of words observed in the list of universities, colleges and high schools for Education attribute, whether it matches terms in a list of companies for Job attribute 12 . Lists of universities and companies are taken from knowledge base NELL 13 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "Neighbor Factor For Job and Education, we bias friends to have a larger possibility to share the same attribute. \u03a8 N eigh (z k i,e , F k i ) captures such influence from friends within the network:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03a8 N eigh (z k i,e , F k i ) = j\u2208F k i \u03a6 N eigh (z k e , X j ) \u03a6 N eigh (z k i,e , X j ) = exp[(\u0398 k N eigh ) T \u2022 \u03c8 N eigh (z k i,e , X j )]", "eq_num": "(4)" } ], "section": "Model", "sec_num": "4.2" }, { "text": "Features we explore include the whether entity e is also the correspondent attribute with neighbor user j, i.e., I(z e j,k = 0) and I(z e j,k = 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "12 Freebase is employed for alias recognition. 
13 http://rtw.ml.cmu.edu/rtw/kbbrowser/ Input: Tweet Collection {X i }, Neighbor set {F k i } Initialization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "\u2022 for each user i:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "for each candidate entity e \u2208 X i z k i,e = argmax z \u03a8(z , X i ) from text features End Initialization while not convergence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "\u2022 for each user i:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "update attribute values for j \u2208 F k i for each candidate entity e \u2208 X i z k i,e = argmax z \u03a8(z , X i , F k i ) end while: For Spouse, we set F spouse i = {e} and the neighbor factor can be rewritten as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03a8 N eigh (z k i,e , X j ) = \u03a8 N eigh (C i , X e )", "eq_num": "(5)" } ], "section": "Model", "sec_num": "4.2" }, { "text": "It characterizes whether current user C i to be the spouse of user e (if e corresponds to a Twitter user). We expect clues about whether C i being entity e's spouse from e's Twitter corpus will in turn facilitate the spouse inference procedure of user i. \u03c8 N eigh (C i , X e ) encodes I(C i \u2208 S e ), I(C i \u2208 S e ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "Features we explore also include whether C i 's twitter ID appears in e's corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.2" }, { "text": "We separately trained three classifiers regarding the three attributes. All variables are observed during training; we therefore take a feature-based approach to learning structure prediction models inspired by structure compilation (Liang et al., 2008) . In our setting, a subset of the features (those based on network information) are computed based on predictions that will need to be made at test time, but are observed during training. This simplified approach to learning avoids expensive inference; at test time, however, we still need to jointly predict the best attribute values for friends as is described in section 4.4.", "cite_spans": [ { "start": 233, "end": 253, "text": "(Liang et al., 2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.3" }, { "text": "Job and Education Our inference algorithm for Job/Education is performed on two settings, depending on whether neighbor information is observed (NEIGH-OBSERVED) or latent (NEIGH-LATENT). Real world applications, where network information can be partly retrieved from all types of social networks, can always falls in between. 
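The greedy procedure of Figure 3, used below for the NEIGH-LATENT setting, can be sketched as follows; score_text and score_with_neighbors are hypothetical stand-ins for the learned text-only and text-plus-neighbor factors, not the authors' implementation.

def greedy_inference(users, candidates, neighbors, score_text,
                     score_with_neighbors, max_iters=10):
    # Initialization: label every candidate entity from text features alone.
    z = {(u, e): score_text(u, e) > 0 for u in users for e in candidates[u]}
    for _ in range(max_iters):
        changed = False
        for u in users:
            for e in candidates[u]:
                # Re-estimate using the current predictions for u's neighbors.
                label = score_with_neighbors(u, e, z, neighbors[u]) > 0
                if label != z[(u, e)]:
                    z[(u, e)] = label
                    changed = True
        if not changed:     # no prediction flipped in a full pass: converged
            break
    return z
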
Inference in the NEIGH-OBSERVED setting is trivial; for each entity e \u2208 G i , we simply predict its candidate attribute values using Eq. 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z k i,e = argmax z \u03a8(z , X i , F k i )", "eq_num": "(6)" } ], "section": "Inference", "sec_num": "4.4" }, { "text": "In the NEIGH-LATENT setting, the attributes of each node in the network are treated as latent, and a user's attribute prediction depends on the attributes of their neighbors. The objective function for joint inference would be difficult to optimize exactly, and algorithms for doing so would be unlikely to scale to networks of the size we consider. Instead, we use a sieve-based greedy search approach to inference (shown in Figure 3 ), inspired by recent work on coreference resolution (Raghunathan et al., 2010) . Attributes are initialized using only text features, maximizing \u03a8 text (e, X i ), and ignoring network information. Then, for each user, we iteratively re-estimate their profile given both their text features and network features (computed based on the current predictions made for their friends), which provide additional evidence.", "cite_spans": [ { "start": 469, "end": 495, "text": "(Raghunathan et al., 2010)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 408, "end": 416, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Inference", "sec_num": "4.4" }, { "text": "In this way, highly confident predictions are made strictly from text in the first round; then the network can either support or contradict low-confidence predictions as more decisions are made. This process continues until no changes are made, at which point the algorithm terminates. We empirically found it to work well in practice. We expect NEIGH-OBSERVED to perform better than NEIGH-LATENT since the former benefits from gold network information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.4" }, { "text": "Spouse For Spouse inference, if candidate entity e has no corresponding Twitter account, we directly determine z k i,e = argmax z \u03a8(z , X i ) from text features. Otherwise, the inference of z k i,e depends on z k e,C i . Similarly, we initialize z k i,e and z k e,C i by maximizing the text factor, as we did for Education and Job. Then we iteratively update each z k given the remaining variables until convergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.4" }, { "text": "In this section, we present our experimental results in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Table 3: Affinity values for Education and Job. AFFINITY: Education 74.3, Job 14.5.", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 40, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Each tweet is tokenized using the Twitter NLP tool from Noah's Ark 14 , with # and @ separated from the tokens that follow them. We assume that attribute values are either named entities or terms following @ and #; a small illustrative sketch of this candidate-generation step follows. 
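A minimal sketch of that candidate-generation step, assuming pre-tokenized input and one NER tag per token (the actual NER and POS tools are named in the following sentences); the chunking here simply merges consecutive named-entity tokens.

def extract_candidates(tokens, ner_tags):
    # Candidate attribute values: terms introduced by @ or #, plus
    # named-entity chunks (consecutive tokens tagged as entities).
    candidates = [t.lstrip('@#') for t in tokens if t.startswith(('@', '#'))]
    chunk = []
    for token, tag in zip(tokens, ner_tags):
        if tag != 'O':
            chunk.append(token)
        elif chunk:
            candidates.append(' '.join(chunk))
            chunk = []
    if chunk:                   # flush an entity chunk that ends the tweet
        candidates.append(' '.join(chunk))
    return candidates

print(extract_candidates(['Midterms', 'at', 'Harvard', 'University', '#study'],
                         ['O', 'O', 'ORG', 'ORG', 'O']))   # ['study', 'Harvard University']
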
Named entities are extracted using Ritter et al.'s NER system (2011).", "cite_spans": [ { "start": 249, "end": 282, "text": "Ritter et al.'s NER system (2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing and Experiment Setup", "sec_num": "5.1" }, { "text": "Consecutive tokens with the same named entity tag are chunked (Mintz et al., 2009) . Part-of-speech tags are assigned using Owoputi et al.'s tweet POS system (Owoputi et al., 2013) . The data is divided into halves: the first is used as training data and the other as testing data.", "cite_spans": [ { "start": 62, "end": 82, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF14" }, { "start": 159, "end": 181, "text": "(Owoputi et al., 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing and Experiment Setup", "sec_num": "5.1" }, { "text": "Our network intuition is that users are much more likely to be friends with other users who share attributes than with users who have no attributes in common. To show this statistically, we report the value of AFFINITY defined by Mislove et al (2010) , which quantitatively evaluates the degree of HOMOPHILY in the network. AFFINITY is the ratio of the fraction of links between attribute (k)-sharing users (S k ) to what would be expected if attributes were randomly assigned in the network (E k ).", "cite_spans": [ { "start": 247, "end": 267, "text": "Mislove et al (2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Friends with Same Attribute", "sec_num": "5.2" }, { "text": "S k = [ \u2211 i \u2211 j\u2208F k i I(P k i = P k j ) ] / [ \u2211 i \u2211 j\u2208F k i 1 ], E k = [ \u2211 m T k m (T k m \u2212 1) ] / [ U k (U k \u2212 1) ] (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Friends with Same Attribute", "sec_num": "5.2" }, { "text": "where T k m denotes the number of users with the m-th value of attribute k, U k = \u2211 m T k m , and the reported AFFINITY is S k / E k . Table 3 shows the affinity values for Education and Job. As we can see, the property of HOMOPHILY indeed exists among users in the social network with respect to the Education and Job attributes, as significant affinity is observed. In particular, the affinity value for Education is 74.3, implying that users connected by a link in the network are 74.3 times more likely to be affiliated with the same school than would be expected if education attributes were randomly assigned. It is interesting to note that Education exhibits a much stronger HOMOPHILY property than Job. Such affinity demonstrates that our approach, which tries to take advantage of network information for attribute prediction, holds promise.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Friends with Same Attribute", "sec_num": "5.2" }, { "text": "We evaluate the settings described in Section 4.2, i.e., the GLOBAL setting, where user-level attributes are predicted directly from the joint feature space, and the LOCAL setting, where user-level predictions are made from tweet-level predictions, along with the different inference approaches described in Section 4.4, i.e. 
NEIGH-OBSERVED and NEIGH-LATENT, regarding whether neighbor information is observed or latent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "Baselines We implement the following baselines for comparison and use identical processing techniques for each approach for fairness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "\u2022 Only-Text: A simplified version of our algorithm where network/neighbor influence is ignored. Classifier is trained and tested only based on text features. \u2022 NELL: For Job and Education, candidate is selected as attribute value once it matches bag of words in the list of universities or companies borrowed from NELL. For Education, the list is extended by alias identification based on Freebase. For Job, we also fetch the name abbreviations 15 . NELL is only implemented for Education and Job attribute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "For each setting from each approach, we report the (P)recision, (R)ecall and (F)1-score. For LO-CAL setting, we report the performance for both entity-level prediction (Entity) and posting-level prediction (Tweet). Results for Education, Job and Spouse from different approaches appear in Table 4 , 5 and 6 respectively.", "cite_spans": [], "ref_spans": [ { "start": 289, "end": 297, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "Local or Global For horizontal comparison, we observe that GLOBAL obtains a higher Precision score but a lower Recall than LOCAL(ENTITY). This can be explained by the fact that LOCAL(U) sets z k i,e = 1 once one posting x \u2208 L e i is identified as attribute related, while GLOBAL tend to be more meticulous by considering the conjunctive feature space from all postings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "Homophile effect In agreement with our expectation, NEIGH-OBSERVED performs better than NEIGH-LATENT since erroneous predictions in 15 http://www.abbreviations.com/ NEIGH-LATENT setting will have negative influence on further prediction during the greedy search process. Both NEIGH-OBSERVED and NEIGH-LATENT where network information is harnessed, perform better than Only-Text, which the prediction is made independently on user's text features. The improvement of NEIGH-OBSERVED over Only-Text is 22.7% and 6.4% regarding F-1 score for Education and Job respectively, which further illustrate the usefulness of making use of Homophile effect for attribute inference on online social media. It is also interesting to note the improvement much more significant in Education inference than Job inference. This is in accord with what we find in Section 5.2, where education network exhibits stronger HOMOPHILE property than Job network, enabling a significant benefit for education inference, but limited for job inference.", "cite_spans": [ { "start": 132, "end": 134, "text": "15", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "Spouse prediction also benefits from neighboring effect and the improvement is about 12% for LOCAL(ENTITY) setting. 
Unlike Education and Job prediction, for which all neighboring variables are observed in the NEIGH-OBSERVED setting, network variables are hidden during spouse prediction. By considering network information, the model benefits from clear clues offered by the tweet corpus of user e's spouse when making predictions for e, but it also suffers when erroneous decisions are made and then used for downstream predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "NELL Baseline Notably, NELL achieves the highest Recall score for Education inference. It is also worth noting that most of the education mentions that NELL fails to retrieve involve irregular spellings, such as HarvardUniv and Cornell U, which means the Recall score for the NELL baseline would be even higher if these irregular spellings were recognized by a more sophisticated system. The reason for such high recall is that our ground truth is obtained from Google Plus, whose users are mostly affiliated with well-known schools found in the NELL dictionary. However, NELL's high recall comes at the cost of precision, as users can mention school entities in many situations, such as paying a visit or reporting relevant news. NELL will erroneously classify these cases as attribute mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "NELL does not work well for Job, with a fairly poor 0.0156 F1 score for LOCAL(ENTITY) and 0.163 for LOCAL(TWEET). Poor precision is expected, as users can mention company entities in a great many situations. The recall score for NELL in job inference is also quite low, as job-related entities exhibit a greater diversity of mentions, many of which are not covered by the NELL dictionary. (Table 6: Results for Spouse Prediction.)", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 236, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "Vertical Comparison: Education, Job and Spouse Job prediction turned out to be much more difficult than Education, as shown in Tables 4 and 5. Explanations are as follows: (1) Job exhibits a much greater diversity of mentions than Education. Education inference can benefit a great deal from the dictionary-based feature, while Job often cannot. (2) Education mentions are usually associated with clear evidence such as homework, exams, studies, cafeteria or books, while the situation is much more complicated for Job, as vocabularies are usually specific to different types of jobs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "(3) The boundary between a user who works at a specific company and a fan of it is usually ambiguous. For example, a Google engineer may constantly post updates about Google's products, and so may an enthusiastic fan. If that engineer rarely tweets about working conditions or colleagues (which might still be ambiguous), his tweet collection, containing many mentions of Google products, will look very similar to the tweets published by a Google fan. This confusion can be partly, but not fully, resolved by taking network information into account. 
The relatively high F1 score for spouse prediction is largely due to the large number of entities in the dataset that are not related to individuals, which are relatively simple to identify. A deeper look at the results shows that the classifier frequently makes wrong decisions for entities such as user IDs and person names. Although some spouse-related features, such as love, husband and child, are significant, in most circumstances spouse mentions are extremely hard to recognize. For example, in the tweets \"Check this out, @alancross, it's awesome bit.ly/1bnjYHh.\" or \"Happy Birthday @alancross !\", alancross could reasonably be the current user's friend, colleague, parent, child or spouse. Repeated mentions add no confidence. Although we can identify alancross as a spouse attribute once it appears jointly with other strong spouse indicators, there are still many cases where they never co-occur. How to integrate more useful side information for spouse recognition constitutes our future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "In this paper, we propose a framework for user attribute inference on Twitter. We construct a publicly available dataset based on distant supervision and evaluate our model on three useful user profile attributes, i.e., Education, Job and Spouse. Our model takes advantage of network information in the social network. We will keep updating the dataset as more data is collected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "One direction of our future work involves exploring more general categories of user profile attributes, such as books and movies of interest, hometown, religion and so on. Facebook would be an ideal ground-truth knowledge base. Another direction involves incorporating a richer feature space for better inference performance, such as multimedia sources (i.e., pictures and video).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Both code and data are available at http://aclweb.org/aclwiki/index.php?title=Profile_data 2 https://www.facebook.com/ 3 https://plus.google.com/ 4 https://www.facebook.com/about/graphsearch", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://developers.google.com/+/api/ 6 An unambiguous Twitter account link is needed here because of the common phenomenon of name duplication. 7 https://twitter.com/search 8 http://wiki.freebase.com/wiki/Freebase_API 9 Due to the Twitter user timeline limit, we crawled at most 3200 tweets for each user.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://developers.facebook.com/docs/graph-api/ 11 http://www.freebase.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://code.google.com/p/ark-tweet-nlp/downloads/list", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Special thanks are owed to Dr. Julian McAuley and Prof. Jure Leskovec of Stanford University for the Google+ circle/network crawler, without which the network analysis could not have been conducted. 
This work was supported in part by DARPA under award FA8750-13-2-0005.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Homophily and latent attribute inference: Inferring latent attributes of twitter users from neighbors", "authors": [ { "first": "Faiyaz", "middle": [], "last": "Zamal", "suffix": "" }, { "first": "Wendy", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Ruths", "suffix": "" } ], "year": 2012, "venue": "ICWSM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Faiyaz Zamal, Wendy Liu, and Derek Ruths. 2012. Homophily and latent attribute inference: Inferring latent attributes of twitter users from neighbors. In ICWSM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Event discovery in social media feeds", "authors": [ { "first": "Edward", "middle": [], "last": "Benson", "suffix": "" }, { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "389--398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies-Volume 1, pages 389-398. As- sociation for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Extracting patterns and relations from the world wide web", "authors": [ { "first": "", "middle": [], "last": "Sergey Brin", "suffix": "" } ], "year": 1999, "venue": "The World Wide Web and Databases", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Brin. 1999. Extracting patterns and relations from the world wide web. In The World Wide Web and Databases.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Gender inference of twitter users in nonenglish contexts", "authors": [ { "first": "Morgane", "middle": [], "last": "Ciot", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Sonderegger", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Ruths", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "18--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morgane Ciot, Morgan Sonderegger, and Derek Ruths. 2013. Gender inference of twitter users in non- english contexts. 
In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, Seattle, Wash, pages 18-21.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Political polarization on twitter", "authors": [ { "first": "Michael", "middle": [], "last": "Conover", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Ratkiewicz", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Francisco", "suffix": "" }, { "first": "Bruno", "middle": [], "last": "Gon\u00e7alves", "suffix": "" }, { "first": "Filippo", "middle": [], "last": "Menczer", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Flammini", "suffix": "" } ], "year": 2011, "venue": "ICWSM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Conover, Jacob Ratkiewicz, Matthew Fran- cisco, Bruno Gon\u00e7alves, Filippo Menczer, and Alessandro Flammini. 2011. Political polarization on twitter. In ICWSM.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Constructing biological knowledge bases by extracting information from text sources", "authors": [ { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Kumlien", "suffix": "" } ], "year": 1999, "venue": "ISMB", "volume": "1999", "issue": "", "pages": "77--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Craven and Johan Kumlien 1999. Construct- ing biological knowledge bases by extracting infor- mation from text sources. In ISMB, volume 1999, pages 77-86.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Twitter sentiment classification using distant supervision", "authors": [ { "first": "Alec", "middle": [], "last": "Go", "suffix": "" }, { "first": "Richa", "middle": [], "last": "Bhayani", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Go, Richa Bhayani, and Lei Huang. 2009. Twit- ter sentiment classification using distant supervision. CS224N Project Report, Stanford, pages 1-12.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Social media recommendation based on people and tags", "authors": [ { "first": "Ido", "middle": [], "last": "Guy", "suffix": "" }, { "first": "Naama", "middle": [], "last": "Zwerdling", "suffix": "" }, { "first": "Inbal", "middle": [], "last": "Ronen", "suffix": "" }, { "first": "David", "middle": [], "last": "Carmel", "suffix": "" }, { "first": "Erel", "middle": [], "last": "Uziel", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "194--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Guy, Naama Zwerdling, Inbal Ronen, David Carmel, and Erel Uziel. 2010. Social media recom- mendation based on people and tags. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, pages 194-201. 
ACM.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Knowledgebased weak supervision for information extraction of overlapping relations", "authors": [ { "first": "Raphael", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Congle", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Ling", "suffix": "" }, { "first": "S", "middle": [], "last": "Luke", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" }, { "first": "", "middle": [], "last": "Weld", "suffix": "" } ], "year": 2011, "venue": "ACL", "volume": "", "issue": "", "pages": "541--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke S Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In ACL, pages 541-550.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Timeline generation: Tracking individuals on twitter", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 23rd international conference on World wide web", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li and Claire Cardie. 2013. Timeline generation: Tracking individuals on twitter. Proceedings of the 23rd international conference on World wide web.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Structure compilation: trading structure for features", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th international conference on Machine learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Hal Daum\u00e9 III, and Dan Klein. 2008. Structure compilation: trading structure for features. In Proceedings of the 25th international conference on Machine learning.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Whats in a name? using first names as features for gender inference in twitter", "authors": [ { "first": "Wendy", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Ruths", "suffix": "" } ], "year": 2013, "venue": "AAAI Spring Symposium Series", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wendy Liu and Derek Ruths. 2013. Whats in a name? using first names as features for gender inference in twitter. In 2013 AAAI Spring Symposium Series.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Using social media to infer gender composition of commuter populations", "authors": [ { "first": "Wendy", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Faiyaz", "middle": [], "last": "Zamal", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Ruths", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the When the City Meets the Citizen Workshop, the International Conference on Weblogs and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wendy Liu, Faiyaz Zamal, and Derek Ruths. 2012. 
Using social media to infer gender composition of commuter populations. In Proceedings of the When the City Meets the Citizen Workshop, the Interna- tional Conference on Weblogs and Social Media.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Birds of a feather: Homophily in social networks", "authors": [ { "first": "Miller", "middle": [], "last": "Mcpherson", "suffix": "" }, { "first": "Lynn", "middle": [], "last": "Smith-Lovin", "suffix": "" }, { "first": "James M", "middle": [], "last": "Cook", "suffix": "" } ], "year": 2001, "venue": "Annual review of sociology", "volume": "", "issue": "", "pages": "415--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller McPherson, Lynn Smith-Lovin, and James M Cook. 2001. Birds of a feather: Homophily in social networks. Annual review of sociology, pages 415- 444.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" }, { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "2", "issue": "", "pages": "1003--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Ju- rafsky. 2009. Distant supervision for relation ex- traction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Vol- ume 2-Volume 2, pages 1003-1011. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "You are who you know: inferring user profiles in online social networks", "authors": [ { "first": "Alan", "middle": [], "last": "Mislove", "suffix": "" }, { "first": "Bimal", "middle": [], "last": "Viswanath", "suffix": "" }, { "first": "Krishna", "middle": [], "last": "Gummadi", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Druschel", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the third ACM international conference on Web search and data mining", "volume": "", "issue": "", "pages": "251--260", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Mislove, Bimal Viswanath, Krishna Gummadi, and Peter Druschel. 2010. You are who you know: inferring user profiles in online social networks. In Proceedings of the third ACM international confer- ence on Web search and data mining, pages 251- 260. 
ACM.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improved part-of-speech tagging for online conversational text with word clusters", "authors": [ { "first": "Olutobi", "middle": [], "last": "Owoputi", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Oconnor", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "380--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olutobi Owoputi, Brendan OConnor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of NAACL-HLT, pages 380-390.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A machine learning approach to twitter user classification", "authors": [ { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Popescu", "suffix": "" } ], "year": 2011, "venue": "ICWSM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Pennacchiotti and Ana Popescu. 2011. A ma- chine learning approach to twitter user classification. In ICWSM.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A multipass sieve for coreference resolution", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Karthik Raghunathan", "suffix": "" }, { "first": "Sudarshan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Rangarajan", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthik Raghunathan, Heeyoung Lee, Sudarshan Ran- garajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multi- pass sieve for coreference resolution. In Proceed- ings of the 2010 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Detecting latent user properties in social media", "authors": [ { "first": "Delip", "middle": [], "last": "Rao", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2010, "venue": "Proc. of the NIPS MLSN Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Delip Rao and David Yarowsky. 2010. Detecting latent user properties in social media. In Proc. 
of the NIPS MLSN Workshop.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Classifying latent user attributes in twitter", "authors": [ { "first": "Delip", "middle": [], "last": "Rao", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Shreevats", "suffix": "" }, { "first": "Manaswi", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2nd international workshop on Search and mining usergenerated contents", "volume": "", "issue": "", "pages": "37--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user at- tributes in twitter. In Proceedings of the 2nd in- ternational workshop on Search and mining user- generated contents, pages 37-44. ACM.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Hierarchical bayesian models for latent attribute detection in social media", "authors": [ { "first": "Delip", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Clayton", "middle": [], "last": "Fink", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Oates", "suffix": "" }, { "first": "Glen", "middle": [], "last": "Coppersmith", "suffix": "" } ], "year": 2011, "venue": "ICWSM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Delip Rao, Michael Paul, Clayton Fink, David Yarowsky, Timothy Oates, and Glen Coppersmith. 2011. Hierarchical bayesian models for latent at- tribute detection in social media. In ICWSM.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Literature mining of protein-residue associations with graph rules learned through distant supervision", "authors": [ { "first": "Haibin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wall", "suffix": "" }, { "first": "Karin", "middle": [], "last": "Verspoor", "suffix": "" } ], "year": 2012, "venue": "Journal of biomedical semantics", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haibin Liu, Michael Wall, Karin Verspoor, et al. 2012. Literature mining of protein-residue associations with graph rules learned through distant supervision. Journal of biomedical semantics, 3(Suppl 3):S2.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Named entity recognition in tweets: an experimental study", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Mausam", "suffix": "" }, { "first": "", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1524--1534", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Sam Clark, Mausam, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an ex- perimental study. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing, pages 1524-1534. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Modeling missing data in distant supervision for information extraction", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Mausam", "middle": [], "last": "", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Luke Zettlemoyer, Mausam, and Oren Et- zioni. 2013. Modeling missing data in distant su- pervision for information extraction.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Multi-instance multi-label learning for relation extraction", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Tibshirani", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "455--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher Manning. 2012. Multi-instance multi-label learning for relation extraction. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, pages 455- 465. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Reducing wrong labels in distant supervision for relation extraction", "authors": [ { "first": "Shingo", "middle": [], "last": "Takamatsu", "suffix": "" }, { "first": "Issei", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", "volume": "1", "issue": "", "pages": "721--729", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervi- sion for relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Compu- tational Linguistics: Long Papers-Volume 1, pages 721-729. Association for Computational Linguis- tics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Autonomously semantifying wikipedia", "authors": [ { "first": "Fei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Daniel S Weld", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the sixteenth ACM conference on Conference on information and knowledge management", "volume": "", "issue": "", "pages": "41--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Wu and Daniel S Weld. 2007. Autonomously se- mantifying wikipedia. In Proceedings of the six- teenth ACM conference on Conference on infor- mation and knowledge management, pages 41-50. 
ACM.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Overlapping community detection at scale: A nonnegative matrix factorization approach", "authors": [ { "first": "Jaewon", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the sixth ACM international conference on Web search and data mining", "volume": "", "issue": "", "pages": "587--596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaewon Yang and Jure Leskovec. 2013. Overlapping community detection at scale: A nonnegative matrix factorization approach. In Proceedings of the sixth ACM international conference on Web search and data mining, pages 587-596. ACM.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Illustration of Goolge Plus \"knowledge base\".", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "Example of fetching tweets containing entity USC mention from Miranda Cosgrove (an American actress and singer-songwriter)'s twitter stream.", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "Inference for NEIGH-LATENT setting.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "type_str": "table", "num": null, "content": "
promo day with my handsome classy husband @[shanenicholson]
Spouse: shanenicholson
Job: HuffPo
", "text": "got accepted to be part of the UofM engineering safety pilot program in [FSU] Here in class. (@ [Florida State University] -Williams Building) Don't worry , guys ! Our beloved [FSU] will always continue to rise \" to the top ! Education: Florida State University (FSU) first day of work at [HuffPo], a sports bar woo come visit me yo.. start to think we should just add a couple desks to the [HuffPo] newsroom for Business Insider writers just back from [HuffPo], what a hell ! Examples of Twitter message clues for user profile inference.", "html": null }, "TABREF1": { "type_str": "table", "num": null, "content": "", "text": "Our education dataset contains 7,208 users, 6,295 of which are connected to others in the network. The positive training set for the EDUCATION is comprised of 134,060 tweets.Spouse Facebook is the only type of social media where spouse information is commonly displayed. However, only a tiny amount of individual information is publicly accessible from Facebook Graph API 10 . To obtain ground truth for the spouse relation at large scale, we turned to Freebase 11 , a large, open-domain database, and gathered instances of the /PEOPLE/PERSON/SPOUSE relation. Positive/negative training tweets are obtained in the same way as was previously described for EDUCATION and JOB. It is worth noting that our Spouse dataset is not perfect, as individuals retrieved from Freebase are mostly celebrities, and thus it's not clear whether this group of people are representative of the general population.", "html": null }, "TABREF2": { "type_str": "table", "num": null, "content": "
                           Education     Job           Spouse
#Users                     7,208         1,806         1,636
#Users Connected           6,295         1,407         1,108
#Edges                     11,167        3,565         554
#Pos Entities              451           380           3,121
#Pos Tweets                124,801       65,031        135,466
#Aver Pos Tweets per User  17.3          36.6          82.8
#Neg Entities              6,987,186     4,405,530     8,840,722
#Neg Tweets                16,150,600    10,687,403    12,872,695
", "text": "where N i denotes the number of tweets user i published.", "html": null }, "TABREF3": { "type_str": "table", "num": null, "content": "", "text": "Statistics for our Dataset", "html": null }, "TABREF4": { "type_str": "table", "num": null, "content": "
                              GLOBAL                 LOCAL(ENTITY)            LOCAL(TWEET)
                              P      R      F        P       R      F         P      R      F
Our approach: NEIGH-OBSERVED  0.643  0.330  0.430    0.374   0.620  0.467     0.891  0.698  0.783
Our approach: NEIGH-LATENT    0.617  0.320  0.421    0.226   0.544  0.319     0.804  0.572  0.668
Only-Text                     0.602  0.304  0.404    0.155   0.501  0.237     0.764  0.471  0.583
NELL                          -      -      -        0.0079  0.509  0.0156    0.094  0.604  0.163
", "text": "Results for Education Prediction", "html": null }, "TABREF5": { "type_str": "table", "num": null, "content": "
              GLOBAL                 LOCAL(ENTITY)           LOCAL(TWEET)
              P      R      F        P      R      F         P      R      F
Our approach  0.870  0.560  0.681    0.593  0.857  0.701     0.904  0.782  0.839
Only-Text     0.852  0.448  0.587    0.521  0.781  0.625     0.890  0.729  0.801
", "text": "Results for Job Prediction", "html": null } } } }