{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:00:35.183359Z" }, "title": "Predicting Responses to Psychological Questionnaires from Participants' Social Media Posts and Question Text Embeddings", "authors": [ { "first": "Huy", "middle": [], "last": "Vu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stony Brook University", "location": {} }, "email": "" }, { "first": "Suhaib", "middle": [], "last": "Abdurahman", "suffix": "", "affiliation": { "laboratory": "", "institution": "Free University of Berlin", "location": {} }, "email": "suhaib.abdurahman@gmail.com" }, { "first": "Sudeep", "middle": [], "last": "Bhatia", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": {} }, "email": "bhatiasu@sas.upenn.edu" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": {} }, "email": "ungar@cis.upenn.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Psychologists routinely assess people's emotions and traits, such as their personality, by collecting their responses to survey questionnaires. Such assessments can be costly in terms of both time and money, and often lack generalizability, as existing data cannot be used to predict responses for new survey questions or participants. In this study, we propose a method for predicting a participant's questionnaire response using their social media texts and the text of the survey question they are asked. Specifically, we use Natural Language Processing (NLP) tools such as BERT embeddings to represent both participants (via the text they write) and survey questions as embeddings vectors, allowing us to predict responses for out-of-sample participants and questions. 
Our novel approach can be used by researchers to integrate new participants or new questions into psychological studies without the constraint of costly data collection, facilitating novel practical applications and furthering the development of psychological theory. Finally, as a side contribution, the success of our model also suggests a new approach to study survey questions using NLP tools such as text embeddings rather than response data used in traditional methods.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Psychologists routinely assess people's emotions and traits, such as their personality, by collecting their responses to survey questionnaires. Such assessments can be costly in terms of both time and money, and often lack generalizability, as existing data cannot be used to predict responses for new survey questions or participants. In this study, we propose a method for predicting a participant's questionnaire response using their social media texts and the text of the survey question they are asked. Specifically, we use Natural Language Processing (NLP) tools such as BERT embeddings to represent both participants (via the text they write) and survey questions as embeddings vectors, allowing us to predict responses for out-of-sample participants and questions. Our novel approach can be used by researchers to integrate new participants or new questions into psychological studies without the constraint of costly data collection, facilitating novel practical applications and furthering the development of psychological theory. 
Finally, as a side contribution, the success of our model also suggests a new approach to study survey questions using NLP tools such as text embeddings rather than response data used in traditional methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Psychologists conduct personality research in order to understand what aspects and factors consistently distinguish people from each other on an individual level. This is relevant because personality influences important life outcomes such as occupational and educational success and even physical and mental health (Judge et al., 1999; Roberts et al., 2007) .", "cite_spans": [ { "start": 316, "end": 336, "text": "(Judge et al., 1999;", "ref_id": "BIBREF17" }, { "start": 337, "end": 358, "text": "Roberts et al., 2007)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditionally, psychologists measure personality through questionnaires, by having participants read and answer questions on a rating scale, for instance from \"strongly disagree\" to \"strongly agree\". However, acquiring questionnaire data in psychological research is often a tedious and costly process, as study participants must be recruited and motivated to complete questionnaires. This problem is particularly pronounced for longer surveys, which suffer from low completion rates and careless responses due to low participant motivation (Niessen et al., 2016; Van de Mortel et al., 2008; Raghunathan and Grizzle, 1995; Champion and Sear, 1969) . 
Therefore, the ability to predict questionnaire responses would be of great use to researchers.", "cite_spans": [ { "start": 541, "end": 563, "text": "(Niessen et al., 2016;", "ref_id": "BIBREF24" }, { "start": 564, "end": 591, "text": "Van de Mortel et al., 2008;", "ref_id": "BIBREF23" }, { "start": 592, "end": 622, "text": "Raghunathan and Grizzle, 1995;", "ref_id": "BIBREF26" }, { "start": 623, "end": 647, "text": "Champion and Sear, 1969)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contribution of this paper is to address this issue. We propose a system that uses the participants' social media texts and the question texts to predict the participants' responses. The system extracts BERT embeddings from the two input components and then trains a predictive model. After training, we can predict the responses for every new participant, new question or both, only requiring the participants' social media texts and the question texts. If our approach is successful, it will greatly reduce the costs of collecting response data for psychologists, especially when the number of new participants or questions is large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Moreover, the success of our model also suggests a new approach to analyse psychological questionnaires by using Natural Language Processing (NLP). Traditionally, psychologists analyse questionnaires using only the participants' responses to the questionnaires, rather than the text and lexicon of the questions themselves (Cook and Beckman, 2006; Crocker and Algina, 1986) . For instance, participants' responses are used to measure the similarity between two questions. 
However, these traditional approaches require large amounts of response data, which are costly to collect, and lack the flexibility to integrate new participants or questions into a study. In contrast, our novel approach of applying NLP to questionnaire research, as implemented in our model, offers the possibility of extending existing survey datasets and questionnaires to new subject populations and to new theoretical constructs, greatly improving the generalizability of psychological research and opening up many practical applications for personality research.", "cite_spans": [ { "start": 323, "end": 347, "text": "(Cook and Beckman, 2006;", "ref_id": "BIBREF4" }, { "start": 348, "end": 373, "text": "Crocker and Algina, 1986)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the most widely known and researched psychological personality models is the Five Factor or \"Big Five\" personality model. This comprehensive model categorizes human personality traits into five bipolar categories: Openness to Experience, Conscientiousness, Extraversion, Agreeableness and Neuroticism (Goldberg, 1993) .", "cite_spans": [ { "start": 278, "end": 324, "text": "Agreeableness and Neuroticism (Goldberg, 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Personality questionnaires", "sec_num": "2.1" }, { "text": "These categories are meant to describe a person's characteristic behaviors throughout different contexts of their daily life. The NEO-PI-R is one of the most established and widely accepted Big Five questionnaires (Costa and McCrae, 1989; Costa Jr and McCrae, 2008) . As a proxy for the NEO-PI-R, this study uses the 100-question set from the publicly available International Personality Item Pool (IPIP), which is a large collection of questions for use in psychometric testing (Goldberg et al., 2006) . 
This set of questions has been widely used in previous research such as Kulkarni et al. (2018) ; Park et al. (2015) . Examples of questions measuring different categories are: \"I have a vivid imagination\" (openness) or \"I do not mind being the center of attention\" (extraversion). Each question is rated on a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). For each Big Five category, there are 20 questions which either increase or decrease the score of that specific category. In this paper, we call this the \"direction\" of the questions. Examples of questions that share a category but have opposite directions are: \"I am easy to satisfy\" (agreeableness, positive) and \"I suspect hidden motives in others\" (agreeableness, negative). The full list of 100 questions, along with their categories and directions, can be found at: https://ipip.ori.org/newBigFive5broadKey.htm. When measuring personality using questionnaire responses, psychologists commonly \"reverse\" the responses to negative questions to bring them in line with the positive questions. For example, a response of 1 to a negative question will be reversed to become 5 before further analysis. In this paper, we also use reverse-coding to pre-process all response data.", "cite_spans": [ { "start": 211, "end": 235, "text": "(Costa and McCrae, 1989;", "ref_id": "BIBREF5" }, { "start": 236, "end": 262, "text": "Costa Jr and McCrae, 2008)", "ref_id": "BIBREF6" }, { "start": 475, "end": 498, "text": "(Goldberg et al., 2006)", "ref_id": "BIBREF12" }, { "start": 573, "end": 595, "text": "Kulkarni et al. (2018)", "ref_id": "BIBREF18" }, { "start": 598, "end": 616, "text": "Park et al. 
(2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Personality questionnaires", "sec_num": "2.1" }, { "text": "First, there is the cold start problem: for every new participant for whom we want to predict responses, we lack the initial information necessary to determine their similarity to the other participants in the data set. While using advanced participant information such as participants' social media text embeddings can help with that problem to some extent (Sedhain et al., 2014) , a second issue remains: for every new question we add to the questionnaire, we lack information on how any new participant would answer it, meaning we cannot make any predictions for novel questions. For both problems, some responses need to be elicited from each participant and for each question before predictions can be made.", "cite_spans": [ { "start": 362, "end": 384, "text": "(Sedhain et al., 2014)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Predicting questionnaire responses", "sec_num": "2.2" }, { "text": "Our approach avoids this bottleneck by using a predictive model that can make predictions using the new participants' social media text or new questions' text embeddings, without requiring any response data for either new participants or questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting questionnaire responses", "sec_num": "2.2" }, { "text": "There is increasing interest in estimating human personality from online data, including users' social media posts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Characterizing users by their social media text", "sec_num": "2.3" }, { "text": "The BERT model proposed by Devlin et al. (2019) has become increasingly popular as an out-of-the-box and powerful pre-trained language model. 
Based on the idea of contextualized embeddings, BERT is a multi-purpose model for many downstream tasks and is able to run efficiently thanks to the parallel computation advantages of transformers (Vaswani et al., 2017). Because of its capacity to capture context in both directions, an improvement over one-directional context models such as ELMo (Peters et al., 2018), sentence embeddings from BERT prove to be very strong features for many downstream tasks (Devlin et al., 2019). There are two main ways to use pre-trained BERT models. The first is by adding layers at the end of BERT and then fine-tuning the whole model end-to-end for the new downstream tasks. The second is to take pre-trained BERT embeddings, such as word or sentence embeddings, as input for subsequent models. In this study, we use BERT pre-trained embeddings to capture both the participants' social media texts and the questionnaire's question texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT embeddings", "sec_num": "2.4" }, { "text": "3 Dataset", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT embeddings", "sec_num": "2.4" }, { "text": "We collected a dataset of 1000 Facebook users, each having at least 300 Facebook posts. For each user, we randomly picked 300 posts from their entire timeline. All selected users had posted at least 1000 words in total and were less than 65 years old. Some sample posts are: \"Someone spoiled my good mood... :(\"; \"I big thanks to all my friends that wished me a happy birthday.\"; \"Day one at fair was totally fun. Wish you were here\". All users had responded to all 100 questions in the IPIP Big5 questionnaire using a custom application (Kosinski et al., 2015). The responses have integer values from 1 to 5. 
As described above, the responses of \"negative\" questions were reversed before further analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data description", "sec_num": "3.1" }, { "text": "All participants explicitly consented to their responses and Facebook information being used for research purposes. All research procedures were approved by the University of Pennsylvania Institutional Review Board.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data description", "sec_num": "3.1" }, { "text": "\u2022 Question embeddings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User and question embedding", "sec_num": "3.2" }, { "text": "We used pre-trained BERT embeddings to capture question text semantics. The model used is BERT Large Uncased (24 layers, 1024 dimensions). The word embeddings in each question were averaged to get the question embeddings. Embeddings from the last four BERT layers were concatenated to create an embedding vector of size 4096. We then standardized the data and applied Principal Component Analysis (PCA) (Jolliffe and Cadima, 2016) to reduce the dimensionality to 55 to avoid overfitting, while keeping the explained variance at 0.9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User and question embedding", "sec_num": "3.2" }, { "text": "\u2022 User embeddings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User and question embedding", "sec_num": "3.2" }, { "text": "We used the pre-trained BERT Base Uncased (12 layers, 768 dimensions) model to extract user features as follows: For each Facebook user, we randomly selected 300 posts from their timeline and then extracted the BERT embeddings from the words in these posts. The embeddings at the last four layers were averaged to get the word embeddings, which were then averaged to get the post embeddings, which were again averaged to get the user embeddings. 
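This words-to-posts-to-user averaging can be sketched as follows (a minimal illustration with randomly generated stand-in arrays; variable names are hypothetical, and the real pipeline would start from actual BERT word vectors):

```python
import numpy as np

# Stand-in data: one user with 300 posts; each post is an (n_words x 768) array
# of BERT Base word vectors (last four layers already averaged per word).
rng = np.random.default_rng(0)
posts = [rng.normal(size=(n_words, 768)) for n_words in rng.integers(5, 40, size=300)]

# Words -> post embedding: average the word vectors within each post.
post_embeddings = np.stack([post.mean(axis=0) for post in posts])  # shape (300, 768)

# Posts -> user embedding: average the post vectors.
user_embedding = post_embeddings.mean(axis=0)  # shape (768,)
```

The standardization and PCA reduction to 250 dimensions would then be fit across all 1000 resulting user vectors rather than per user.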
The user embeddings were standardized, and PCA was used to reduce their dimensionality from 768 to 250, again keeping an explained variance of 0.9. The main reason we chose to average the last four embedding layers instead of concatenating them, as we did with the question embeddings, is the Facebook data's volume (hundreds of thousands of posts vs. only 100 questions).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User and question embedding", "sec_num": "3.2" }, { "text": "We first conducted two experiments to separately test the quality of the question and user embeddings, and then a third main experiment in which both user and question features were used together to predict the response of a user to a question. We", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "\u2022 Used question embeddings to build separate models for each user that predict their response to novel questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "\u2022 Used user embeddings to build separate models for each question to predict the response of a new user to that question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "\u2022 Used both user and question embeddings to predict the response of a new user to a new question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Since text embeddings of the questions for assessment questionnaires have not been explored in previous studies, the first and third tasks are novel and play a crucial role in exploring this new prediction approach. The second task, in which the users have been characterized by their social media posts, has been explored previously. However, we will show that BERT embeddings outperform the traditional Latent Dirichlet Allocation (LDA) used in prior work. 
Our main goal is to explore the novel idea of using the text embeddings of questions and of users to predict user responses to questions. Therefore, we do not focus on designing sophisticated deep learning models. Instead, we chose simple but powerful, widely used models: ridge regression and K-nearest neighbors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Our first task sought to test the quality of question embeddings, asking how well BERT can capture the semantics of questions from a questionnaire. We did this by using question embeddings to build separate models for each user to predict their responses. Thus, for each user u_i (i = 1, ..., 1000), using 10-fold cross-validation, we trained a predictive model using 90 BERT question embeddings as input and the responses to the respective questions as labels, and then predicted responses on the 10 held-out questions. This novel task is important for this study because it shows that we can use text embeddings to capture the semantics of previously unseen questions and predict responses to those questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing question embeddings", "sec_num": "4.1" }, { "text": "We trained a ridge regression model on the data set and optimised the regularization hyperparameter alpha for total L1-loss and correlation, using the predictions across all users. The hyperparameter alpha was tuned between alpha = 1 and alpha = 1000 (multiplied by 10 for each step). Similarly, we also trained a KNN model and optimised the number of neighbors k for total L1-loss and correlation. 
The hyper-parameter k was tuned between k = 1 and k = 20 (increased by 1 for each step).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing question embeddings", "sec_num": "4.1" }, { "text": "Performance is measured by the correlation between the predicted responses and the groundtruth vectors, as follows. For each user u_i, i = 1, ..., 1000, we obtain a 10-fold prediction vector prediction_{u_i} of size (1 \u00d7 100) (for each fold, training on 90 questions and testing on the 10 left-out questions). We concatenate the prediction vectors of all users into one single prediction vector prediction_{u_all} of size (1 \u00d7 (1000 \u00d7 100)). Then the correlation between this prediction vector and the groundtruth vector groundtruth_{u_all} is calculated and reported.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing question embeddings", "sec_num": "4.1" }, { "text": "We compared the models with a baseline, in which for each fold of each user, the mean of the responses on the training questions partition is used as the prediction for the testing questions partition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing question embeddings", "sec_num": "4.1" }, { "text": "Our second task used user embeddings to predict the response of a novel user to a given question. For each individual question q_i (i = 1, ..., 100), we trained a different model, predicting the response of any user from the BERT embedding of that user. I.e., for each question, we trained a separate model with 900 user embeddings as inputs and their responses to the respective question as labels. 
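As a rough sketch of this per-question setup (using scikit-learn as a stand-in for the authors' implementation; the arrays and the alpha value here are placeholders, not the real data):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Placeholder data: 1000 users x 250 PCA dimensions, responses on a 1..5 scale.
rng = np.random.default_rng(0)
X_users = rng.normal(size=(1000, 250))
responses = rng.integers(1, 6, size=(1000, 100)).astype(float)

# One ridge model per question; 10-fold CV yields a prediction for every user.
preds = np.column_stack([
    cross_val_predict(Ridge(alpha=1000.0), X_users, responses[:, q], cv=10)
    for q in range(responses.shape[1])
])  # shape (1000, 100)
```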
The model was then tested on the 100 held-out users, using 10-fold cross-validation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing user embeddings", "sec_num": "4.2" }, { "text": "We trained the same models as in the previous task, again optimising the regularization parameter alpha and the number of nearest neighbors k for total correlation and L1-loss, using the predictions across all questions. The hyperparameter alpha was tuned between alpha = 1 and alpha = 100,000 (multiplied by 10 for each step) and the number of neighbors k was tuned between k = 5 and k = 450 (increased by 5 for each step). Finally, we compared the models with a baseline, which used the mean of the responses for each individual question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing user embeddings", "sec_num": "4.2" }, { "text": "We also compared our models against the LDA method, where a user embedding is the proportion of each of a set of LDA topics in their Facebook posts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing user embeddings", "sec_num": "4.2" }, { "text": "For our LDA-based personality prediction, we replicated the work of Kulkarni et al. (2018) , that is, we extracted user features using 2000 publicly available LDA topics (at https://dlatk.wwbp.org/datasets.html?highlight=met_a30_2000_cp)", "cite_spans": [ { "start": 67, "end": 89, "text": "Kulkarni et al. 
(2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Testing user embeddings", "sec_num": "4.2" }, { "text": "learned from Facebook posts, which were created using the MALLET library (McCallum, 2002) with alpha = 30.", "cite_spans": [ { "start": 73, "end": 89, "text": "(McCallum, 2002)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "html?highlight=met_a30_2000_cp)", "sec_num": null }, { "text": "We seek to confirm the predictive quality of LDA-based user embeddings for predicting questionnaire responses, while also testing the relative performance of newer feature extraction methods such as BERT over the older LDA (Blei et al., 2003) .", "cite_spans": [ { "start": 221, "end": 242, "text": "(Blei et al., 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "html?highlight=met_a30_2000_cp)", "sec_num": null }, { "text": "Performance is measured by the correlation between the predicted responses and the groundtruth vectors, as follows. For each question q_i, i = 1, ..., 100, we obtain a 10-fold prediction vector prediction_{q_i} of size (1 \u00d7 1000) (for each fold, training on 900 users and testing on the 100 left-out users). We concatenate the prediction vectors of all questions into one single prediction vector prediction_{q_all} of size (1 \u00d7 (100 \u00d7 1000)). 
Then the correlation between this prediction vector and the groundtruth vector groundtruth_{q_all} is calculated and reported.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "html?highlight=met_a30_2000_cp)", "sec_num": null }, { "text": "We also compared the models with a baseline, in which for each fold of each question, the mean of the responses on the training users partition is used as the prediction on the testing users partition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "html?highlight=met_a30_2000_cp)", "sec_num": null }, { "text": "In our third task, the main predictive task of this study, we used both user and question embeddings to predict the response of a user to a question. This is a much more challenging task than the previous tasks, since the model must learn to generalize over both users and questions. For evaluation, we divided the users and questions into 10 folds, testing on (user, question) pairs for which neither the user nor the question is in the training set. I.e., for the i-th loop, the i-th user fold and i-th question fold were kept as testing folds, while the model was trained on the remaining 9 user and question folds. Each training sample was created by combining one user embedding and one question embedding from the training folds. 
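One way to materialize this pairing is a cross product of the two embedding matrices (a sketch with placeholder arrays; U_train and Q_train are hypothetical names for one fold's user and question embeddings):

```python
import numpy as np

# Placeholder fold: 900 training users (250-dim) and 90 training questions (55-dim).
rng = np.random.default_rng(0)
U_train = rng.normal(size=(900, 250))
Q_train = rng.normal(size=(90, 55))

# Cross product: row i pairs user i // 90 with question i % 90.
pairs = np.concatenate([
    np.repeat(U_train, len(Q_train), axis=0),  # each user repeated once per question
    np.tile(Q_train, (len(U_train), 1)),       # the question block cycled per user
], axis=1)  # shape (81000, 305)
```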
Since there were in total 1000 users and 100 questions, for each loop, we had 900 users and 90 questions for training, and 100 users and 10 questions for testing, resulting in 900 \u00d7 90 = 81,000 training samples, and 100 \u00d7 10 = 1,000 testing samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining user and question embeddings to predict responses", "sec_num": "4.3" }, { "text": "Again, we tested two models: ridge regression and K-nearest neighbors, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining user and question embeddings to predict responses", "sec_num": "4.3" }, { "text": "\u2022 Ridge regression:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining user and question embeddings to predict responses", "sec_num": "4.3" }, { "text": "The embeddings of the users and questions were concatenated into one vector and used as input features for the model, with the response of the corresponding user/question pair used as the label. Since user and question embeddings required different regularizations, we rescaled them with two separate hyperparameters a_question and a_user, besides the model-wise alpha weight decay for regularization. We then ran a grid search on the three hyperparameters a_question, a_user, and alpha from 0.1 to 10,000 (multiplied by 10 at each step) to look for the optimal set of hyper-parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining user and question embeddings to predict responses", "sec_num": "4.3" }, { "text": "\u2022 K-nearest neighbors: We applied KNN separately for the user and question features. For each test sample, consisting of one testing user and one testing question, we searched for the k_user nearest users in the training set based on their user embeddings and the k_question nearest questions in the training set based on their question embeddings. 
We then took the average of the responses of each of the k_user nearest users to each of the k_question nearest questions as the prediction value. For regularization, we ran a grid search on k_user from 1 to 500 (increased by 25 at each step) and k_question from 1 to 20 (increased by 1 at each step), and report the best performing set of hyperparameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining user and question embeddings to predict responses", "sec_num": "4.3" }, { "text": "The reported correlation is calculated as follows. For each k-th fold, k = 1, ..., 10, a model is trained on the training partition of questions and users (q_{k,train} \u00d7 u_{k,train}) of size (90 \u00d7 900) and tested on the left-out testing partition of size (10 \u00d7 100), which can be flattened into a vector prediction_{q_k,u_k} of size (1 \u00d7 1000). The predictions across the 10 folds are then concatenated into one vector prediction_{all} of size (1 \u00d7 (10 \u00d7 1000)). We then calculate the correlation between this concatenated vector and the groundtruth vector groundtruth_{all} and report the results. We compared the two models, ridge regression and KNN, with a baseline, which simply takes the mean of all the responses within each training fold of questions and users as the prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining user and question embeddings to predict responses", "sec_num": "4.3" }, { "text": "For the first task, we compare the best performing models for ridge regression and KNN against the baseline. Table 1 shows the highest correlation to be r = 0.324 (p < 0.05) for ridge regression with a regularization parameter of alpha = 10, compared to the baseline correlation of r = 0.114 (p < 0.05). 
Questionnaire embeddings significantly improve predictions over the baseline.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 109, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Testing question embeddings", "sec_num": "5.1.1" }, { "text": "Thus question embeddings in fact do have predictive power for individual user responses. To further support this view, we visualised the question embeddings on a 2D plane for each pair of categories on a one-versus-one scheme in Figure 2. The figures show that BERT embeddings are able to capture the differences between personality categories fairly well, and suggest their potential for use in future applications that use personality information. The figure was created by applying PCA to the question embeddings to reduce them to two dimensions and then plotting them on a 2D plane. The full plots for all pairs of categories can be found in the Appendices. Figure 2 illustrates the utility of using text embeddings to represent questionnaire questions. Psychologists commonly measure the similarity between two questions by calculating the correlation of the responses to those questions. This works well, provided one has collected user responses to the questions. Using BERT embeddings, in contrast, requires only the question's text; we can measure the semantic similarity between sentence pairs based on their distance in the embedding space, and thus reduce the cost of data collection.", "cite_spans": [], "ref_spans": [ { "start": 646, "end": 654, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Testing question embeddings", "sec_num": "5.1.1" }, { "text": "Caveat: To put these results into context, psychological variables typically have a 'correlational upper bound' of around 0.3 to 0.4 (Meyer et al., 2001) . 
Although our task differs slightly, in that we measure correlations of users' responses to the personality questionnaire rather than personality scores as in Meyer et al. (2001), the range of the correlations should be similar.", "cite_spans": [ { "start": 142, "end": 159, "text": "(Meyer et al., 2001)", "ref_id": "BIBREF9" }, { "start": 337, "end": 353, "text": "(Meyer et al., 2001)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Testing question embeddings", "sec_num": "5.1.1" }, { "text": "We now examine the second task, in which we build a separate model for each question, in order to predict the response of a given user to that question. Again, we compare the best performing models for ridge regression and KNN against the baseline. Table 1 shows the highest correlation to be r = 0.421 (p < 0.05) for the ridge regression with a regularization parameter of alpha = 1000, compared to the baseline correlation of r = 0.390 (p < 0.05).", "cite_spans": [], "ref_spans": [ { "start": 249, "end": 256, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Testing user embeddings", "sec_num": "5.1.2" }, { "text": "We again see a significant improvement in prediction over the baseline. This experiment thus reconfirms the utility of user embeddings in predicting personality. Moreover, the results also show an improvement compared to the older LDA model, which by itself is a strong model, showing that BERT embeddings are superior at capturing personality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing user embeddings", "sec_num": "5.1.2" }, { "text": "It should be noted that our user embeddings require much higher regularization than the question embeddings in task one (k = 200 vs. k = 2 for KNN and alpha = 1000 vs. alpha = 10 for ridge regression), which suggests a much higher level of noise in the user embeddings. 
This is not surprising, since the question texts are specifically designed to measure only one of the five categories. User embeddings, on the other hand, are created from an aggregation of social media posts, each of which can be about any topic. It should therefore be expected that user embeddings contain more noise than the question embeddings and thus require stronger regularization to avoid overfitting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing user embeddings", "sec_num": "5.1.2" }, { "text": "For this task, we report the best-performing ridge regression and KNN models along with the baseline in Table 1 . We find the best correlation to be r = 0.22 (p < 0.05) for the KNN model (k question = 11, k user = 100), significantly higher than the baseline. It is thus possible to predict a user's response to a question using their social media text embeddings and the question text itself, even when neither the user nor the question has ever been seen before. This is in stark contrast to collaborative filtering methods, which always require some initial responses for any new user or new question, as described in section 2.2.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Combining user and question embeddings to predict responses", "sec_num": "5.1.3" }, { "text": "The best-performing model in this task has a correlation of r = 0.22 (p < 0.05), which is better than the baseline but not as accurate as it would have been had one seen either the user (as in 4.2) or the question (as in 4.1) before. Generalizing over both users and questions is, not surprisingly, harder than generalizing over just one of them. The model is required to learn two types of information, user and question embeddings, at the same time and across all users and all questions, rather than at the individual user or question level.
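The out-of-sample setting of this task can be sketched as below. The data are synthetic, and a single plain KNN over concatenated user and question embeddings is a simplification of the paper's model, which tunes separate neighbour counts for users and questions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_users, n_questions, d_u, d_q = 50, 20, 32, 16
user_emb = rng.normal(size=(n_users, d_u))          # stand-in: from users' posts
question_emb = rng.normal(size=(n_questions, d_q))  # stand-in: from question text

def features(u, q):
    # A user-question pair is represented by concatenating the two embeddings
    # (note they may have different dimensionalities).
    return np.concatenate([user_emb[u], question_emb[q]])

# Training pool: responses of the first 40 users to the first 15 questions
# (synthetic 1-5 responses). Users 40+ and questions 15+ are never seen.
train_pairs = [(u, q) for u in range(40) for q in range(15)]
X_train = np.array([features(u, q) for u, q in train_pairs])
y_train = rng.integers(1, 6, size=len(train_pairs)).astype(float)

def knn_predict(x, X, y, k=11):
    # Plain k-nearest-neighbour regression: average the responses of the k
    # closest training pairs in the concatenated embedding space.
    dist = np.linalg.norm(X - x, axis=1)
    return float(y[np.argsort(dist)[:k]].mean())

pred = knn_predict(features(45, 18), X_train, y_train)  # unseen user AND question
```

Because the prediction only needs the two embeddings, no initial responses from the new user or for the new question are required, unlike in collaborative filtering.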
We also find that ridge regression does not perform as well as the KNN model, in contrast to the first two experiments. A simple linear model concatenating users and questions is not able to compute how similar a question and a user are. (Beyond the relationship being nonlinear, note that these embeddings are of different sizes.) KNN is a simple non-linear approach, and thus outperforms ridge regression. We expect that a reasonably designed neural network or deep learning model could improve these results substantially.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining user and question embeddings to predict responses", "sec_num": "5.1.3" }, { "text": "As mentioned in section 2, it is common to reverse-code questionnaire responses; i.e., to transform the responses to negative questions (e.g. from a to (5 \u2212 a + 1)) to bring them in line with the positive questions. This transformation makes the prediction tasks easier because the model does not have to learn the direction (positive or negative) of the questions. However, we want to test whether our model can still perform well without reverse-coding information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing questions embeddings without reverse-coding responses", "sec_num": "5.2.1" }, { "text": "Since this task relies heavily on how well the BERT embeddings capture the direction of the questions, we reproduce the experiment in section 4.1 but with non-reverse-coded responses. The best-performing ridge regression and KNN models are reported along with the baseline. Table 2 shows our models' performance on non-reverse-coded responses. The ridge regression model, although confronted with a more challenging task, still achieves a correlation of up to 0.325 (p < 0.05), as high as when using reverse-coded responses. The KNN model has a correlation of 0.234 (p < 0.05), still significantly better than the baseline.
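The reverse-coding transformation described above can be written directly; on a 1-5 Likert scale it maps a response a of a negatively keyed question to (5 - a + 1) = 6 - a, aligning it with positively keyed items.

```python
# Reverse-coding a negatively keyed item on a 1-5 Likert scale.
def reverse_code(response, scale_max=5):
    return scale_max - response + 1

coded = [reverse_code(a) for a in [1, 2, 3, 4, 5]]  # -> [5, 4, 3, 2, 1]
```

Note the transformation is its own inverse, so applying it twice recovers the raw response.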
This shows that even without direction information for the questions, our model can still perform well. We also find that in this scenario, the baseline has much more difficulty producing good predictions, with a very low correlation (r = 0.046, p < 0.05) and a high L1 loss. This is probably caused by the response values being distributed more uniformly between 1 and 5 without reverse coding.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Testing questions embeddings without reverse-coding responses", "sec_num": "5.2.1" }, { "text": "In order for the model to perform much better than the baseline without reverse-coding, the question embeddings must be able to capture not only the categories (O, C, E, A or N) but also the direction (negative or positive) of the questions. Indeed, this can be seen in the 2D plots in Figure 3 , which show that using text embeddings, we can visually separate positive and negative questions within one category.", "cite_spans": [], "ref_spans": [ { "start": 285, "end": 293, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Testing questions embeddings without reverse-coding responses", "sec_num": "5.2.1" }, { "text": "For tasks 4.1 and 4.2, we additionally looked into the models' performance on each BIG 5 category. Table 3 shows the results of the best-performing model for the first two tasks. The complete results for all regularization configurations can be found in the Appendices.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 105, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Prediction results for each category", "sec_num": "5.2.2" }, { "text": "\u2022 Regarding the predictions using user embeddings, table 3 shows the best performance for the category Openness, followed by Agreeableness. The worst performance is found for the category Neuroticism.
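The per-category breakdown reported in Table 3 amounts to computing the Pearson correlation separately within each BIG 5 category; a minimal sketch, with synthetic observations and predictions standing in for real model output:

```python
import numpy as np

rng = np.random.default_rng(3)

categories = ["O", "C", "E", "A", "N"]
# Synthetic per-question records: each of 100 questions belongs to one BIG 5
# category and has an observed value plus a noisy model prediction.
cat_of_q = rng.choice(categories, size=100)
observed = rng.normal(size=100)
predicted = observed + rng.normal(scale=1.0, size=100)

def per_category_corr(cats, pred, obs):
    # Pearson r computed separately within each category.
    return {c: float(np.corrcoef(pred[cats == c], obs[cats == c])[0, 1])
            for c in sorted(set(cats))}

corrs = per_category_corr(cat_of_q, predicted, observed)
```

Breaking the correlation down this way exposes which traits the embeddings capture well, as discussed for Openness versus Neuroticism below.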
This might be partially explained by user activity on social media. Posts usually center around activities, experiences and feelings (Lai and To, 2015), terms that are usually associated with the first two categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction results for each category", "sec_num": "5.2.2" }, { "text": "\u2022 For the predictions using question embeddings, the results in table 3 are inconsistent between the two models. This might be caused by the relatively small sample of question embeddings (100) compared to the user embeddings (1000). However, what is consistent across both models is the lower performance for Neuroticism and Agreeableness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction results for each category", "sec_num": "5.2.2" }, { "text": "While Neuroticism is consistent with the results for the user embeddings, Agreeableness is surprising and opposite to the explanations stated previously. As such, future research regarding category-specific performance should be conducted to gain further insight into these differences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction results for each category", "sec_num": "5.2.2" }, { "text": "Our study proposes a novel task: predicting the responses of participants to a personality questionnaire using their social media texts and the texts of the questions they are asked. Unlike prior work, we are able to successfully make out-of-sample predictions for both new survey questions and new participants. Our approach could potentially reduce the cost of data collection for psychologists, but more importantly our findings showcase a novel method for improving the generalizability of personality research. They also open up many novel applications that rely on existing social media and survey data to make predictions for out-of-sample participants and survey questions.
Finally, our results offer the promise of improving psychological research by representing survey questions with informative text embeddings, which researchers and theorists can use to better understand the core dimensions of personality. We look forward to future work that integrates psychological theory with novel advances in natural language processing, to better measure, predict, and understand what distinguishes humans from each other. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "Appendices include:\u2022 The full visualizations of question embeddings, for each pair of categories and opposite directions within each category.\u2022 Full results of tasks 1 and 2 described in 4.1 and 4.2 with the choices of regularization, correlations and L1 loss for each category separately. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Appendices", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Mining the blogosphere: Age, gender and the varieties of selfexpression", "authors": [ { "first": "Shlomo", "middle": [], "last": "Argamon", "suffix": "" }, { "first": "Moshe", "middle": [], "last": "Koppel", "suffix": "" }, { "first": "W", "middle": [], "last": "James", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Pennebaker", "suffix": "" }, { "first": "", "middle": [], "last": "Schler", "suffix": "" } ], "year": 2007, "venue": "First Monday", "volume": "", "issue": "9", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shlomo Argamon, Moshe Koppel, James W Pennebaker, and Jonathan Schler. 2007. Mining the blogosphere: Age, gender and the varieties of self-expression.
First Monday, 12(9).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Attention is all you need", "authors": [], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Using author embeddings to improve tweet stance classification", "authors": [ { "first": "Adrian", "middle": [], "last": "Benton", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text", "volume": "", "issue": "", "pages": "184--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adrian Benton and Mark Dredze. 2018. Using author embeddings to improve tweet stance classification. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 184-194.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Questionnaire response rate: A methodological analysis. Social Forces", "authors": [ { "first": "J", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Alan", "middle": [ "M" ], "last": "Champion", "suffix": "" }, { "first": "", "middle": [], "last": "Sear", "suffix": "" } ], "year": 1969, "venue": "", "volume": "", "issue": "", "pages": "335--339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dean J Champion and Alan M Sear. 1969. Questionnaire response rate: A methodological analysis. Social Forces, pages 335-339.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Current concepts in validity and reliability for psychometric instruments: theory and application. 
The American journal of medicine", "authors": [ { "first": "A", "middle": [], "last": "David", "suffix": "" }, { "first": "Thomas J", "middle": [], "last": "Cook", "suffix": "" }, { "first": "", "middle": [], "last": "Beckman", "suffix": "" } ], "year": 2006, "venue": "", "volume": "119", "issue": "", "pages": "166--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "David A Cook and Thomas J Beckman. 2006. Current concepts in validity and reliability for psychometric instruments: theory and application. The American journal of medicine, 119(2):166-e7.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Neo five-factor inventory (neo-ffi)", "authors": [ { "first": "P", "middle": [ "T" ], "last": "Costa", "suffix": "" }, { "first": "", "middle": [], "last": "Mccrae", "suffix": "" } ], "year": 1989, "venue": "Psychological Assessment Resources", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "PT Costa and RR McCrae. 1989. Neo five-factor inventory (neo-ffi). Odessa, FL: Psychological Assessment Resources, 3.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Revised NEO Personality Inventory (NEO-PI-R)", "authors": [ { "first": "Costa", "middle": [], "last": "Paul", "suffix": "" }, { "first": "", "middle": [], "last": "Robert R Mccrae", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul T Costa Jr and Robert R McCrae. 2008. The Revised NEO Personality Inventory (NEO-PI-R). 
Sage Publications, Inc.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Introduction to classical and modern test theory", "authors": [ { "first": "Linda", "middle": [], "last": "Crocker", "suffix": "" }, { "first": "James", "middle": [], "last": "Algina", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda Crocker and James Algina. 1986. Introduction to classical and modern test theory. ERIC.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Latent dirichlet allocation", "authors": [ { "first": "I. Jordan", "middle": [], "last": "Michael", "suffix": "" }, { "first": "M", "middle": [], "last": "David", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Blei", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Psychological testing and psychological assessment. a review of evidence and issues", "authors": [ { "first": "L D Eyde G G", "middle": [], "last": "Kay", "suffix": "" }, { "first": "K L", "middle": [], "last": "Moreland R R Dies", "suffix": "" }, { "first": "E", "middle": [], "last": "Eisman T W Kubiszyn", "suffix": "" }, { "first": "G M", "middle": [], "last": "Reed", "suffix": "" }, { "first": "G J", "middle": [], "last": "Meyer", "suffix": "" }, { "first": "S E", "middle": [], "last": "Finn", "suffix": "" } ], "year": 2001, "venue": "Am Psychol", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G J Meyer, S E Finn, L D Eyde, G G Kay, K L Moreland, R R Dies, E J Eisman, T W Kubiszyn, and G M Reed. 2001. 
Psychological testing and psychological assessment. a review of evidence and issues. Am Psychol, 56(2).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Predicting personality from twitter", "authors": [ { "first": "Jennifer", "middle": [], "last": "Golbeck", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Robles", "suffix": "" }, { "first": "Michon", "middle": [], "last": "Edmondson", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Turner", "suffix": "" } ], "year": 2011, "venue": "2011 IEEE third international conference on privacy, security, risk and trust and 2011 IEEE third international conference on social computing", "volume": "", "issue": "", "pages": "149--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jennifer Golbeck, Cristina Robles, Michon Edmond- son, and Karen Turner. 2011. Predicting personal- ity from twitter. In 2011 IEEE third international conference on privacy, security, risk and trust and 2011 IEEE third international conference on social computing, pages 149-156. IEEE.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The structure of phenotypic personality traits", "authors": [ { "first": "", "middle": [], "last": "Lewis R Goldberg", "suffix": "" } ], "year": 1993, "venue": "American psychologist", "volume": "48", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lewis R Goldberg. 1993. The structure of phenotypic personality traits. 
American psychologist, 48(1):26.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The international personality item pool and the future of public-domain personality measures", "authors": [ { "first": "John", "middle": [ "A" ], "last": "Lewis R Goldberg", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "W", "middle": [], "last": "Herbert", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Eber", "suffix": "" }, { "first": "", "middle": [], "last": "Hogan", "suffix": "" }, { "first": "C", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Ashton", "suffix": "" }, { "first": "Harrison G", "middle": [], "last": "Cloninger", "suffix": "" }, { "first": "", "middle": [], "last": "Gough", "suffix": "" } ], "year": 2006, "venue": "Journal of Research in personality", "volume": "40", "issue": "1", "pages": "84--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lewis R Goldberg, John A Johnson, Herbert W Eber, Robert Hogan, Michael C Ashton, C Robert Cloninger, and Harrison G Gough. 2006. The international personality item pool and the future of public-domain personality measures. Journal of Research in personality, 40(1):84-96.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Personality, Gender, and Age in the Language of Social Media: The Open-Vocabulary Approach", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Andrew Schwartz, Johannes C. Eichstaedt, Margaret L. Kern, Lukasz Dziurzynski, Stephanie M. Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, Martin E. P. Seligman, and Lyle H. Ungar. 2013. 
Personality, Gender, and Age in the Language of Social Media: The Open-Vocabulary Approach.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "user2vec: Social media user representation based on distributed document embeddings", "authors": [ { "first": "Semiha", "middle": [], "last": "Ibrahim R Hallac", "suffix": "" }, { "first": "Betul", "middle": [], "last": "Makinist", "suffix": "" }, { "first": "Galip", "middle": [], "last": "Ay", "suffix": "" }, { "first": "", "middle": [], "last": "Aydin", "suffix": "" } ], "year": 2019, "venue": "2019 International Artificial Intelligence and Data Processing Symposium (IDAP)", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ibrahim R Hallac, Semiha Makinist, Betul Ay, and Galip Aydin. 2019. user2vec: Social media user rep- resentation based on distributed document embed- dings. In 2019 International Artificial Intelligence and Data Processing Symposium (IDAP), pages 1-5. IEEE.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Principal component analysis: a review and recent developments", "authors": [ { "first": "Jorge", "middle": [], "last": "Cadima", "suffix": "" }, { "first": "Ian", "middle": [ "T" ], "last": "Jolliffe1", "suffix": "" } ], "year": 2016, "venue": "Philosophical Transactions of The Royal Society A", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jorge Cadima Ian T. Jolliffe1. 2016. Principal com- ponent analysis: a review and recent developments. 
Philosophical Transactions of The Royal Society A.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Kenton Lee Kristina Toutanova Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenton Lee Kristina Toutanova Jacob Devlin, Ming- Wei Chang. 2019. Bert: Pre-training of deep bidi- rectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The big five personality traits, general mental ability, and career success across the life span", "authors": [ { "first": "A", "middle": [], "last": "Timothy", "suffix": "" }, { "first": "Chad", "middle": [ "A" ], "last": "Judge", "suffix": "" }, { "first": "Carl", "middle": [ "J" ], "last": "Higgins", "suffix": "" }, { "first": "Murray", "middle": [ "R" ], "last": "Thoresen", "suffix": "" }, { "first": "", "middle": [], "last": "Barrick", "suffix": "" } ], "year": 1999, "venue": "Personnel psychology", "volume": "52", "issue": "3", "pages": "621--652", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy A Judge, Chad A Higgins, Carl J Thoresen, and Murray R Barrick. 1999. The big five person- ality traits, general mental ability, and career suc- cess across the life span. 
Personnel psychology, 52(3):621-652.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Latent human traits in the language of social media: An openvocabulary approach", "authors": [ { "first": "Vivek", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Margaret", "middle": [ "L" ], "last": "Kern", "suffix": "" }, { "first": "David", "middle": [], "last": "Stillwell", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Kosinski", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Matz", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" }, { "first": "H Andrew", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 2018, "venue": "PloS one", "volume": "", "issue": "11", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vivek Kulkarni, Margaret L Kern, David Stillwell, Michal Kosinski, Sandra Matz, Lyle Ungar, Steven Skiena, and H Andrew Schwartz. 2018. Latent hu- man traits in the language of social media: An open- vocabulary approach. PloS one, 13(11).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Content analysis of social media: A grounded theory approach", "authors": [ { "first": "S", "middle": [ "L" ], "last": "Linda", "suffix": "" }, { "first": "Wai", "middle": [ "Ming" ], "last": "Lai", "suffix": "" }, { "first": "", "middle": [], "last": "To", "suffix": "" } ], "year": 2015, "venue": "Journal of Electronic Commerce Research", "volume": "16", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda SL Lai and Wai Ming To. 2015. Content analysis of social media: A grounded theory approach. 
Jour- nal of Electronic Commerce Research, 16(2):138.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Deep contextualized word representations", "authors": [], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer Matt Gardner Christopher Clark Ken- ton Lee Luke Zettlemoyer Matthew E. Peters, Mark Neumann. 2018. Deep contextualized word representations. Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Mallet: A machine learning for language toolkit", "authors": [ { "first": "Andrew Kachites", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. Http://mallet.cs.umass.edu.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Facebook as a research tool for the social sciences: Opportunities, challenges, ethical considerations, and practical guidelines", "authors": [ { "first": "Michal", "middle": [], "last": "Samuel D Gosling Vesselin Popov David Stillwell", "suffix": "" }, { "first": "", "middle": [], "last": "Kosinski", "suffix": "" } ], "year": 2015, "venue": "American Psychologist", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel D Gosling Vesselin Popov David Stillwell Michal Kosinski, Sandra C Matz. 2015. Facebook as a research tool for the social sciences: Opportuni- ties, challenges, ethical considerations, and practical guidelines. 
American Psychologist.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Faking it: social desirability response bias in self-report research", "authors": [ { "first": "F", "middle": [], "last": "Thea", "suffix": "" }, { "first": "", "middle": [], "last": "Van De Mortel", "suffix": "" } ], "year": 2008, "venue": "Australian Journal of Advanced Nursing", "volume": "25", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thea F Van de Mortel et al. 2008. Faking it: social de- sirability response bias in self-report research. Aus- tralian Journal of Advanced Nursing, The, 25(4):40.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Detecting careless respondents in webbased questionnaires: Which method to use? Journal of Research in Personality", "authors": [ { "first": "A", "middle": [], "last": "Susan", "suffix": "" }, { "first": "M", "middle": [], "last": "Niessen", "suffix": "" }, { "first": "Rob", "middle": [ "R" ], "last": "Meijer", "suffix": "" }, { "first": "Jorge", "middle": [ "N" ], "last": "Tendeiro", "suffix": "" } ], "year": 2016, "venue": "", "volume": "63", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "A Susan M Niessen, Rob R Meijer, and Jorge N Ten- deiro. 2016. Detecting careless respondents in web- based questionnaires: Which method to use? 
Jour- nal of Research in Personality, 63:1-11.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Automatic personality assessment through social media language", "authors": [ { "first": "Gregory", "middle": [], "last": "Park", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Johannes", "middle": [ "C" ], "last": "Eichstaedt", "suffix": "" }, { "first": "Margaret", "middle": [ "L" ], "last": "Kern", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Kosinski", "suffix": "" }, { "first": "J", "middle": [], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Stillwell", "suffix": "" }, { "first": "H", "middle": [], "last": "Lyle", "suffix": "" }, { "first": "Martin", "middle": [ "Ep" ], "last": "Ungar", "suffix": "" }, { "first": "", "middle": [], "last": "Seligman", "suffix": "" } ], "year": 2015, "venue": "Journal of personality and social psychology", "volume": "108", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Park, H Andrew Schwartz, Johannes C Eich- staedt, Margaret L Kern, Michal Kosinski, David J Stillwell, Lyle H Ungar, and Martin EP Seligman. 2015. Automatic personality assessment through so- cial media language. Journal of personality and so- cial psychology, 108(6):934.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A split questionnaire survey design", "authors": [ { "first": "E", "middle": [], "last": "Trivellore", "suffix": "" }, { "first": "James", "middle": [ "E" ], "last": "Raghunathan", "suffix": "" }, { "first": "", "middle": [], "last": "Grizzle", "suffix": "" } ], "year": 1995, "venue": "Journal of the American Statistical Association", "volume": "90", "issue": "429", "pages": "54--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trivellore E Raghunathan and James E Grizzle. 1995. A split questionnaire survey design. 
Journal of the American Statistical Association, 90(429):54-63.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The power of personality: The comparative validity of personality traits, socioeconomic status, and cognitive ability for predicting important life outcomes", "authors": [ { "first": "W", "middle": [], "last": "Brent", "suffix": "" }, { "first": "", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Nathan R Kuncel", "suffix": "" }, { "first": "Avshalom", "middle": [], "last": "Shiner", "suffix": "" }, { "first": "Lewis R", "middle": [], "last": "Caspi", "suffix": "" }, { "first": "", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2007, "venue": "Perspectives on Psychological science", "volume": "2", "issue": "4", "pages": "313--345", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brent W Roberts, Nathan R Kuncel, Rebecca Shiner, Avshalom Caspi, and Lewis R Goldberg. 2007. The power of personality: The comparative validity of personality traits, socioeconomic status, and cogni- tive ability for predicting important life outcomes. 
Perspectives on Psychological science, 2(4):313- 345.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Personality, gender, and age in the language of social media: The open-vocabulary approach", "authors": [ { "first": "Andrew", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Johannes", "middle": [ "C" ], "last": "Eichstaedt", "suffix": "" }, { "first": "Margaret", "middle": [ "L" ], "last": "Kern", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Dziurzynski", "suffix": "" }, { "first": "M", "middle": [], "last": "Stephanie", "suffix": "" }, { "first": "Megha", "middle": [], "last": "Ramones", "suffix": "" }, { "first": "Achal", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Shah", "suffix": "" }, { "first": "David", "middle": [], "last": "Kosinski", "suffix": "" }, { "first": "", "middle": [], "last": "Stillwell", "suffix": "" }, { "first": "E", "middle": [ "P" ], "last": "Martin", "suffix": "" }, { "first": "", "middle": [], "last": "Seligman", "suffix": "" } ], "year": 2013, "venue": "PloS one", "volume": "8", "issue": "9", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H Andrew Schwartz, Johannes C Eichstaedt, Mar- garet L Kern, Lukasz Dziurzynski, Stephanie M Ra- mones, Megha Agrawal, Achal Shah, Michal Kosin- ski, David Stillwell, Martin EP Seligman, et al. 2013. Personality, gender, and age in the language of social media: The open-vocabulary approach. 
PloS one, 8(9):e73791.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Social collaborative filtering for cold-start recommendations", "authors": [ { "first": "Suvash", "middle": [], "last": "Sedhain", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Sanner", "suffix": "" }, { "first": "Darius", "middle": [], "last": "Braziunas", "suffix": "" }, { "first": "Lexing", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Christensen", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th ACM Conference on Recommender systems", "volume": "", "issue": "", "pages": "345--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suvash Sedhain, Scott Sanner, Darius Braziunas, Lex- ing Xie, and Jordan Christensen. 2014. Social col- laborative filtering for cold-start recommendations. In Proceedings of the 8th ACM Conference on Rec- ommender systems, pages 345-348.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Determining personality traits & privacy concerns from facebook activity", "authors": [ { "first": "Chris", "middle": [], "last": "Sumner", "suffix": "" }, { "first": "Alison", "middle": [], "last": "Byers", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Shearing", "suffix": "" } ], "year": 2011, "venue": "Black Hat Briefings", "volume": "11", "issue": "7", "pages": "197--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Sumner, Alison Byers, and Matthew Shearing. 2011. Determining personality traits & privacy con- cerns from facebook activity. 
Black Hat Briefings, 11(7):197-221.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Overview of proposed task: analyzing users' social media text and question text to predict responses.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Visualizations of question embeddings for pairs of categories. Each point is the embedding for a question, projected onto the first two principal components of the question embeddings. Questions about different factors of the BIG 5-factor model separate relatively cleanly.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Visualizations of opposite-direction questions within one category", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Figure 4: Visualization of embeddings for each pair of categories. Each dot represents a question from the respective BIG 5 category.
The visualizations show that sentence embeddings are able to separate questionnaire questions by category. Visualization of embeddings for both question directions (positive vs. negative) in each category. The visualizations show that sentence embeddings can distinguish the direction of a questionnaire question reasonably well (for the categories Openness, Extraversion and Agreeableness).", "num": null, "uris": null, "type_str": "figure" }, "TABREF2": { "text": "Main experiment prediction results", "content": "
: Prediction results for each category: Openness (O), Agreeableness (A), Conscientiousness (C), Extraversion (E) and Neuroticism (N). |