{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:30:59.227244Z" }, "title": "Determining a Person's Suicide Risk by Voting on the Short-Term History of Tweets for the CLPsych 2021 Shared Task", "authors": [ { "first": "Ulya", "middle": [], "last": "Bayram", "suffix": "", "affiliation": { "laboratory": "", "institution": "Onsekiz Mart University \u00c7anakkale", "location": { "country": "Turkey" } }, "email": "ulya.bayram@comu.edu.tr" }, { "first": "Lamia", "middle": [], "last": "Benhiba", "suffix": "", "affiliation": { "laboratory": "", "institution": "V University in Rabat Rabat", "location": { "region": "Mohammed", "country": "Morocco" } }, "email": "lamia.benhiba@um5.ac.ma" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this shared task, we accept the challenge of constructing models to identify Twitter users who attempted suicide based on their tweets 30 and 182 days before the adverse event's occurrence. We explore multiple machine learning and deep learning methods to identify a person's suicide risk based on the short-term history of their tweets. Taking the real-life applicability of the model into account, we make the design choice of classifying on the tweet level. By voting the tweet-level suicide risk scores through an ensemble of classifiers, we predict the suicidal users 30-days before the event with an 81.8% true-positives rate. Meanwhile, the tweet-level voting falls short on the six-month-long data as the number of tweets with weak suicidal ideation levels weakens the overall suicidal signals in the long term.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this shared task, we accept the challenge of constructing models to identify Twitter users who attempted suicide based on their tweets 30 and 182 days before the adverse event's occurrence. We explore multiple machine learning and deep learning methods to identify a person's suicide risk based on the short-term history of their tweets. Taking the real-life applicability of the model into account, we make the design choice of classifying on the tweet level. By voting the tweet-level suicide risk scores through an ensemble of classifiers, we predict the suicidal users 30-days before the event with an 81.8% true-positives rate. Meanwhile, the tweet-level voting falls short on the six-month-long data as the number of tweets with weak suicidal ideation levels weakens the overall suicidal signals in the long term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Suicide is amongst the most pressing public health issues facing today's society, stressing the need for rapid and effective detection tools. As people are increasingly self-expressing their distress on social media, an unprecedented volume of data is currently available to detect a person's suicide risk (Roy et al., 2020; Tadesse et al., 2020; Luo et al., 2020) . In this shared task, we aim to construct tools to identify suicidal Twitter users (who attempted suicide) based on their tweets collected from spans of 30-days (subtask 1) and six months (subtask 2) before the adverse event's occurrence date (Macavaney et al., 2021) . The small number of users in the labeled collections of subtask 1 (57 suicidal/57 control) and subtask 2 (82 suicidal/82 control) and the scarcity of tweets for some users pose these tasks as small-dataset classification challenges. On that note, Coppersmith et al. 
(2018) reported high performance with deep learning (DL) methods on these collections after enriching them with additional data (418 suicidal/418 control).", "cite_spans": [ { "start": 306, "end": 324, "text": "(Roy et al., 2020;", "ref_id": "BIBREF19" }, { "start": 325, "end": 346, "text": "Tadesse et al., 2020;", "ref_id": "BIBREF20" }, { "start": 347, "end": 364, "text": "Luo et al., 2020)", "ref_id": "BIBREF11" }, { "start": 609, "end": 633, "text": "(Macavaney et al., 2021)", "ref_id": "BIBREF12" }, { "start": 883, "end": 908, "text": "Coppersmith et al. (2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "When formulating our strategy for the challenge, we were motivated by the real-life applicability of the methods. Some social media platforms have already started implementing auto-detection tools to prevent suicide (Ji et al., 2020). These tools continuously monitor new posts for the presence of suicide risk. Therefore, we choose to train the models at the tweet level. Next, we develop a majority voting scheme over the classified tweets to report an overall suicide risk score for a user. We employ simple machine learning (ML) methods and create an ensemble. We also experiment with DL methods to assess whether added complexity would improve the results. Since successful ML applications thrive on feature engineering (Domingos, 2012), we conduct feature selection to evaluate and determine the best feature sets for the models.", "cite_spans": [ { "start": 215, "end": 232, "text": "(Ji et al., 2020)", "ref_id": "BIBREF10" }, { "start": 717, "end": 733, "text": "(Domingos, 2012)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our experiments suggest that majority voting (MV) over tweet-level classification scores is a viable approach for the short-term prediction of suicide risk. We observe that DL methods require plentiful resources despite the small size of the datasets. Simple ML methods with feature selection return satisfactory results, and the ensemble classifier improves the performance further. We also observe that the MV approach falls short on the six-month-long data regardless of the applied model. Yet this limitation provides the valuable insight that suicidal ideation signals are more significant when the date of the suicidal event is closer, which stresses the need for more complex, noise-immune models for longer time-spanning data. In this context, we consider a noise-immune model to be a suicidal ideation detection model that is not affected by tweets lacking suicidal ideation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pre-processing: We clean the tweets by removing user mentions, URLs, punctuation, and non-ASCII characters, then normalize hashtags into words using a probabilistic splitting tool based on English Wikipedia unigram frequencies (Anderson, 2019). We retain stopwords and emojis, as they might provide clues regarding the suicidal ideation of the users (a sketch of this pipeline follows below).", "cite_spans": [ { "start": 227, "end": 243, "text": "(Anderson, 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "2" },
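As an illustration of the pre-processing step above, here is a minimal sketch; the exact regular expressions are our assumptions rather than the authors' released code, while wordninja is the splitting tool cited (Anderson, 2019). Retaining emojis while stripping other non-ASCII characters would additionally require an emoji-aware filter, omitted here for brevity.

```python
# A minimal sketch of the tweet-cleaning step; the regular expressions below
# are our assumptions, not the authors' released code.
import re
import wordninja  # probabilistic splitter based on English Wikipedia unigram frequencies

def clean_tweet(text: str) -> str:
    text = re.sub(r"@\w+", " ", text)                   # remove user mentions
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # remove URLs
    # normalize hashtags into words, e.g. "#cantsleep" -> "cant sleep"
    text = re.sub(r"#(\w+)", lambda m: " ".join(wordninja.split(m.group(1))), text)
    text = re.sub(r"[^\w\s']", " ", text)               # remove punctuation, keep apostrophes
    return re.sub(r"\s+", " ", text).strip()

print(clean_tweet("@friend i really can't anymore #cantsleep https://t.co/x"))
# -> e.g. "i really can't anymore cant sleep"
```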
{ "text": "Experimentation Framework: Before designing the experiments, we face a critical choice: Should we merge all tweets per user, or should we perform the assessment per tweet and then aggregate the scores? To answer this, we consider a real-life risk assessment system. Such a system should provide a score every time someone posts a tweet, and some social media platforms already implement such systems (Ji et al., 2020). Hence, we choose to train the models to classify individual tweets and then apply majority voting (MV) per user to compute a risk score from the tweet scores. Our framework is illustrated in Figure 1. Experiments with Standard ML methods: Before the ML experiments, we initially explore a simple approach that constructs lexical graphs from the training sets and computes how well a given text matches these graphs (Bayram et al., 2018). However, tweets proved unfit for this method due to their low word counts.", "cite_spans": [ { "start": 392, "end": 409, "text": "(Ji et al., 2020)", "ref_id": "BIBREF10" }, { "start": 799, "end": 820, "text": "(Bayram et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 591, "end": 599, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Methods", "sec_num": "2" }, { "text": "As most ML methods depend on learning from features, we select n-gram features where n \u2264 2 for their popularity in suicide studies (O'Dea et al., 2015; De Choudhury et al., 2016; Pestian et al., 2020). For bigrams (n = 2), we apply a sliding window over consecutive words using the NLTK library (Bird et al., 2009). Next, we eliminate infrequent n-grams from the training set to reduce uninformative features (those occurring in \u22643 tweets in the 30-day training set and in \u226410 tweets in the 182-day training set). Subsequently, we scale the features by row-normalizing each feature vector with the square root of the sum of its squared values, i.e., the L2 norm (see the sketch below).", "cite_spans": [ { "start": 131, "end": 151, "text": "(O'Dea et al., 2015;", "ref_id": "BIBREF15" }, { "start": 152, "end": 178, "text": "De Choudhury et al., 2016;", "ref_id": "BIBREF6" }, { "start": 179, "end": 200, "text": "Pestian et al., 2020)", "ref_id": "BIBREF17" }, { "start": 295, "end": 314, "text": "(Bird et al., 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "2" },
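The following is a minimal sketch of the n-gram feature pipeline; the authors built bigrams with NLTK, whereas CountVectorizer with min_df is our stand-in for the same extraction and frequency-filtering steps.

```python
# A minimal sketch of the n-gram feature pipeline, assuming cleaned tweets as
# input; CountVectorizer is our stand-in for the NLTK-based extraction.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import Normalizer

train_tweets = ["i feel so alone tonight", "no one would even notice"]  # hypothetical data

# unigrams and bigrams (n <= 2); on the real training sets, min_df=4 would drop
# n-grams occurring in <=3 tweets (30-day set) and min_df=11 those occurring in
# <=10 tweets (182-day set); min_df=1 keeps this toy example runnable
vectorizer = CountVectorizer(ngram_range=(1, 2), min_df=1)
X_train = vectorizer.fit_transform(train_tweets)

# row-normalize each tweet vector by the square root of its sum of squares (L2 norm)
X_train = Normalizer(norm="l2").fit_transform(X_train)
print(vectorizer.get_feature_names_out())
```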
{ "text": "Among the popular ML methods in the suicide literature is logistic regression (LR) (Walsh et al., 2017; De Choudhury et al., 2016; O'Dea et al., 2015). We select the \"liblinear\" solver with default settings, as it is recommended for small datasets (Buitinck et al., 2013). To cover diverse mathematical frameworks and assumptions, we also include two naive Bayes methods (Gaussian (GNB) and Multinomial (MNB), with default settings) (Buitinck et al., 2013). We also experiment with K-Nearest Neighbors under different weighting (uniform, distance-based) and neighborhood-size (k \u2208 {3, 5, 8}) settings, but we eliminate it due to its low within-dataset results. Similarly, ensemble-learning methods (Adaboost, XGBoost, Random Forest) return underwhelming performance despite parameter tuning and are thus eliminated. Additionally, we evaluate support vector machines (SVM) for their popularity in suicide research (Zhu et al., 2020; Pestian et al., 2020; O'Dea et al., 2015). SVM with an rbf kernel proves successful but requires costly parameter tuning, while linear SVM (lSVM) succeeds in within-dataset evaluations at a lower cost. Consequently, we select sklearn's lSVM (default settings) for the shared task (Buitinck et al., 2013), which returns only binary classification decisions. To convert these to probabilities, we apply probability calibration with logistic regression (CalibratedClassifierCV).", "cite_spans": [ { "start": 79, "end": 99, "text": "(Walsh et al., 2017;", "ref_id": "BIBREF21" }, { "start": 100, "end": 126, "text": "De Choudhury et al., 2016;", "ref_id": "BIBREF6" }, { "start": 127, "end": 146, "text": "O'Dea et al., 2015)", "ref_id": "BIBREF15" }, { "start": 245, "end": 268, "text": "(Buitinck et al., 2013)", "ref_id": "BIBREF3" }, { "start": 430, "end": 453, "text": "(Buitinck et al., 2013)", "ref_id": "BIBREF3" }, { "start": 902, "end": 920, "text": "(Zhu et al., 2020;", "ref_id": "BIBREF22" }, { "start": 921, "end": 942, "text": "Pestian et al., 2020;", "ref_id": "BIBREF17" }, { "start": 943, "end": 962, "text": "O'Dea et al., 2015)", "ref_id": "BIBREF15" }, { "start": 1209, "end": 1232, "text": "(Buitinck et al., 2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "2" }, { "text": "Feature selection: Following the ML method selections, we evaluate the effect of feature selection on ML performance. To compute feature importance scores, we again use LR. For each selected number of features, we gather the top suicidal and control features. Next, we train and evaluate the ML methods in a leave-one-out (LOO) framework using those features. The feature selection results of the selected ML methods for the two subtasks are shown in Figure 2. We select the best ML models from these plots.", "cite_spans": [], "ref_spans": [ { "start": 440, "end": 448, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Methods", "sec_num": "2" }, { "text": "Experiments with Ensemble: Ensemble classifiers have previously shown success in ML challenges (Niculescu-Mizil et al., 2009). Since every classifier renders predicted probabilities for every data point, we build an ensemble classifier that combines the results of the four selected ML methods (LR, GNB, MNB, lSVM). We adopt a weighted ensemble method in which the weight of each classifier is set proportional to its performance (Rokach, 2010). We call this method weighted Ensemble (wEns); a sketch follows below.", "cite_spans": [ { "start": 91, "end": 121, "text": "(Niculescu-Mizil et al., 2009)", "ref_id": "BIBREF13" }, { "start": 419, "end": 433, "text": "(Rokach, 2010)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "2" },
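A minimal sketch of such a performance-weighted ensemble follows. The paper states only that the weights are proportional to performance; using validation F1 scores as the weights and taking a weighted average of predicted probabilities are our assumptions.

```python
# A minimal sketch of the weighted ensemble (wEns): a performance-weighted
# average of per-tweet risk probabilities. Using validation F1 scores as the
# weights is our assumption; the paper says only "proportional to performance".
import numpy as np

def wens_predict(models, val_f1_scores, X):
    weights = np.asarray(val_f1_scores, dtype=float)
    weights = weights / weights.sum()  # normalize so the output stays in [0, 1]
    per_model = np.stack([m.predict_proba(X)[:, 1] for m in models])  # (n_models, n_tweets)
    return weights @ per_model         # ensemble risk probability per tweet

# e.g., models = [lr, gnb, mnb, calibrated_lsvm] fitted on the n-gram features,
# and val_f1_scores = their respective F1 scores from the LOO experiments.
```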
{ "text": "Experiments with DL: To measure whether results would improve with complexity, we also evaluate shallow DL methods. We use the pre-trained transformer model Bert-base-uncased (Devlin et al., 2018) to capture the linguistic features of the tweets. The embeddings are then fed to a recurrent-unit-based architecture to learn text sequence order. We experiment with two types of recurrent neural networks (RNNs): Long Short-Term Memory (LSTM) (Gers et al., 1999), and the Gated Recurrent Unit (GRU), known for overcoming the vanishing and exploding gradient problems faced by vanilla RNNs during training (Cho et al., 2014). After assessing various configurations of both architectures, we settle on a multi-layer bi-directional GRU with the following characteristics: embedding dimension=256, number of layers=2, batch size=32. We call this model GRU-Bert (a sketch follows below). We include a dropout layer to regularize learning and a fully connected layer with a Sigmoid activation to produce the classification for each tweet. Finally, we apply the same majority voting framework to infer the classification at the user level. We use the Pytorch (Paszke et al., 2019) and scikit-learn (Buitinck et al., 2013) libraries for implementation.", "cite_spans": [ { "start": 177, "end": 198, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" }, { "start": 446, "end": 465, "text": "(Gers et al., 1999)", "ref_id": "BIBREF9" }, { "start": 600, "end": 618, "text": "(Cho et al., 2014)", "ref_id": "BIBREF4" }, { "start": 1117, "end": 1138, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF16" }, { "start": 1156, "end": 1179, "text": "(Buitinck et al., 2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "2" },
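Below is a minimal sketch of the GRU-Bert idea under the stated configuration (bi-directional GRU, 2 layers, dimension 256, dropout, sigmoid-activated fully connected layer). Freezing BERT, concatenating the final forward and backward hidden states, and the dropout rate are our assumptions, not reported settings.

```python
# A minimal sketch of GRU-Bert: BERT token embeddings feeding a 2-layer
# bi-directional GRU with dropout and a sigmoid head. Freezing BERT, the
# pooling strategy, and the dropout rate are our assumptions.
import torch
import torch.nn as nn
from transformers import BertModel

class GRUBert(nn.Module):
    def __init__(self, hidden_dim=256, n_layers=2, dropout=0.25):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():   # assumption: BERT acts as a frozen encoder
            p.requires_grad = False
        self.gru = nn.GRU(self.bert.config.hidden_size, hidden_dim,
                          num_layers=n_layers, bidirectional=True,
                          batch_first=True, dropout=dropout)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(2 * hidden_dim, 1)  # 2x: forward + backward directions

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():
            embedded = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        _, hidden = self.gru(embedded)                      # (n_layers * 2, batch, hidden_dim)
        last = torch.cat([hidden[-2], hidden[-1]], dim=1)   # final forward/backward states
        return torch.sigmoid(self.fc(self.dropout(last)))  # per-tweet risk probability
```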
{ "text": "Before training each classifier, we employ the best-performing top features from Figure 2, where every classifier has its own most fitting top features for each subtask. Next, we construct a LOO cross-validation framework for within-dataset evaluations. 1 It is important to note that, in each step of the LOO, we choose new user IDs for evaluation and completely exclude all of their tweets from the training sets, to prevent the ML methods from learning the way a particular person drafts tweets. That means the within-dataset LOO results of a subtask are reported over all users of the labeled set. Moreover, the labeled datasets have more users than the unlabeled test sets per subtask (e.g. 57 vs. 11 suicidal users in subtask 1). Hence, we expect a large difference in magnitude between the within-dataset and the test results. The within-dataset evaluation results of the selected methods are in Table 1. For subtask 1, we obtain the best LOO cross-validation score from the wEns method, which combines the results of the four ML methods (LR, MNB, GNB, lSVM) in a way that improves upon each of them. Meanwhile, GRU-Bert and MNB return the lowest false-positive rates (FPR) for this subtask, which might be a critical rate to consider in real-life applications on social media platforms. The LOO results of subtask 2 in Table 1 show that wEns returns the best scores for the longer-spanning dataset as well, where LR returns the best FPR and GNB returns the highest true-positive rate (TPR). Based on the LOO results, we select the three methods we were allowed to submit for the evaluation on the test set: LR, wEns, and GRU-Bert. We choose LR and wEns for their high performance in the LOO experiments, while we select GRU-Bert to measure how a DL method would generalize over the test sets. The baseline classifier provided by the organizers is also a logistic regression. However, it performs the classification over the merged tweets of each user and is therefore different from our implementation of LR. In Table 2, wEns appears to provide the best F1, F2, and TPR scores on the test set of subtask 1, while our LR outperforms the baseline method's AUC. While these methods generalize well on the 30-day test set, the results are less successful for subtask 2. The wEns method performs the same as the baseline in terms of TPR, but the rest of its scores are lower than the baseline's.", "cite_spans": [ { "start": 254, "end": 255, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 85, "end": 93, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 888, "end": 895, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1322, "end": 1329, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 2011, "end": 2018, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "In subtask 1, the test set results show that feature selection can considerably enhance the performance of ML models compared to the baseline. We also find that the ensemble classifier is comparably better than the baseline in this subtask. Meanwhile, though the baseline of CLPsych2021 is the same as our LR, our additional MV and feature selection together enable LR to substantially outperform the baseline. These successes of simple ML methods indicate that a collection of tweets from within 30 days of a suicidal event is sufficient to capture the existence of suicidal ideation, which is an important finding for future real-life suicide prevention applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "In contrast to the observations from subtask 1, our test results on subtask 2 are unsatisfactory. Yet, they provide the valuable insight that suicidal signals are more significant in the short term, while older tweets lacking suicidal ideation generate noise. This insight suggests the need to account for a time-domain aspect. To investigate the viability of this claim, we experiment with a simple time-decay coefficient in the MV framework and evaluate it through LR on the test set. We multiply each vote by the coefficient 2^(-timeDiff/halfLife), where timeDiff is the number of days between the current and last tweets, and halfLife (=7 days) is a hyperparameter that reflects the weight of a vote in the final suicide risk score of a user (a worked sketch follows below). Initial experiments show that even this simple time-decay coefficient improves the test results significantly. This observation suggests that tweet dates are critical features for this subtask and should be included in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" },
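As a worked illustration of this coefficient, the sketch below aggregates per-tweet votes into one user-level score; the variable names and the normalization by the total weight are ours, not the authors' code.

```python
# A worked sketch of the time-decayed vote: each tweet's vote is scaled by
# 2**(-timeDiff/halfLife), with timeDiff the tweet's age in days relative to
# the user's last tweet. Normalizing by the total weight is our assumption.
import numpy as np

def time_decayed_risk(tweet_probs, tweet_ages_days, half_life=7.0):
    """Aggregate per-tweet risk probabilities into one user-level score."""
    weights = 2.0 ** (-np.asarray(tweet_ages_days, dtype=float) / half_life)
    votes = np.asarray(tweet_probs, dtype=float)
    return float(np.sum(weights * votes) / np.sum(weights))

# A tweet posted 7 days before the user's last tweet counts half as much:
print(time_decayed_risk([0.9, 0.2], [0, 7]))  # (0.9*1.0 + 0.2*0.5) / 1.5 = 0.67
```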
{ "text": "Notwithstanding, on both subtasks, the shallow DL methods we experimented with perform poorly. These results could be attributed to overfitting on the small dataset and to noise sensitivity on the longer time-spanning dataset. Additionally, regardless of the dataset size, these methods proved computationally expensive. As the within-dataset experiments using simple ML methods outperformed these expensive shallow DL methods, we excluded the latter from the test set evaluation. Future work on DL will include deeper, more complex, and noise-immune methods that could integrate convolutional neural networks (CNNs), deeper LSTM or GRU layers, and experiments with various word embedding models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "If we compare our findings with those in Coppersmith et al. (2018), we observe different results in terms of short-term versus long-term dataset classification. We attribute these different outcomes to the fact that the original study optimizes its design for detecting trait-level suicide risk (relevant to risk at any point in time), whereas we endeavor to identify suicidal ideation at the state level (the presence of immediate risk). This design choice, along with tweet-level classification, enabled our model to recognize suicidal nuances in short-term tweets. Meanwhile, manual inspection (reading and interpreting the tweets) did not reveal suicidal ideation in most of these tweets, owing to their noisy and ambiguous nature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "In this shared task, we investigate various models for identifying suicide risk based on users' tweets. Inspired by real-life applications, we focus on assessing suicide risk at the tweet level. Experimental results reveal that the ensemble classifier can identify suicidal users from their 30-day tweets with high performance, demonstrating the power of majority voting over tweet-level classifications for short-term suicide risk detection. Meanwhile, we construe from the underwhelming results on the six-month dataset that these models are more sensitive to signals relevant to short-term risk than to those relevant to long-term risk. In future work, we will incorporate a temporal aspect to improve the noise immunity of our models, and we will continue experimenting with more complex models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Within-dataset evaluation results of the selected ML and weighted ensemble methods are obtained from LOO cross-validation. For GRU-Bert, in contrast, the collections were split into training, validation, and test sets at a 70:10:20 ratio.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The organizers are particularly grateful to the users who donated data to the OurDataHelps project without whom this work would not be possible, to Qntfy for supporting the OurDataHelps project and making the data available, to NORC for creating and administering the secure infrastructure, and to Amazon for supporting this research with computational resources on AWS. The authors are thankful to the anonymous reviewers for their constructive comments and valuable suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Secure access to the shared task dataset was provided with IRB approval under University of Maryland, College Park protocol 1642625.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethics Statement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "wordninja Python library", "authors": [ { "first": "Derek", "middle": [], "last": "Anderson", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Derek Anderson. 2019. wordninja Python library. https://github.com/keredson/wordninja.
[Online; accessed 11-March-2021].", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A lexical network approach for identifying suicidal ideation in clinical interview transcripts", "authors": [ { "first": "Ulya", "middle": [], "last": "Bayram", "suffix": "" }, { "first": "Ali", "middle": [ "A" ], "last": "Minai", "suffix": "" }, { "first": "John", "middle": [], "last": "Pestian", "suffix": "" } ], "year": 2018, "venue": "International Conference on Complex Systems", "volume": "", "issue": "", "pages": "165--172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ulya Bayram, Ali A Minai, and John Pestian. 2018. A lexical network approach for identifying suicidal ideation in clinical interview transcripts. In International Conference on Complex Systems, pages 165-172. Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Natural language processing with Python: analyzing text with the natural language toolkit", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O'Reilly Media, Inc.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "API design for machine learning software: experiences from the scikit-learn project", "authors": [ { "first": "Lars", "middle": [], "last": "Buitinck", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Louppe", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "Fabian", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "Vlad", "middle": [], "last": "Niculae", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "Jaques", "middle": [], "last": "Grobler", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Layton", "suffix": "" }, { "first": "Jake", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "Arnaud", "middle": [], "last": "Joly", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Holt", "suffix": "" }, { "first": "Ga\u00ebl", "middle": [], "last": "Varoquaux", "suffix": "" } ], "year": 2013, "venue": "ECML PKDD Workshop: Languages for Data Mining and Machine Learning", "volume": "", "issue": "", "pages": "108--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake VanderPlas, Arnaud Joly, Brian Holt, and Ga\u00ebl Varoquaux. 2013. API design for machine learning software: experiences from the scikit-learn project.
In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pages 108-122.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "On the properties of neural machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.1259" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Natural language processing of social media as screening for suicide risk", "authors": [ { "first": "Glen", "middle": [], "last": "Coppersmith", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Leary", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Crutchley", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Fine", "suffix": "" } ], "year": 2018, "venue": "Biomedical informatics insights", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Glen Coppersmith, Ryan Leary, Patrick Crutchley, and Alex Fine. 2018. Natural language processing of social media as screening for suicide risk. Biomedical informatics insights, 10:1178222618792860.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Discovering shifts to suicidal ideation from mental health content in social media", "authors": [ { "first": "Munmun", "middle": [], "last": "De Choudhury", "suffix": "" }, { "first": "Emre", "middle": [], "last": "Kiciman", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Glen", "middle": [], "last": "Coppersmith", "suffix": "" }, { "first": "Mrinal", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "2098--2110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Munmun De Choudhury, Emre Kiciman, Mark Dredze, Glen Coppersmith, and Mrinal Kumar. 2016. Discovering shifts to suicidal ideation from mental health content in social media. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pages 2098-2110. ACM.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding.
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A few useful things to know about machine learning", "authors": [ { "first": "Pedro", "middle": [], "last": "Domingos", "suffix": "" } ], "year": 2012, "venue": "Communications of the ACM", "volume": "55", "issue": "10", "pages": "78--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedro Domingos. 2012. A few useful things to know about machine learning. Communications of the ACM, 55(10):78-87.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning to forget: Continual prediction with lstm", "authors": [ { "first": "Felix", "middle": [ "A" ], "last": "Gers", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" }, { "first": "Fred", "middle": [], "last": "Cummins", "suffix": "" } ], "year": 1999, "venue": "9th International Conference on Artificial Neural Networks: ICANN '99", "volume": "", "issue": "", "pages": "850--855", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix A Gers, J\u00fcrgen Schmidhuber, and Fred Cummins. 1999. Learning to forget: Continual prediction with lstm. In 9th International Conference on Artificial Neural Networks: ICANN '99, pages 850-855. IET.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Suicidal ideation detection: A review of machine learning methods and applications", "authors": [ { "first": "Shaoxiong", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Shirui", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Xue", "middle": [], "last": "Li", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Long", "suffix": "" }, { "first": "Zi", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "IEEE Transactions on Computational Social Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaoxiong Ji, Shirui Pan, Xue Li, Erik Cambria, Guodong Long, and Zi Huang. 2020. Suicidal ideation detection: A review of machine learning methods and applications. IEEE Transactions on Computational Social Systems.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Exploring temporal suicidal behavior patterns on social media: Insight from twitter analytics", "authors": [ { "first": "Jianhong", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Jingcheng", "middle": [], "last": "Du", "suffix": "" }, { "first": "Cui", "middle": [], "last": "Tao", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yaoyun", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "Health informatics journal", "volume": "26", "issue": "2", "pages": "738--752", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianhong Luo, Jingcheng Du, Cui Tao, Hua Xu, and Yaoyun Zhang. 2020. Exploring temporal suicidal behavior patterns on social media: Insight from twitter analytics.
Health informatics journal, 26(2):738-752.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Community-level research on suicidality prediction in a secure environment: Overview of the CLPsych 2021 shared task", "authors": [ { "first": "Sean", "middle": [], "last": "Macavaney", "suffix": "" }, { "first": "Anjali", "middle": [], "last": "Mittu", "suffix": "" }, { "first": "Glen", "middle": [], "last": "Coppersmith", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Leintz", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sean Macavaney, Anjali Mittu, Glen Coppersmith, Jeff Leintz, and Philip Resnik. 2021. Community-level research on suicidality prediction in a secure environment: Overview of the CLPsych 2021 shared task. In Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2021). Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Winning the kdd cup orange challenge with ensemble selection", "authors": [ { "first": "Alexandru", "middle": [], "last": "Niculescu-Mizil", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Perlich", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Swirszcz", "suffix": "" }, { "first": "Vikas", "middle": [], "last": "Sindhwani", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Prem", "middle": [], "last": "Melville", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Jianying", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Moninder", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandru Niculescu-Mizil, Claudia Perlich, Grzegorz Swirszcz, Vikas Sindhwani, Yan Liu, Prem Melville, Dong Wang, Jing Xiao, Jianying Hu, Moninder Singh, et al. 2009. Winning the kdd cup orange challenge with ensemble selection. In KDD-Cup 2009", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Detecting suicidality on twitter", "authors": [ { "first": "Bridianne", "middle": [], "last": "O'Dea", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Philip", "middle": [ "J" ], "last": "Batterham", "suffix": "" }, { "first": "Alison", "middle": [ "L" ], "last": "Calear", "suffix": "" }, { "first": "Cecile", "middle": [], "last": "Paris", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Christensen", "suffix": "" } ], "year": 2015, "venue": "", "volume": "2", "issue": "", "pages": "183--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bridianne O'Dea, Stephen Wan, Philip J Batterham, Alison L Calear, Cecile Paris, and Helen Christensen. 2015. Detecting suicidality on twitter.
Internet Interventions, 2(2):183-188.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A machine learning approach to identifying changes in suicidal language", "authors": [ { "first": "John", "middle": [], "last": "Pestian", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Santel", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Sorter", "suffix": "" }, { "first": "Ulya", "middle": [], "last": "Bayram", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Connolly", "suffix": "" }, { "first": "Tracy", "middle": [], "last": "Glauser", "suffix": "" }, { "first": "Melissa", "middle": [], "last": "Delbello", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Tamang", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2020, "venue": "Suicide and Life-Threatening Behavior", "volume": "50", "issue": "5", "pages": "939--947", "other_ids": { "DOI": [ "10.1111/sltb.12642" ] }, "num": null, "urls": [], "raw_text": "John Pestian, Daniel Santel, Michael Sorter, Ulya Bayram, Brian Connolly, Tracy Glauser, Melissa DelBello, Suzanne Tamang, and Kevin Cohen. 2020.
A machine learning approach to identifying changes in suicidal language. Suicide and Life-Threatening Behavior, 50(5):939-947.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Ensemble-based classifiers", "authors": [ { "first": "Lior", "middle": [], "last": "Rokach", "suffix": "" } ], "year": 2010, "venue": "Artificial Intelligence Review", "volume": "33", "issue": "", "pages": "1--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lior Rokach. 2010. Ensemble-based classifiers. Artificial Intelligence Review, 33(1):1-39.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A machine learning approach predicts future risk to suicidal ideation from social media data", "authors": [ { "first": "Arunima", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Katerina", "middle": [], "last": "Nikolitch", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Mcginn", "suffix": "" }, { "first": "Safiya", "middle": [], "last": "Jinah", "suffix": "" }, { "first": "William", "middle": [], "last": "Klement", "suffix": "" }, { "first": "Zachary", "middle": [ "A" ], "last": "Kaminsky", "suffix": "" } ], "year": 2020, "venue": "NPJ digital medicine", "volume": "3", "issue": "1", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arunima Roy, Katerina Nikolitch, Rachel McGinn, Safiya Jinah, William Klement, and Zachary A Kaminsky. 2020. A machine learning approach predicts future risk to suicidal ideation from social media data. NPJ digital medicine, 3(1):1-12.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Detection of suicide ideation in social media forums using deep learning", "authors": [ { "first": "Michael", "middle": [ "Mesfin" ], "last": "Tadesse", "suffix": "" }, { "first": "Hongfei", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2020, "venue": "Algorithms", "volume": "13", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Mesfin Tadesse, Hongfei Lin, Bo Xu, and Liang Yang. 2020. Detection of suicide ideation in social media forums using deep learning. Algorithms, 13(1):7.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Predicting risk of suicide attempts over time through machine learning", "authors": [ { "first": "Colin", "middle": [ "G" ], "last": "Walsh", "suffix": "" }, { "first": "Jessica", "middle": [ "D" ], "last": "Ribeiro", "suffix": "" }, { "first": "Joseph", "middle": [ "C" ], "last": "Franklin", "suffix": "" } ], "year": 2017, "venue": "Clinical Psychological Science", "volume": "5", "issue": "3", "pages": "457--469", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin G Walsh, Jessica D Ribeiro, and Joseph C Franklin. 2017. Predicting risk of suicide attempts over time through machine learning.
Clinical Psychological Science, 5(3):457-469.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Comparisons of different classification algorithms while using text mining to screen psychiatric inpatients with suicidal behaviors", "authors": [ { "first": "H", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "X", "middle": [], "last": "Xia", "suffix": "" }, { "first": "J", "middle": [], "last": "Yao", "suffix": "" }, { "first": "H", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Q", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Q", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "Journal of psychiatric research", "volume": "124", "issue": "", "pages": "123--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "H Zhu, X Xia, J Yao, H Fan, Q Wang, and Q Gao. 2020. Comparisons of different classification algorithms while using text mining to screen psychiatric inpatients with suicidal behaviors. Journal of psychiatric research, 124:123-130.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Classification framework used to compute person-level risk scores from the tweet-level scores.", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "Feature selection evaluations on the labeled datasets of the two subtasks.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "num": null, "text": "Within-dataset evaluation results.", "content": "
F1 F2 TPR FPR AUC
Subtask 1: (30 days)
LR 78.0 81.6 84.2 31.6 80.8
GNB 81.2 88.8 94.7 38.6 89.3
MNB 83.1 84.8 86.0 21.0 86.8
lSVM 81.9 87.2 91.2 31.6 88.6
wEns 85.0 90.6 94.7 28.1 93.2
GRU-Bert 81.2 82.2 83.1 21.7 84.0
Subtask 2: (6 months)
LR 81.9 83.9 85.4 23.2 85.5
GNB 69.6 83.0 95.1 78.0 81.5
MNB 75.7 77.1 78.0 28.0 82.8
lSVM 78.6 87.1 93.9 45.1 84.6
wEns 81.7 88.0 92.7 34.1 88.5
GRU-Bert 74.5 75.4 76.0 28.6 77.5
", "type_str": "table", "html": null }, "TABREF1": { "num": null, "text": "Test results over unlabeled data and the results from the baseline method of CLPsych2021.", "content": "
F1 F2 TPR FPR AUC
Subtask 1: (30 days)
Baseline 63.6 63.6 63.6 36.4 66.1
LR 63.6 63.6 63.6 36.4 74.0
wEns 69.2 76.3 81.8 54.5 70.2
Subtask 2: (6 months)
Baseline 71.0 72.4 73.3 33.3 76.4
LR 64.5 65.8 66.7 40.0 56.9
wEns 59.5 67.1 73.3 73.3 58.2
", "type_str": "table", "html": null } } } }