{ "paper_id": "D18-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:50:11.464066Z" }, "title": "Detecting Gang-Involved Escalation on Social Media Using Context", "authors": [ { "first": "Serina", "middle": [], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "" }, { "first": "Ruiqi", "middle": [], "last": "Zhong", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "" }, { "first": "Ethan", "middle": [], "last": "Adams", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "" }, { "first": "Fei-Tzin", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "" }, { "first": "Siddharth", "middle": [], "last": "Varia", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "" }, { "first": "Desmond", "middle": [], "last": "Patton", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "" }, { "first": "William", "middle": [], "last": "Frey", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "w.frey@columbia.edu" }, { "first": "Chris", "middle": [], "last": "Kedzie", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "kedzie@cs.columbia.edu" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "" }, { "first": "Serina", "middle": [], "last": "Contact", "suffix": "", "affiliation": {}, "email": "" }, { "first": "", "middle": [], "last": "Chang", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Gang-involved youth in cities such as Chicago have increasingly turned to social media to post about their experiences and intents online. In some situations, when they experience the loss of a loved one, their online expression of emotion may evolve into aggression towards rival gangs and ultimately into real-world violence. In this paper, we present a novel system for detecting Aggression and Loss in social media. Our system features the use of domainspecific resources automatically derived from a large unlabeled corpus, and contextual representations of the emotional and semantic content of the user's recent tweets as well as their interactions with other users. Incorporating context in our Convolutional Neural Network (CNN) leads to a significant improvement. 1 We will make tweet IDs for the data available to researchers who sign an MOU specifying their intended use of the data and their agreement with our ethical guidelines.", "pdf_parse": { "paper_id": "D18-1005", "_pdf_hash": "", "abstract": [ { "text": "Gang-involved youth in cities such as Chicago have increasingly turned to social media to post about their experiences and intents online. In some situations, when they experience the loss of a loved one, their online expression of emotion may evolve into aggression towards rival gangs and ultimately into real-world violence. In this paper, we present a novel system for detecting Aggression and Loss in social media. 
Our system features the use of domainspecific resources automatically derived from a large unlabeled corpus, and contextual representations of the emotional and semantic content of the user's recent tweets as well as their interactions with other users. Incorporating context in our Convolutional Neural Network (CNN) leads to a significant improvement. 1 We will make tweet IDs for the data available to researchers who sign an MOU specifying their intended use of the data and their agreement with our ethical guidelines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In cities such as Chicago, gang-involved youth have increasingly turned to social media to post about their experience, often expressing grief when friends or family members are shot and killed. As grief turns to anger, their posts turn to retribution and ultimately to plans for revenge (Patton et al., 2018b) . Research in this space has shown that online posts often affect life in the real world (Moule et al., 2013; Patton et al., 2013; Pyrooz et al., 2015; Patton et al., , 2017a . In some communities, violence outreach workers manually scour online spaces to identify such possibilities and intervene to diffuse situations. A tool that identifies Aggression or Loss posts could help them filter irrelevant posts, but resources to develop a tool like this are scarce.", "cite_spans": [ { "start": 288, "end": 310, "text": "(Patton et al., 2018b)", "ref_id": "BIBREF25" }, { "start": 400, "end": 420, "text": "(Moule et al., 2013;", "ref_id": "BIBREF15" }, { "start": 421, "end": 441, "text": "Patton et al., 2013;", "ref_id": "BIBREF18" }, { "start": 442, "end": 462, "text": "Pyrooz et al., 2015;", "ref_id": "BIBREF29" }, { "start": 463, "end": 485, "text": "Patton et al., , 2017a", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present automatic approaches for constructing resources and context features in this domain, and apply them to detecting Aggression and Loss in the social media posts of ganginvolved youth in Chicago. We exploit both a small labeled dataset (4,936 posts) and a much larger unlabeled dataset (approximately 1 million posts), which we constructed using a method that enabled us to gather Twitter posts representative of the community we study. We incorporate our approaches into a CNN system, as well as a Support Vector Machine (SVM) to match the architecture of prior work, thus enabling analysis of the impact in different frameworks 1 .", "cite_spans": [ { "start": 653, "end": 654, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Key features of our system are the use of domainspecific word embeddings and a lexicon automatically induced from our unlabeled dataset. When classifying an individual tweet, our system considers the content and emotional impact of the tweets in the author's recent history. If applicable, our system additionally takes into account a model of the pairwise interactions between the author and other users in the tweet referenced via either retweet or mention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We compare our approaches with previous work that used a smaller dataset (800 tweets) and handcurated resources with an SVM (Blevins et al., 2016) . 
By integrating our induced domain-specific and context information in a CNN, we achieve a significant increase over their reported results.", "cite_spans": [ { "start": 124, "end": 146, "text": "(Blevins et al., 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 A new labeled dataset, six times larger than that of prior work;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Domain-specific resources, automatically induced from our constructed unlabeled dataset;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Context features that capture semantic and emotion content in the user's recent posts as well as their interactions with other users in the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach brings us one step closer to building a useful tool that can help reduce gang violence in urban neighborhoods. In the remainder of the paper, we present related work, the dataset that we used, and our methodology. We conclude with an error analysis and a discussion of the impact of our contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Researchers have begun to explore how online data can be used to help prevent gun violence. Pavlick et al. 2016 are creating the Gun Violence Data Base by crowdsourcing annotations on newspaper articles that report on gun violence, labeling the sections of text that report on incidents, the shooter, and the victim. Researchers have also explored identifying deaths from police shootings with semisupervised methods for both CNNs and logistic regression (Keith et al., 2017) and found that logistic regression using a soft-labeling approach gave the best results. Researchers studying gun control issues analyzed social media for posts related to any issue around guns in the year following the Sandy Hook elementary school shooting (Benton et al., 2016) and argued that online media can be used to understand trends in gun violence and gun-related behaviors .", "cite_spans": [ { "start": 92, "end": 111, "text": "Pavlick et al. 2016", "ref_id": "BIBREF26" }, { "start": 455, "end": 475, "text": "(Keith et al., 2017)", "ref_id": "BIBREF12" }, { "start": 734, "end": 755, "text": "(Benton et al., 2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Closely related research aims to automatically identify gang members' Twitter profiles (Balasuriya et al., 2016) . After collecting profiles using bootstrapping, they trained different classifiers on the tweets and meta-information about the authors. 
Further research analyzes the social networks of gangs (Radil et al., 2010) and predicts gang affiliation based on the analysis of graffiti style features (Piergallini et al., 2014) .", "cite_spans": [ { "start": 87, "end": 112, "text": "(Balasuriya et al., 2016)", "ref_id": "BIBREF2" }, { "start": 306, "end": 326, "text": "(Radil et al., 2010)", "ref_id": "BIBREF32" }, { "start": 406, "end": 432, "text": "(Piergallini et al., 2014)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The most relevant work in automatically analyzing social media posts by gang-involved youth is that of Blevins et al. 2016 . The labeled dataset that Blevins and collaborators used is extremely challenging, in part due to its size, but also because it contains text in a particular dialect of English -African American English (AAE) -which has very little core NLP tool support. Other research investigating the development of tools for understanding AAE in social media (Blodgett et al., 2016) shows that existing tools (e.g., dependency parsers) perform poorly on this language. Previous work by Patton on a subset of our dataset notes that due to the linguistic style, tweets from gang-involved youth in Chicago can be challenging for outsiders to interpret and thus are often open to misinterpretation and potential criminalization (Patton et al., 2017b) .", "cite_spans": [ { "start": 103, "end": 122, "text": "Blevins et al. 2016", "ref_id": "BIBREF4" }, { "start": 471, "end": 494, "text": "(Blodgett et al., 2016)", "ref_id": "BIBREF5" }, { "start": 836, "end": 858, "text": "(Patton et al., 2017b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The challenges of interpreting our data are further compounded by the usual difficulties with Twitter data. Twitter data is sometimes handled by translating it to Standard American English (SAE) through the use of a phrasebook. The NoSlang Slang Translator (NoSlang, 2018b), and the accompanying NoSlang Drug Slang Translator (NoSlang, 2018a) , have been used in other tasks to translate social media communication (Sarker et al., 2016) , (Han and Baldwin, 2011) .", "cite_spans": [ { "start": 326, "end": 342, "text": "(NoSlang, 2018a)", "ref_id": "BIBREF16" }, { "start": 415, "end": 436, "text": "(Sarker et al., 2016)", "ref_id": "BIBREF35" }, { "start": 439, "end": 462, "text": "(Han and Baldwin, 2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To engineer features for an SVM classifier, Blevins et al. 2016 learned a part-of-speech (POS) tagger for their data and constructed a word level translation phrasebook to map emojis and slang to the Dictionary of Affect in Language (DAL) in order to identify their emotion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In contrast to Blevins' translation approach, we leverage our large unlabeled dataset to automatically induce resources, such as word embeddings, that function well within the domain of our task. Previous research on domain-specific word embeddings includes work in cybersecurity (Roy et al., 2017) , disease surveillance (Ghosh et al., 2016) , and construction (Tixier et al., 2016) . 
These domain-specific word embeddings tend to improve performance on tasks within that domain.", "cite_spans": [ { "start": 280, "end": 298, "text": "(Roy et al., 2017)", "ref_id": "BIBREF34" }, { "start": 322, "end": 342, "text": "(Ghosh et al., 2016)", "ref_id": "BIBREF9" }, { "start": 362, "end": 383, "text": "(Tixier et al., 2016)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Context has been used in previous research on detecting hate speech in social media. Qian et al. 2018 found significant improvements by collecting the entire history of a user's tweets and feeding them to a encoder to create an intra-user representation, which was used as input to a Bidirectional LSTM. They also used a representation of tweets similar to the tweet being classified. While their approach captures a user profile based on everything the user has posted, in our approach we investigate how the recent history of tweets and interactions with others can improve classification. Others also make use of a user profile, though not one learned from unlabeled data (Dadvar et al., 2013) .", "cite_spans": [ { "start": 675, "end": 696, "text": "(Dadvar et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our dataset consists of two parts: first, a collection of 4,936 tweets authored or retweeted by Gakirah Barnes, a powerful female Chicago gang member, and her top communicators, as well as ad-ditional Twitter users in the same demographic, annotated by social work researchers who have been studying Gakirah and the associated Chicago gangs. Second, we use a much larger collection of approximately one million unlabeled tweets automatically scraped from 279 users in the same social network. This social network is comprised of 214 users snowball-sampled from Gakirah Barnes' top 14 communicators. Traditionally, snowball sampling has been used to recruit hard-to-reach research subjects (Atkinson and Flint, 2001 ) and we have adapted it for social media. The remaining 65 users were added to this network by retaining those with the highest IQI score 2 from the full list of Gakirah's Twitter followers. Our tweets thus form a representative sample of Twitter dialogue between youth from Chicago neighborhoods with high levels of gang activity during that time period.", "cite_spans": [ { "start": 689, "end": 714, "text": "(Atkinson and Flint, 2001", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "The social work researchers performed a detailed, qualitative analysis of a subset of the dataset, with a focus on analyzing how context influences determination of a label. For example, they note that an aggressive tweet may reference a previous event, and will often use coded language to do so. Since much of the language used in our data differs significantly from standard American English, local youth active in similar environments served as consultants to answer questions about the language, as they were able to interpret the slang terms present in these tweets. The social work researchers conducted a fine-grained analysis using an online tool for annotation, identifying insults, threats, bragging, hypervigilance and challenges to authority, all of which were collapsed into a single category, Aggression. Posts including distress, sadness, loneliness and death were collapsed into the category Loss. 
The Other category includes discussion of other aspects of their life, such as friendships, relationships, drugs, general conversations, and happiness. We developed our system (as did Blevins et al. 2016) on the collapsed labels, as the task is difficult even with three-way categorization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "Each tweet in a subset of the entire dataset consisting of 3,000 tweets was reviewed by two different annotators. Inter-rater reliability between raters was tracked, with dissimilar annotations flagged for further review. Flagged tweets were further analyzed by the social work researchers, which in- Table 1 3 . In order to mitigate potential issues with training and test data being drawn from different time periods or having different distributions of labels, we shuffled our data and drew stratified samples with equal distribution across classes for our training, validation, and test sets for each of the cross validation folds, using 64%, 16%, and 20% of our data for each respectively. The Aggression and Loss classes are relatively small, reflecting their low distribution in real life: we have only 329 Aggression tweets and 734 Loss tweets, with the Other class comprising the remaining 3,873 tweets.", "cite_spans": [], "ref_spans": [ { "start": 301, "end": 308, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "We approach this classification task using a standard CNN classifier architecture (Kim, 2014; Collobert et al., 2011) as our starting point. We initially experimented with both character and word level CNNs but found the word level to be 1.6 macro-F1 points better than the character level, so we only include the word level here. We leveraged the unlabeled corpora by constructing domain-specific embeddings and a lexicon that better fit our unique and low-resource domain. We then integrated our domain-specific resources into the CNN to represent the given tweet as well as to represent context features. Our context features represent a window of the user's recent tweets as well as the interactions of the author with other users via references in their tweets.", "cite_spans": [ { "start": 82, "end": 93, "text": "(Kim, 2014;", "ref_id": "BIBREF13" }, { "start": 94, "end": 117, "text": "Collobert et al., 2011)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "We exploited the large unlabeled corpus to build two domain-specific resources for this task: word embeddings and a task-specific lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain-Specific Resources", "sec_num": "4.1" }, { "text": "Word embeddings have proven useful in representing the semantic content of sentences. The semantic representation of a word by its associated embedding, however, depends on its usage in the corpus the embedding was trained on, and so off-theshelf word embeddings do not always adapt well to tasks with a unique domain (Roy et al., 2017) , (Ghosh et al., 2016) , (Tixier et al., 2016) . Thus, we were motivated to use our unlabeled corpus to create domain-specific word embeddings. We used the Word2Vec (Mikolov et al., 2013 ) CBOW model to train the embeddings which is the default training algorithm available in Gensim 4 . We used a window size of 5 words with a minimum word count of 5 to train w \u2208 R 300 . 
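To make this embedding-training setup concrete, the following is a minimal Gensim sketch of the step just described (argument names follow the Gensim 4.x API; the corpus path and whitespace tokenization are placeholders, not the authors' pipeline):

```python
from gensim.models import Word2Vec

# Each line of the (hypothetical) corpus file is assumed to hold one pre-tokenized tweet.
def read_corpus(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.strip().split()

sentences = list(read_corpus("unlabeled_tweets.txt"))  # placeholder path

# CBOW (sg=0) embeddings with a 5-word window, a minimum count of 5, and
# 300 dimensions, as described above; trained for 20 epochs.
model = Word2Vec(
    sentences,
    vector_size=300,
    window=5,
    min_count=5,
    sg=0,
    epochs=20,
)
model.wv.save("domain_embeddings.kv")  # placeholder output path
```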
The CBOW model was trained for 20 epochs.", "cite_spans": [ { "start": 318, "end": 336, "text": "(Roy et al., 2017)", "ref_id": "BIBREF34" }, { "start": 339, "end": 359, "text": "(Ghosh et al., 2016)", "ref_id": "BIBREF9" }, { "start": 362, "end": 383, "text": "(Tixier et al., 2016)", "ref_id": "BIBREF36" }, { "start": 502, "end": 523, "text": "(Mikolov et al., 2013", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Word Embeddings", "sec_num": "4.1.1" }, { "text": "Given the domain-specific nature of our users' language, we could not rely on standard NLP lexicons to represent emotion in their tweets. For our task, the two emotions of interest are Aggression and Loss. Previous work (Blevins et al., 2016) used a phrasebook to translate the domain-specific words of their corpus to Standard American English so that they could access emotion in the Dictionary of Affect in Language (DAL) (Whissell, 2009) , but this approach does not generalize to capture new words.", "cite_spans": [ { "start": 220, "end": 242, "text": "(Blevins et al., 2016)", "ref_id": "BIBREF4" }, { "start": 425, "end": 441, "text": "(Whissell, 2009)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Computing a Lexicon of Aggression and Loss", "sec_num": "4.1.2" }, { "text": "We therefore adapted the SENTPROP algorithm (Hamilton et al., 2016) to automatically induce a lexicon of Aggression and Loss from our unlabeled corpus. The SENTPROP algorithm constructs a lexical graph out of the word embeddings, then propagates labels from the seed sets over the unlabeled nodes via a random walk method. The resulting output for each word indicates the probability of a random walk from the seed set landing on that node. We chose SENTPROP as an induction method because it performs especially well for domain-specific corpora, and it is resource-light and interpretable.", "cite_spans": [ { "start": 44, "end": 67, "text": "(Hamilton et al., 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Computing a Lexicon of Aggression and Loss", "sec_num": "4.1.2" }, { "text": "We created word embeddings by employing an SVD-based method that was reported by the SENT-PROP authors to perform optimally with their algorithm. We first constructed the positive point-wise mutual information matrix, M P P M I , over the unlabeled corpus, then computed singular value decomposition (SVD) to derive M P P M I = U \u03a3V . The word embedding for word w i was thus given by U i , truncated to a standard length of dimension 300. To construct our seed sets, we asked our annotators to consider words for Loss and Aggression which they associated most strongly with each class. They generated a set of 29 words for Aggression and a set of 40 words for Loss, which we include in our appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing a Lexicon of Aggression and Loss", "sec_num": "4.1.2" }, { "text": "We ran SENTPROP with our SVD-based embeddings and the seed sets from our annotators. We used the output probabilities from the random walks to map words to their association with Aggression and Loss, thus forming our lexicon of Aggression and Loss. Finally, we scaled the probabilities per class to mean 0 and variance 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing a Lexicon of Aggression and Loss", "sec_num": "4.1.2" }, { "text": "Our context features utilize the domain-specific resources that we induced from the unlabeled corpora. 
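Before turning to the context features, here is a simplified sketch of the SVD-over-PPMI embedding construction used above for lexicon induction; the co-occurrence counting scheme and library choices are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from collections import Counter
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def ppmi_svd_embeddings(tweets, dim=300, window=5):
    """tweets: list of token lists. Returns (vocab, U) where row i of U embeds vocab[i]."""
    vocab = sorted({w for toks in tweets for w in toks})
    idx = {w: i for i, w in enumerate(vocab)}
    # Symmetric co-occurrence counts within a fixed window (an assumption).
    cooc = Counter()
    for toks in tweets:
        for i, w in enumerate(toks):
            for c in toks[max(0, i - window): i + window + 1]:
                if c != w:
                    cooc[(idx[w], idx[c])] += 1
    rows, cols, vals = zip(*((r, c, v) for (r, c), v in cooc.items()))
    M = csr_matrix((vals, (rows, cols)), shape=(len(vocab), len(vocab)), dtype=float)
    # Positive PMI: max(0, log p(w,c) - log p(w) - log p(c)).
    total = M.sum()
    pw = np.asarray(M.sum(axis=1)).ravel() / total
    pc = np.asarray(M.sum(axis=0)).ravel() / total
    C = M.tocoo()
    pmi = np.log((C.data / total) / (pw[C.row] * pc[C.col]))
    ppmi = csr_matrix((np.maximum(pmi, 0.0), (C.row, C.col)), shape=C.shape)
    # Truncated SVD of M_PPMI ~ U Sigma V^T; word w_i is embedded as row U_i
    # (requires a vocabulary larger than `dim`).
    U, _, _ = svds(ppmi, k=dim)
    return vocab, U
```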
To capture context, we first considered the author's recent history, separately exploring representations by our domain-specific word embeddings and by the SENTPROP lexicon (SPLex). If applicable, we also considered the interactions between the author and other users who were referenced in the tweet, either via retweet or mention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Features", "sec_num": "4.2" }, { "text": "To obtain the user's recent history, we ordered all the tweets chronologically and bucketed them by author. Thus, for any given tweet occurring at time t, a t , we were able to retrieve previous tweets a t\u22121 , a t\u22122 , . . . by that user. We treated recent history as a sliding window and fetched tweets within the past d days from when the current tweet was tweeted, such that recent history tweets would be the set {a t\u22121 , . . . , a t\u2212k }, where t \u2212 k < d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User History", "sec_num": "4.2.1" }, { "text": "To represent the tweets within the context of recent history, we first combined word level representations into tweet level, then tweet level representations into context level. At each stage of combination, we tried both summing and averaging. Thus, our recent history representations were built by aggregating either word embeddings or SPLex scores, which maintained their dimensionality of 300 or 2, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User History", "sec_num": "4.2.1" }, { "text": "We also considered three types of tweets that would be relevant to a user. The user's own tweets (SELF) would always be relevant; we experimented with also including tweets where the user was retweeted (RETWEET) and tweets where the user was mentioned (MENTION). We included these parameters as additional sources of context because a user's tweet may be a response to a recent mention or retweet from another user.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User History", "sec_num": "4.2.1" }, { "text": "We also experimented with weighting the most recent tweets more heavily than further tweets within the recent history window. This became especially important when we experimented with larger windows of a month or more, since tweets from a few days ago are more likely to be related to the current tweet than tweets from a few weeks ago. To model this diminishing relevance, we introduced a weighting protocol with a variable half-life where weights decay exponentially over time. The parameter we tuned was the half-life ratio r, which is the proportion of the window size d that corresponds to the window's half-life. Then, before combining tweet level representations into context level, we multiplied each tweet representation b i by its weight, 2 \u2212 \u2206t f , where \u2206t = t \u2212 i is the distance in days between the context tweet a i and the current tweet a t , and f = d * r is the half life.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User History", "sec_num": "4.2.1" }, { "text": "As an additional context feature, we modeled the pairwise interactions between users. To identify interactions, we iterated through our unlabeled and labeled corpora and checked which users were involved in each tweet. We counted a user as involved in a tweet if they posted the tweet or were referenced via retweet or mention. 
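As a concrete illustration of the half-life weighting protocol just described (the pairwise aggregation itself continues below), the following sketch uses hypothetical tweet vectors and day-level timestamps:

```python
import numpy as np

def weighted_history(history, t_now, d, r, combine="mean"):
    """history: list of (timestamp_in_days, tweet_vector) pairs for one author,
    already restricted to tweets from the past d days (hypothetical layout).
    Each tweet vector is scaled by 2 ** (-delta_t / f), where f = d * r is the
    half-life, then the weighted vectors are summed or averaged."""
    f = d * r
    weighted = []
    for t_i, vec in history:
        delta_t = t_now - t_i  # distance in days from the current tweet
        weighted.append((2.0 ** (-delta_t / f)) * np.asarray(vec, dtype=float))
    if not weighted:
        return None  # no recent history within the window
    stacked = np.stack(weighted)
    return stacked.mean(axis=0) if combine == "mean" else stacked.sum(axis=0)

# Example: three 300-dim tweet vectors posted 1, 10, and 60 days before the current tweet.
rng = np.random.default_rng(0)
hist = [(89, rng.normal(size=300)), (80, rng.normal(size=300)), (30, rng.normal(size=300))]
context_vec = weighted_history(hist, t_now=90, d=90, r=0.25)  # decay weights of roughly 0.97, 0.73, 0.16
```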
For each pair of users, we aggregated all their tweets of mutual involvement into one document and averaged the document's word embeddings to create a representation of their pairwise interactions in R 300 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Interactions", "sec_num": "4.2.2" }, { "text": "We experimented with the efficacy of our domainspecific resources, the impact of different context parameters, and the contribution of context to predicting Aggression and Loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "For word level models, we preprocess each tweet by: i) lowercasing every character, ii) replacing every user mention and url with special tokens \"user\" and \"url\", iii) considering each emoji an individual token, whether space separated or not, and iv) removing emoji modifiers to reduce sparsity, just as we used lowercasing. We select the top 40K tokens based on frequency, replacing the remaining tokens with \"UNKNOWN\". We zero-pad or trim tweets so that tweet length will be 50 when passed to our CNN model. Similarly, we only consider users who occur (as author, source of retweet, or in mention) in the labeled and unlabeled corpus at least twice, resulting in 35,656 users in total.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus pre-processing", "sec_num": "5.1" }, { "text": "We extract the author of the tweets from metadata, and user mentions and original posters of retweets from the Twitter text, based on their Twitter display name. We used Twitter display name rather than user ID because we cannot collect user ID for interaction features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus pre-processing", "sec_num": "5.1" }, { "text": "For this 3-way classification task, we train two models; the first model predicts whether a tweet has the Aggression label and the second predicts for Loss. Each model maps a sequence of tokens to a probability value for a class. Here we define the architecture of our CNN model. Our input c is a token index sequence of length 50. We map each token index to a vector \u2208 R 300 with a trainable embedding matrix, followed by dropout 0.5. We apply a 1D Convolutional layer with kernel sizes 1 and 2, filter size 200 each, to the embedded token sequence, followed by ReLU activation, max pooling and dropout 0.5. We concatenate the output of max pooling for kernel sizes 1 and 2, stack another dense layer h with dimension 256, and connect the output of h to the final single output unit with sigmoid activation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CNN Architecture", "sec_num": "5.2" }, { "text": "In the prediction phase, for each data point, we classify it as Aggression if the the first model produces the probability score above threshold t A . If it is not predicted as Aggression, then we classify it as Loss if the second model produces a score above a threshold t L . The remaining tweets are classified as Other. t A and t L are tuned on the validation set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CNN Architecture", "sec_num": "5.2" }, { "text": "We incorporate context information into the neural network in the following way. Each type of context feature takes the form of a real vector: both word embedding user history and word embedding user interaction features are in R 300 , and SPLex user history features are in R 2 . 
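A minimal PyTorch sketch of one such binary classifier, including the concatenation of the context vector with the dense layer h that is described next; the hidden-layer activation, the exact dropout placement, and the context dimensionality are assumptions (e.g., 300-d history plus 300-d interaction plus 2-d SPLex when all features are attached):

```python
import torch
import torch.nn as nn

class ContextCNN(nn.Module):
    """One binary classifier (Aggression or Loss) over a 50-token index sequence.
    context_dim covers whichever context vectors are attached, e.g. 300-d
    embedding-based history, 300-d pairwise interaction, and 2-d SPLex scores."""
    def __init__(self, vocab_size=40000, emb_dim=300, n_filters=200, context_dim=602):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)  # optionally initialized with the domain embeddings
        self.dropout = nn.Dropout(0.5)
        self.conv1 = nn.Conv1d(emb_dim, n_filters, kernel_size=1)
        self.conv2 = nn.Conv1d(emb_dim, n_filters, kernel_size=2)
        self.hidden = nn.Linear(2 * n_filters, 256)          # dense layer h
        self.out = nn.Linear(256 + context_dim, 1)           # single sigmoid output unit

    def forward(self, token_ids, context_vec):
        # token_ids: (batch, 50) padded/trimmed indices; context_vec: (batch, context_dim)
        x = self.dropout(self.embedding(token_ids)).transpose(1, 2)  # (batch, emb_dim, 50)
        p1 = torch.relu(self.conv1(x)).max(dim=2).values             # max pooling over time, kernel size 1
        p2 = torch.relu(self.conv2(x)).max(dim=2).values             # max pooling over time, kernel size 2
        h = torch.relu(self.hidden(self.dropout(torch.cat([p1, p2], dim=1))))
        return torch.sigmoid(self.out(torch.cat([h, context_vec], dim=1))).squeeze(1)
```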
We concatenate these feature vectors with the last layer h before the final classification output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CNN Architecture", "sec_num": "5.2" }, { "text": "We used as our baseline method a linear-kernel SVM classifier as used by Blevins et al. 2016 . We obtained code from the authors and trained on our larger dataset. In this method, after basic preprocessing is performed to replace urls and user mentions with special tokens, unigram, bigram, part-of-speech tag, and emotion features are extracted. Feature selection is performed to prune the feature space. The part-of-speech tagger used in Blevins et al. 2016 was developed for use on this domain; emotion features are computed using scores for each tweet word taken from the Dictionary of Affect in Language (DAL). We performed gridsearch to re-tune the loss function, the regularization penalty type, and the penalty parameter C, but found that the original settings for these parameters still performed best even on our new development set. We also tuned the class weights used: while the model performed best on the original data with balanced class weights, we found that less extreme balancing performed better here (weights 2, 1, and 0.12 for Aggression, Loss, and Other, respectively).", "cite_spans": [ { "start": 73, "end": 92, "text": "Blevins et al. 2016", "ref_id": "BIBREF4" }, { "start": 440, "end": 459, "text": "Blevins et al. 2016", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "SVM Baseline", "sec_num": "5.3" }, { "text": "While we retrained the SVM on our new training set, we did not modify the additional components used for feature selection such as the phrase table or the specialized part-of-speech tagger, as we had no additional data available for this. This indicates the difficulty of generalizing to new data with unseen vocabulary, and is one of the disadvantages of using manually-created specialized feature sets such as these.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM Baseline", "sec_num": "5.3" }, { "text": "In order to test the efficacy of our domain-specific word embeddings, we compared them with a number of other embedding types. Our baseline method was Pennington et al. 2014's GloVe embeddings pretrained on a general Twitter dataset, available from their website 5 . We trained a parallel set of word embeddings on the African American English (AAE) corpus of around 1.1 million tweets provided by Blodgett et al. 2016 , and another set on a corpus of a location-specific set of tweets that we scraped, drawn from users who posted from a specific area within the South Side of Chicago where the gangs we study are based. We also compared performance with a randomly initialized word embedding matrix.", "cite_spans": [ { "start": 398, "end": 418, "text": "Blodgett et al. 2016", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Domain Experiments", "sec_num": "5.4" }, { "text": "We first explored the impact of the user history parameters, tuning them separately for representations by our domain-specific word embeddings and by SPLex. 
We kept these representations separate because we expected them to capture different types of context: word embeddings should capture the semantic content of the user's history, while SPLex scores should capture something closer to the user's emotional state leading up to the tweet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Experiments", "sec_num": "5.5" }, { "text": "With each representation, we experimented with summing versus averaging word embeddings to yield a tweet level representation, and similarly experimented with summing and averaging from tweet embeddings to context level representations. We varied the size of the context window, d, trying 2 days, 1 week, 1 month, 2 months, and 3 months. We also varied the half-life ratio, r = .25, .5, .75, or no weighting. Lastly, we tried including different types of posts in the user history.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Experiments", "sec_num": "5.5" }, { "text": "Once we tuned the user history parameters, we experimented with adding our context features (user history and user interactions) to the best tweet level model we could achieve without context. For our CNN, our best tweet level model used our domain-specific word embeddings as pretrained weights for the embedding layer (CNN-DS in Table 3). To evaluate the impact of our resources in different frameworks, we additionally experimented with the contribution of context in an SVM. The best tweet level SVM included the averaged domain-specific word embeddings and summed SPLex scores of the tokens in the tweet (SVM-DS).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Experiments", "sec_num": "5.5" }, { "text": "We report results comparing different embeddings and comparing parameters for context. We use the best results from these experiments to produce our final systems in the SVM and CNN frameworks. The best resulting architecture for the CNN framework is illustrated in Fig. 1 .", "cite_spans": [], "ref_spans": [ { "start": 266, "end": 272, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "Experiments were performed using five-fold crossvalidation over the labeled data and were repeated five times for each fold to account for variance between runs. Reported F-scores, shown in Table 2 , are averaged across runs and across folds.", "cite_spans": [], "ref_spans": [ { "start": 190, "end": 198, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Comparison of Embeddings", "sec_num": "6.1" }, { "text": "Word embeddings trained on our unlabeled corpus outperformed other embeddings by over 4 points. Related datasets such as the locationspecific or AAE corpus did not provide helpful semantic information, as their embeddings did not even beat random initialization. This was not an effect of corpus size, since these corpora contained 800,000 and 1.1 million tweets, respectively, compared to the 1 million in our unlabeled corpus. 
Thus, we attribute the difference to the importance of deriving embeddings directly from our community of interest, demonstrating that the language of our community is more specific than AAE in general and that our snowballing method was able to capture a better representation of user language than a location driven method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Embeddings", "sec_num": "6.1" }, { "text": "Experiments were performed using five-fold crossvalidation and F-scores computed as in the word embedding experiments. We found that user history represented by domain-specific word embeddings performed optimally when we averaged from word to tweet level and from tweet to context level. The best window size was d = 90 days, including only SELF posts, and using a half-life ratio of r = 0.25. For user history represented by SPLex, we found the best method of combination to be summing, at both the word and tweet level. We hypothesize this is because summing captures not only the presence but also the number or density of highly indicative Aggression or Loss words posted by the user over the context window. The best window size was d = 2 days, including both SELF and RETWEET posts, without half-life weighting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User History Parameters", "sec_num": "6.2" }, { "text": "Our approach was designed to implement and test previous insights about the domain, particularly that context plays a role in the interpretation of posts. The short time frame for SPLex user history corresponds with the 2 day window found in Patton et al. 2018b's research and reflects the fact that emotional states may fluctuate often and within a certain number of days. In contrast, word embeddings improved consistently as we extended the context window from 2 days to 90 days. Since word embedding user history is meant to capture the user's semantics, a larger window size means the representation can be drawn from more tweets, and thus reflects a more representative sample of the user's semantics around this time period.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User History Parameters", "sec_num": "6.2" }, { "text": "To develop a more stable measurement of comparison between different systems, we create four independent sets of 5-fold cross validation splits on our data set (altogether 20 folds); to account for randomness in neural net training, we train each neural net model 5 times and take the majority vote of the predictions. For each class, we calculate the statistical significance of F-score based on the predictions on the concatenated test sets of all 20 folds using the Approximate Randomization Test (Riezler and Maxwell 2005) with the Bonferroni correction for multiple comparisons. Results are shown in Table 3 . Adding context contributed to a significant improvement in both the CNN and SVM frameworks, demonstrating the independent value of our context features over domain-specific resources. For contrast, we also compared our context features with user profiles built from averaging the word embeddings in all of the user's tweets. 
Our pairwise and user history features outperformed user profiles by .7 points, demonstrating that it is valuable to provide dynamic representations of users that can adjust to their recent posts or their interactions with other users, as opposed to stereotyping their overall behavior.", "cite_spans": [ { "start": 500, "end": 526, "text": "(Riezler and Maxwell 2005)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 605, "end": 612, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Comparison of Best Systems", "sec_num": "6.3" }, { "text": "Additionally, we compare the impact of our domain-specific resources to those used by Blevins et al. (2016) . In particular, we expect that their emotion scores will not generalize to the new vocabulary in our large unlabeled corpus (see Section 4.1). Our domain-specific resources alone without context raise our SVM to comparable performance with the Blevins et al. retrained baseline, and the resources push our CNN without context over this baseline. This demonstrates that our automatic methods can do as well as if not better than phrasebook methods, and they are significantly more efficient to generate.", "cite_spans": [ { "start": 86, "end": 107, "text": "Blevins et al. (2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison of Best Systems", "sec_num": "6.3" }, { "text": "In this section we provide an analysis of the tradeoffs of each classifier by analyzing some of the examples in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 112, "end": 119, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "Context vs non-context CNN. Our best CNN -a system which incorporated context -was able to correctly predict tweets 3 and 4, whereas our baseline using only our pretrained Word2Vec embeddings was not. Correctly classifying tweet 4 relies on the knowledge that the referenced user, DMoney, is a deceased member of a rival gang of the poster. In tweet 3, the poster is saying that he has seen Gakirah's death on the news; this is an expression of loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "Domain-specific vocabulary. Our CNN trained on domain-specific word embeddings is able to correctly classify tweet 5, while the one trained on Twitter word embeddings did not pick up the aggressive content. This user is talking about how their friend is ready to kill someone. This tweet contains the word thirsty but in this domain-specific context it means being ready and having an urge (although it would not always refer to killing).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "Hashtags and character sequences. Despite their strengths, both our best CNN and our best SVM classifiers were still unable to correctly classify some of the trickier cases. There were certain types of tweets they were categorically unable to recognize: tweet 1 features a hashtag that refers to an incarcerated acquaintance of the poster, but as both our CNN and SVM models operate at the word level, this tag would have appeared simply as a rare or unknown token to them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "Anger miscategorized as Aggression. At times, the classifier categorized posts that express anger as Aggression. 
For example, in tweet 4 the author uses profanity to express grief related to the loss of a friend. In addition, the devil face emoji, which is sometimes used to express aggression, is also used in the context of anger. While the best CNN model managed to correctly predict this as Loss, the SVM miscategorized it as Aggression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "Our ethics guidelines include just treatment of the users who provide our data, removal of identifying information for publication, and the inclusion of Chicago-based community members as domain experts in the analysis and validation of our findings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethics", "sec_num": "8" }, { "text": "There are risks involved with detecting Aggression and Loss in social media data using automatic detection systems. These risks include possible misidentifications of tweets, increased police involvement, and loss of privacy, which all have the potential to harm marginalized communities and people. Our mitigation strategies begin by partnering with violence prevention organizations and incorporating domain experts (Frey et al., 2018) to ensure the highest ethical standards for interpreting social media posts and for the dissemination and use of our research for violence prevention. Through insights gained from these partnerships, we developed our own risk mitigation strategies: de-identifying each tweet and rendering it unsearchable through textual modification without altering meaning; encrypting our social media corpus to protect user identities; and relying on violence prevention organizations' expertise in deciding if and when to involve law enforcement to prevent the unethical use of our data (e.g., hyper-surveillance of communities of color).", "cite_spans": [ { "start": 418, "end": 437, "text": "(Frey et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Ethics", "sec_num": "8" }, { "text": "Our approach shows that integrating emotions and semantic content of a user's recent posts is an important component for the task of predicting Aggression and Loss in social media posts of gang-involved youth. Furthermore, using domainspecific embeddings and an Aggression-Loss lexicon induced from a corpus of language constructed to represent our specific community of users is also critical to success. Our experiments reveal that our snowballing technique is more effective than a location based approach and that fitting our community is more complex than resorting to their demographic, as captured in the AAE corpus of Blodgett et al. (2016) .", "cite_spans": [ { "start": 626, "end": 648, "text": "Blodgett et al. (2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "Our work has real life implications for the use of machine learning to identify unique characteristics in social media data that may indicate the process by which gun violence may occur (Patton et al., 2018a) . Our partnership between computer scientists, social work researchers and practitioners has advanced plans to create applications to help outreach workers in Chicago identify factors related to potential violence, potentially allowing them to prevent and intervene in aggressive online activity. 
The tool, which would be co-created with community stakeholders, would enable quick scanning of large quantities of social media posts that outreach workers would be unable to perform manually.", "cite_spans": [ { "start": 186, "end": 208, "text": "(Patton et al., 2018a)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "We expect our methods to be generalizable because we compute embeddings and lexicons from neighborhood-specific data and do not rely on large, hand-crafted resources such as dictionaries. However, we hope to test generalizability in future work by applying our methods to other gang-related corpora, because there is variation in language, local concepts, and behavior across gangs. In the future, we are also interested in further experimenting with the context features introduced in this work; for instance, by extending our pairwise interaction features to take into account direction between users. Finally, we intend to explore other types of context, such as reference to specific events that may trigger the emotions of either Aggression or Loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "https://www.brookings.edu/wp-content/ uploads/2016/06/isis_twitter_census_ berger_morgan.pdf", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our data was scraped from publicly available posts and was determined exempt by our organization's IRB. User names are replaced with USER in the table, and text has been modified to render tweets unsearchable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://radimrehurek.com/gensim/ models/word2vec.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://nlp.stanford.edu/projects/ glove/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research is supported in part by DARPA contract 55630053. The authors also thank the anonymous reviewers for their thoughtful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "10" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Accessing hidden and hard-to-reach populations: Snowball research strategies", "authors": [ { "first": "R", "middle": [], "last": "Atkinson", "suffix": "" }, { "first": "J", "middle": [], "last": "Flint", "suffix": "" } ], "year": 2001, "venue": "Social research update", "volume": "33", "issue": "", "pages": "1--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Atkinson and J. Flint. 2001. Accessing hidden and hard-to-reach populations: Snowball research strate- gies. Social research update 33:1-4.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Big media data can inform gun violence prevention", "authors": [ { "first": "J", "middle": [ "W" ], "last": "Ayers", "suffix": "" }, { "first": "B", "middle": [ "M" ], "last": "Althouse", "suffix": "" }, { "first": "E", "middle": [ "C" ], "last": "Leas", "suffix": "" }, { "first": "T", "middle": [], "last": "Alcorn", "suffix": "" }, { "first": "M", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2016, "venue": "Bloomberg Data for Good Exchange", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. W. Ayers, B. M. Althouse, E. 
C. Leas, T. Alcorn, and M. Dredze. 2016. Big media data can inform gun violence prevention. In Bloomberg Data for Good Exchange.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Finding street gang members on twitter", "authors": [ { "first": "L", "middle": [], "last": "Balasuriya", "suffix": "" }, { "first": "S", "middle": [], "last": "Wijeratne", "suffix": "" }, { "first": "D", "middle": [], "last": "Doran", "suffix": "" }, { "first": "A", "middle": [], "last": "Sheth", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE/ACM International Conference on Advances in Social Network Analysis and Mining", "volume": "", "issue": "", "pages": "685--692", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Balasuriya, S. Wijeratne, D. Doran, and A. Sheth. 2016. Finding street gang members on twitter. In Proceedings of the IEEE/ACM International Confer- ence on Advances in Social Network Analysis and Mining. pages 685-692.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "After sandy hook elementary: A year in the gun control debate on twitter", "authors": [ { "first": "A", "middle": [], "last": "Benton", "suffix": "" }, { "first": "B", "middle": [], "last": "Hancock", "suffix": "" }, { "first": "G", "middle": [], "last": "Coppersmith", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Ayers", "suffix": "" }, { "first": "M", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2016, "venue": "Bloomberg Data for Good Exchange", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Benton, B. Hancock, G. Coppersmith, J. W. Ayers, and M. Dredze. 2016. After sandy hook elemen- tary: A year in the gun control debate on twitter. In Bloomberg Data for Good Exchange.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatically processing tweets from gang-involved youth: Towards detecting loss and aggression", "authors": [ { "first": "T", "middle": [], "last": "Blevins", "suffix": "" }, { "first": "R", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "J", "middle": [], "last": "Macbeth", "suffix": "" }, { "first": "K", "middle": [], "last": "Mckeown", "suffix": "" }, { "first": "D", "middle": [], "last": "Patton", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2196--2206", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Blevins, R. Kwiatkowski, J. Macbeth, K. McKe- own, D. Patton, and O. Rambow. 2016. Automat- ically processing tweets from gang-involved youth: Towards detecting loss and aggression. In Proceed- ings of COLING 2016, the 26th International Con- ference on Computational Linguistics: Technical Pa- pers. pages 2196-2206.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Demographic dialectal variation in social media: A case study of african-american english", "authors": [ { "first": "S", "middle": [ "L" ], "last": "Blodgett", "suffix": "" }, { "first": "L", "middle": [], "last": "Green", "suffix": "" }, { "first": "B", "middle": [], "last": "O'connor", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1119--1130", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. L. Blodgett, L. Green, and B. O'Connor. 2016. 
Demographic dialectal variation in so- cial media: A case study of african-american english. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Lan- guage Processing. Association for Computational Linguistics, Austin, Texas, pages 1119-1130. https://aclweb.org/anthology/D16-1120.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "R", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "M", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "K", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "P", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural lan- guage processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493-2537.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving cyberbullying detection with user context", "authors": [ { "first": "M", "middle": [], "last": "Dadvar", "suffix": "" }, { "first": "D", "middle": [], "last": "Trieschnigg", "suffix": "" }, { "first": "R", "middle": [], "last": "Ordelman", "suffix": "" }, { "first": "F", "middle": [], "last": "De Jong", "suffix": "" } ], "year": 2013, "venue": "European Conference on Information Retrieval", "volume": "", "issue": "", "pages": "693--696", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Dadvar, D. Trieschnigg, R. Ordelman, and F. de Jong. 2013. Improving cyberbullying detec- tion with user context. In European Conference on Information Retrieval. Springer, pages 693-696.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Artificial intelligence and inclusion: Formerly gang-involved youth as domain experts for analyzing unstructured twitter data", "authors": [ { "first": "W", "middle": [ "R" ], "last": "Frey", "suffix": "" }, { "first": "D", "middle": [ "U" ], "last": "Patton", "suffix": "" }, { "first": "M", "middle": [ "B" ], "last": "Gaskell", "suffix": "" }, { "first": "K", "middle": [ "A" ], "last": "Mcgregor", "suffix": "" } ], "year": 2018, "venue": "Social Science Computer Review", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. R. Frey, D. U. Patton, M. B. Gaskell, and K. A. McGregor. 2018. Artificial intelligence and inclusion: Formerly gang-involved youth as domain experts for analyzing unstructured twit- ter data. Social Science Computer Review https://doi.org/0894439318788314.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Designing domain specific word embeddings: Applications to disease surveillance", "authors": [ { "first": "S", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "P", "middle": [], "last": "Chakraborty", "suffix": "" }, { "first": "E", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Brownstein", "suffix": "" }, { "first": "N", "middle": [], "last": "Ramakrishnan", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Ghosh, P. Chakraborty, E. Cohn, J. S. Brown- stein, and N. Ramakrishnan. 2016. 
Designing domain specific word embeddings: Applications to disease surveillance. CoRR abs/1603.00106. http://arxiv.org/abs/1603.00106.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Inducing domain-specific sentiment lexicons from unlabeled corpora", "authors": [ { "first": "W", "middle": [ "L" ], "last": "Hamilton", "suffix": "" }, { "first": "K", "middle": [], "last": "Clark", "suffix": "" }, { "first": "J", "middle": [], "last": "Leskovec", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. L. Hamilton, K. Clark, J. Leskovec, and D. Jurafsky. 2016. Inducing domain-specific sentiment lexicons from unlabeled corpora. EMNLP abs/1606.02820. https://arxiv.org/abs/1606.02820.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Lexical normalisation of short text messages: Makn sens a #twitter", "authors": [ { "first": "B", "middle": [], "last": "Han", "suffix": "" }, { "first": "T", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "368--378", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Han and T. Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguis- tics: Human Language Technologies -Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, HLT '11, pages 368-378.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Identifying civilians killed by police with distantly supervised entity-event extraction", "authors": [ { "first": "K", "middle": [ "A" ], "last": "Keith", "suffix": "" }, { "first": "A", "middle": [], "last": "Handler", "suffix": "" }, { "first": "M", "middle": [], "last": "Pinkham", "suffix": "" }, { "first": "C", "middle": [], "last": "Magliozzi", "suffix": "" }, { "first": "J", "middle": [], "last": "Mcduffie", "suffix": "" }, { "first": "B", "middle": [ "T" ], "last": "O'connor", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. A. Keith, A. Handler, M. Pinkham, C. Magliozzi, J. McDuffie, and B. T. O'Connor. 2017. Identifying civilians killed by police with distantly supervised entity-event extraction. In EMNLP.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Y", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1408.5882" ] }, "num": null, "urls": [], "raw_text": "Y. Kim. 2014. Convolutional neural networks for sen- tence classification. arXiv preprint arXiv:1408.5882 .", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. 
Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word represen- tations in vector space. CoRR abs/1301.3781.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "From 'What the f#@% is a Facebook?'to 'Who doesn't use Facebook?': The role of criminal lifestyles in the adoption and use of the Internet", "authors": [ { "first": "R", "middle": [ "K" ], "last": "Moule", "suffix": "" }, { "first": "D", "middle": [ "C" ], "last": "Pyrooz", "suffix": "" }, { "first": "S", "middle": [ "H" ], "last": "Decker", "suffix": "" } ], "year": 2013, "venue": "Social Science Research", "volume": "42", "issue": "6", "pages": "1411--1421", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. K. Moule, D. C. Pyrooz, and S. H. Decker. 2013. From 'What the f#@% is a Facebook?'to 'Who doesn't use Facebook?': The role of criminal lifestyles in the adoption and use of the Internet. So- cial Science Research 42(6):1411-1421.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Noslang drug slang translator. Accessed", "authors": [ { "first": "", "middle": [], "last": "Noslang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "NoSlang. 2018a. Noslang drug slang translator. Accessed: 2018-05-20.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Noslang slang translator", "authors": [ { "first": "", "middle": [], "last": "Noslang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "NoSlang. 2018b. Noslang slang translator. Accessed: 2018-05-20. http://www.noslang.com/.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Internet banging: New trends in social media, gang violence, masculinity and hip hop", "authors": [ { "first": "D", "middle": [ "U" ], "last": "Patton", "suffix": "" }, { "first": "R", "middle": [ "D" ], "last": "Eschmann", "suffix": "" }, { "first": "D", "middle": [ "A" ], "last": "Butler", "suffix": "" } ], "year": 2013, "venue": "Computers in Human Behavior", "volume": "29", "issue": "5", "pages": "54--59", "other_ids": { "DOI": [ "10.1016/j.chb.2012.12.035" ] }, "num": null, "urls": [], "raw_text": "D. U. Patton, R. D. Eschmann, and D. A. Butler. 2013. Internet banging: New trends in social media, gang violence, masculinity and hip hop. Computers in Human Behavior 29(5):A54 -A59. https://doi.org/10.1016/j.chb.2012.12.035.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Sticks, stones and facebook accounts: What violence outreach workers know about social media and urban-based gang violence in chicago", "authors": [ { "first": "D", "middle": [ "U" ], "last": "Patton", "suffix": "" }, { "first": "R", "middle": [ "D" ], "last": "Eschmann", "suffix": "" }, { "first": "C", "middle": [], "last": "Elsaesser", "suffix": "" }, { "first": "E", "middle": [], "last": "Bocanegra", "suffix": "" } ], "year": 2016, "venue": "Human Behavior", "volume": "65", "issue": "", "pages": "591--600", "other_ids": { "DOI": [ "10.1016/j.chb.2016.05.052" ] }, "num": null, "urls": [], "raw_text": "D. U. Patton, R. D. Eschmann, C. Elsaesser, and E. Bocanegra. 2016. Sticks, stones and facebook accounts: What violence out- reach workers know about social media and urban-based gang violence in chicago. 
Com- puters in Human Behavior 65:591 -600.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Tweets, gangs, and guns: A snapshot of gang communications in detroit 32", "authors": [ { "first": "D", "middle": [ "U" ], "last": "Patton", "suffix": "" }, { "first": "S", "middle": [], "last": "Patel", "suffix": "" }, { "first": "J", "middle": [], "last": "Hong", "suffix": "" }, { "first": "M", "middle": [], "last": "Ranney", "suffix": "" }, { "first": "M", "middle": [], "last": "Crandall", "suffix": "" }, { "first": "L", "middle": [], "last": "Dungy", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. U. Patton, S. Patel, J. Hong, M. Ranney, M. Crandall, and L. Dungy. 2017a. Tweets, gangs, and guns: A snapshot of gang communications in detroit 32.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "I know god's got a day 4 me: Violence, trauma, and coping among gang-involved twitter users", "authors": [ { "first": "D", "middle": [ "U" ], "last": "Patton", "suffix": "" }, { "first": "N", "middle": [], "last": "Sanchez", "suffix": "" }, { "first": "D", "middle": [], "last": "Fitch", "suffix": "" }, { "first": "J", "middle": [], "last": "Macbeth", "suffix": "" }, { "first": "P", "middle": [], "last": "Leonard", "suffix": "" } ], "year": 2017, "venue": "Social Science Computer Review", "volume": "35", "issue": "2", "pages": "226--243", "other_ids": { "DOI": [ "10.1177/0894439315613319" ] }, "num": null, "urls": [], "raw_text": "D. U. Patton, N. Sanchez, D. Fitch, J. Mac- beth, and P. Leonard. 2017b. I know god's got a day 4 me: Violence, trauma, and cop- ing among gang-involved twitter users. So- cial Science Computer Review 35(2):226-243. https://doi.org/10.1177/0894439315613319.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Youth gun violence prevention in a digital age", "authors": [ { "first": "D", "middle": [ "U" ], "last": "Patton", "suffix": "" }, { "first": "K", "middle": [], "last": "Mcgreggor", "suffix": "" }, { "first": "G", "middle": [], "last": "Slutkin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "2017--2438", "other_ids": { "DOI": [ "10.1542/peds.2017-2438" ] }, "num": null, "urls": [], "raw_text": "D.U. Patton, K. McGreggor, and G. Slutkin. 2018a. Youth gun violence prevention in a digital age. Pediatrics pages 2017-2438.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Expressions of loss predict aggressive comments on twitter among gang involved youth in chicago", "authors": [ { "first": "D", "middle": [ "U" ], "last": "Patton", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "J", "middle": [], "last": "Auerbach", "suffix": "" }, { "first": "K", "middle": [], "last": "Li", "suffix": "" }, { "first": "W", "middle": [], "last": "Frey", "suffix": "" } ], "year": 2018, "venue": "Nature Partner Journal: Digital Medicine", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D.U. Patton, O. Rambow, J. Auerbach, K. Li, and W. Frey. 2018b. Expressions of loss predict ag- gressive comments on twitter among gang involved youth in chicago. 
Nature Partner Journal: Digital Medicine .", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The gun violence database: A new task and data set for nlp", "authors": [ { "first": "E", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "H", "middle": [], "last": "Ji", "suffix": "" }, { "first": "X", "middle": [], "last": "Pan", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Pavlick, H. Ji, X. Pan, and C. Callison-Burch. 2016. The gun violence database: A new task and data set for nlp .", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "J", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pennington, R. Socher, and C. D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Modeling the use of graffiti style features to signal social relations within a multidomain learning paradigm", "authors": [ { "first": "M", "middle": [], "last": "Piergallini", "suffix": "" }, { "first": "A", "middle": [ "S" ], "last": "Dogru\u00f6z", "suffix": "" }, { "first": "P", "middle": [], "last": "Gadde", "suffix": "" }, { "first": "D", "middle": [], "last": "Adamson", "suffix": "" }, { "first": "C", "middle": [ "P" ], "last": "Ros\u00e9", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "107--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Piergallini, A. S. Dogru\u00f6z, P. Gadde, D. Adamson, and C. P. Ros\u00e9. 2014. Modeling the use of graffiti style features to signal social relations within a multi- domain learning paradigm. pages 107-115.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Criminal and routine activities in online settings: Gangs, offenders, and the internet", "authors": [ { "first": "D", "middle": [ "C" ], "last": "Pyrooz", "suffix": "" }, { "first": "S", "middle": [ "H" ], "last": "Decker", "suffix": "" }, { "first": "R", "middle": [ "K" ], "last": "Moule", "suffix": "" } ], "year": 2015, "venue": "Justice Quarterly", "volume": "32", "issue": "3", "pages": "471--499", "other_ids": { "DOI": [ "10.1080/07418825.2013.778326" ] }, "num": null, "urls": [], "raw_text": "D. C. Pyrooz, S. H. Decker, and R. K. Moule Jr. 2015. Criminal and routine activities in online settings: Gangs, offenders, and the internet. Justice Quarterly 32(3):471-499.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Leveraging intra-user and interuser representation learning for automated hate speech detection", "authors": [ { "first": "J", "middle": [], "last": "Qian", "suffix": "" }, { "first": "M", "middle": [], "last": "Elsherief", "suffix": "" }, { "first": "E", "middle": [ "M" ], "last": "Belding", "suffix": "" }, { "first": "W", "middle": [ "Y" ], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Qian, M. ElSherief, E. M. Belding, and W. Y. Wang. 2018. 
Leveraging intra-user and inter- user representation learning for automated hate speech detection. CoRR abs/1804.03124.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Spatializing social networks: Using social network analysis to investigate geographies of gang rivalry, territoriality, and violence in los angeles", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Radil", "suffix": "" }, { "first": "C", "middle": [], "last": "Flint", "suffix": "" }, { "first": "G", "middle": [ "E" ], "last": "Tita", "suffix": "" } ], "year": 2010, "venue": "Annals of the Association of American Geographers", "volume": "100", "issue": "2", "pages": "307--326", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. M. Radil, C. Flint, and G. E. Tita. 2010. Spatializing social networks: Using social network analysis to investigate geographies of gang rivalry, territoriality, and violence in los angeles. Annals of the Associa- tion of American Geographers 100(2):307-326.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "On some pitfalls in automatic evaluation and significance testing for mt", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "John T", "middle": [], "last": "Maxwell", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization", "volume": "", "issue": "", "pages": "57--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Riezler and John T Maxwell. 2005. On some pitfalls in automatic evaluation and significance test- ing for mt. In Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for ma- chine translation and/or summarization. pages 57- 64.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Learning domain-specific word embeddings from sparse cybersecurity texts", "authors": [ { "first": "A", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Y", "middle": [], "last": "Park", "suffix": "" }, { "first": "S", "middle": [], "last": "Pan", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Roy, Y. Park, and S. Pan. 2017. Learning domain-specific word embeddings from sparse cybersecurity texts. CoRR abs/1709.07470.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Social media mining for toxicovigilance: Automatic monitoring of prescription medication abuse from twitter", "authors": [ { "first": "A", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "K", "middle": [], "last": "O'connor", "suffix": "" }, { "first": "R", "middle": [], "last": "Ginn", "suffix": "" }, { "first": "M", "middle": [], "last": "Scotch", "suffix": "" }, { "first": "K", "middle": [], "last": "Smith", "suffix": "" }, { "first": "D", "middle": [], "last": "Malone", "suffix": "" }, { "first": "G", "middle": [], "last": "Gonzalez", "suffix": "" } ], "year": 2016, "venue": "Drug Safety", "volume": "39", "issue": "3", "pages": "231--240", "other_ids": { "DOI": [ "10.1007/s40264-015-0379-4" ] }, "num": null, "urls": [], "raw_text": "A. Sarker, K. O'Connor, R. Ginn, M. Scotch, K. Smith, D. Malone, and G. Gonzalez. 2016. Social media mining for toxicovigilance: Au- tomatic monitoring of prescription medication abuse from twitter. Drug Safety 39(3):231-240. 
https://doi.org/10.1007/s40264-015-0379-4.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Word embeddings for the construction domain", "authors": [ { "first": "A", "middle": [ "J" ], "last": "Tixier", "suffix": "" }, { "first": "M", "middle": [], "last": "Vazirgiannis", "suffix": "" }, { "first": "M", "middle": [ "R" ], "last": "Hallowell", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. J.-P. Tixier, M. Vazirgiannis, and M. R. Hal- lowell. 2016. Word embeddings for the construction domain. CoRR abs/1610.09333.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Using the revised dictionary of affect in language to quantify the emotional undertones of samples of natural language", "authors": [ { "first": "C", "middle": [], "last": "", "suffix": "" } ], "year": 2009, "venue": "Psychological Reports", "volume": "105", "issue": "", "pages": "509--521", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Whissell. 2009. Using the revised dictionary of affect in language to quantify the emotional under- tones of samples of natural language. Psychological Reports 105:509-521.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Diagram of our steps to generate domain-specific and context features for our neural net system.", "num": null, "type_str": "figure" }, "TABREF0": { "text": "Example tweetsNo.", "num": null, "type_str": "table", "content": "
No. | Tweet Text | Label
1 | #FreeDaDommmmm [URL] | Loss
2 | Damn juss peeped shorty on tha news out here @USER ..smh.. crazyy.. #RIPShorty | Loss
3 | I'm smokin on Dat DMoney man Im high as fuck | Aggression
4 | Lost Ty to Sum Fuck Shit dont Fuck around wit Fuck rounds n u a type of Niggas Ion fuck wit | Loss
5 | My bro Mooki thirsty he jus wana sum | Aggression
", "html": null }, "TABREF1": { "text": "Results comparing different embeddings with CNN. GN refers to Google News, LS to location specific embeddings, GT to Glove Twitter embeddings, and DS to our domain specific embeddings. A, L and O refer to Aggression, Loss, and Other respectively.", "num": null, "type_str": "table", "content": "
Embeddings Type | F1 (A) | F1 (L) | F1 (O) | Macro F1
GN | 27.9 | 66.6 | 86.9 | 60.5
AAE | 27.3 | 69.8 | 86.5 | 61.2
LS | 31.3 | 68.3 | 87.9 | 62.5
Random Init. | 29.3 | 70.5 | 88.9 | 62.9
GT | 29.0 | 71.1 | 89.0 | 63.0
DS | 37.9 | 73.4 | 90.3 | 67.2
", "html": null }, "TABREF2": { "text": "Comparison of different models. The below pairs of algorithms achieve statistical significance p < 0.002 for each class (the higher performing algorithm comes first): i) CNN-Context vs. CNN-DS; ii) CNN-DS vs. SVM-Retrained; iii) SVM-Context vs. SVM-DS. SVM-Context outperforms SVM-Retrained in the Aggression class by a robust margin (5 points).", "num": null, "type_str": "table", "content": "
Model | Aggression (P / R / F) | Loss (P / R / F) | Other (P / R / F) | Macro F1
SVM-Retrained (baseline) | 36.4 / 31.3 / 33.7 | 73.7 / 68.8 / 71.2 | 89.8 / 92.0 / 90.9 | 65.3
SVM-DS | 32.4 / 38.9 / 35.4 | 66.9 / 72.9 / 69.8 | 90.8 / 87.7 / 89.2 | 64.8
SVM-Context | 35.0 / 43.7 / 38.8 | 68.6 / 74.0 / 71.2 | 91.6 / 88.2 / 89.9 | 66.6
CNN-DS | 35.7 / 41.1 / 38.2 | 78.9 / 70.3 / 74.3 | 90.7 / 91.4 / 91.0 | 67.9
CNN-Context | 38.3 / 46.4 / 42.0 | 78.8 / 73.2 / 75.9 | 91.3 / 91.7 / 91.5 | 69.8
", "html": null } } } }